Technical Deep Dive
A plain-English explanation of the prompt engineering system behind AI Prompt Builder — written for hiring managers, not just engineers.
Instead of typing raw questions into an AI, this app automatically builds structured prompts that assign expert roles, inject your personal context, and define exactly how the AI should respond.
The Problem
Most people type a question directly into ChatGPT or Gemini. The result is usually "pretty good" — but rarely expert-level.
The Technology
Each layer solves a distinct problem that makes the final prompt dramatically more effective than a raw question.
Layer 1
Every prompt is assembled from 4 mandatory sections: Role, Context, Request, and Output Rules. The build_prompt() function dynamically constructs a prompt guaranteed to have all four — regardless of category or language. This is prompt engineering as software engineering: repeatable, testable, and version-controlled.
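A minimal sketch of that four-section assembly. The function name `build_prompt()` comes from the write-up; the section templates and signature here are illustrative placeholders, not the app's actual 1,700-line implementation.

```python
# Sketch of the four mandatory sections: Role, Context, Request, Output Rules.
# Template wording is invented for illustration.

ROLE_TEMPLATES = {
    "tech": "You are a senior software architect with deep production experience.",
    "recipe": "You are a professional chef who teaches home cooks.",
}

def build_prompt(category: str, request_text: str,
                 profile_context: str = "", output_rules: str = "") -> str:
    """Assemble a prompt guaranteed to contain all four sections."""
    sections = [
        ("Role", ROLE_TEMPLATES.get(category, "You are a domain expert.")),
        ("Context", profile_context or "No profile data available."),
        ("Request", request_text),
        ("Output Rules", output_rules or "Answer concisely in plain English."),
    ]
    # Every section renders even when its input is empty, so the structure
    # is identical across all categories and languages.
    return "\n\n".join(f"## {name}\n{body}" for name, body in sections)
```

Because the structure is fixed, a unit test can assert that all four headers are present for any input — which is what makes the prompt layer testable like ordinary software.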
1,700 lines of carefully crafted prompt logic

Layer 2
The app maintains a persistent user profile grouped by domain: common info, tech profile, recipe profile, study profile, and more. When a prompt is built, the relevant profile groups are automatically injected into the Context section — so the AI always knows who it's talking to.
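A sketch of how that injection can work. The function name `get_profile_for_category()` appears later in this write-up; the JSON file layout and the category-to-group mapping here are assumptions for illustration.

```python
import json
from pathlib import Path

# Hypothetical on-disk layout: one JSON file holding profile groups
# keyed by domain. The app's real storage schema is not shown here.
PROFILE_PATH = Path("profile.json")

# Which profile groups are relevant to each category (illustrative).
CATEGORY_GROUPS = {
    "tech": ["common", "tech"],
    "recipe": ["common", "recipe"],
    "study": ["common", "study"],
}

def get_profile_for_category(category: str) -> str:
    """Load only the profile groups relevant to this category and
    flatten them into text ready for the Context section."""
    profile = json.loads(PROFILE_PATH.read_text(encoding="utf-8"))
    groups = CATEGORY_GROUPS.get(category, ["common"])
    lines = []
    for group in groups:
        for key, value in profile.get(group, {}).items():
            lines.append(f"- {key}: {value}")
    return "\n".join(lines)
```

The key design point: only the groups relevant to the current category are injected, so a recipe prompt never carries your tech stack and vice versa.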
Context-aware AI without repeating yourself every time

Layer 3
The same prompt can be sent to Gemini or OpenAI with a single toggle. The abstraction layer handles provider selection, API key management, model selection, and error handling — all in one place. Switching models requires zero changes to the prompt logic.
Provider-agnostic LLM integration pattern

10 Expert Roles
Each category has a meticulously crafted expert persona with specific strengths and decision-making style.
The Architecture
What happens when a user submits a request — across Flask, the prompt engine, and the LLM API.
The vanilla JS frontend sends a POST request to the Flask REST API. Flask has 20+ endpoints — each handling a specific concern: prompt building, AI calls, history, favorites, presets, profile.
get_profile_for_category() loads the relevant profile groups from JSON storage and returns them for injection into the prompt context.
build_prompt() constructs a multi-section prompt: Role → Context → Instructions → Request → Output Rules. Each section is populated from template definitions, user inputs, and profile data.
ask_ai() reads the provider setting, retrieves the API key, and sends the completed prompt. The abstraction layer means the prompt engine works identically regardless of which LLM is selected.
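The three steps above compose into one short pipeline. Each function below is a miniature stand-in for the real implementation, kept here only to show the order of the calls.

```python
# End-to-end flow in miniature: profile -> prompt -> provider call.

def get_profile_for_category(category: str) -> str:
    profiles = {"tech": "- stack: Python/Flask"}   # stand-in for JSON storage
    return profiles.get(category, "")

def build_prompt(category: str, request_text: str, context: str) -> str:
    return (f"## Role\nYou are a {category} expert.\n\n"
            f"## Context\n{context}\n\n## Request\n{request_text}")

def ask_ai(prompt: str) -> str:
    return "[LLM response]"                        # stand-in for the API call

def handle_request(category: str, request_text: str) -> str:
    context = get_profile_for_category(category)             # 1. load profile
    prompt = build_prompt(category, request_text, context)   # 2. assemble
    return ask_ai(prompt)                                    # 3. send to provider
```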
Business Applications
This systematic approach to prompt engineering transforms generative AI from an unpredictable toy into a reliable enterprise tool.
Ensure every employee, regardless of their AI skill level, extracts expert-level, consistently formatted output from LLMs.
Automatically inject internal company guidelines, brand voice, or customer context into every query without manual typing.
Cut down the trial-and-error back-and-forth of chat sessions. Get the exact output needed in a single API call, saving both tokens and employee time.
Why It Matters
Most people treat prompts as one-off text. This project treats prompt construction as software: modular, testable, version-controlled, and reusable across 10 domains.
Flask REST API + vanilla JS + JSON persistence + Docker + Gunicorn. Every layer is hand-built without heavy frameworks — demonstrating core engineering fundamentals.
Provider abstraction, environment variable key management, client-side override, and graceful error handling — the same patterns used in production AI systems.
Built for daily personal use — UX decisions were driven by real friction, not imagined requirements. Presets, favorites, history, and profile persistence all exist because they were genuinely needed.
Explore
The app is live. About 2 minutes to get your first result.