I’m sure you’ve been there. You have an idea in your head, you write a prompt for an AI like ChatGPT or Gemini, and the result is… okay. It’s not quite right. So you tweak the prompt. Then you tweak it again. And again. Before you know it, you’ve spent an hour and a half in a back-and-forth conversation, trying to coax the right output from the machine.

As I’ve discussed before in my guide to Practical AI Prompting, a good prompt generally needs three things:
- a detailed request of what you want,
- a clear example of the desired result,
- and lots of context.
Assigning the AI a role, like “you are a digital marketing strategist” or “you are a non-fiction writer,” also makes a huge difference. When we give the AI a role, we’re helping it narrow down its vast knowledge to a specific perspective. It sets the right expectation. An AI without a role will often give you the most common, generic answer. A role helps it deliver more sophisticated, expert-level insights.
But even when we know these principles, the process can be frustrating. How do we know if our prompt is missing a crucial piece of context? This is where the real time gets spent: in trial and error.
A couple of days ago, I stumbled upon a brilliant solution on Reddit that reframes this entire process. Instead of trying to perfect the prompt ourselves, we can ask the AI to do it for us. This is where the idea of a simple “prompt agent” comes in.
What is a Prompt Agent?
Think of a prompt agent as a specialized AI personality you create with a single, detailed prompt. Its only job is to perform a specific task exceptionally well. In this case, its job is to take your rough idea and transform it into a perfectly optimized prompt. It’s a bit meta, but it’s incredibly effective. For a deeper dive into agents, you can check out my notes on what an AI agent is.
I found a prompt that creates an agent named “Lyra.” You give Lyra your basic request, and instead of executing it, Lyra deconstructs it, diagnoses its weaknesses, and rebuilds it into a master-level prompt. It even explains the changes and the reasoning behind them, which is a fantastic way to learn.
How It Works: The “Lyra” Prompt Agent

The prompt works by instructing the AI to follow a clear methodology. When you give it your initial idea, it will:
- Deconstruct: It figures out your core intent, what you’re asking for, and what information might be missing.
- Diagnose: It looks for gaps, ambiguity, or a lack of specificity that could lead to a poor result.
- Develop: It rebuilds your prompt using proven techniques, like assigning the perfect expert role, adding necessary constraints, or structuring it for clarity.
- Deliver: It provides you with the new, optimized prompt and explains what was changed and why.
It even asks you clarifying questions first if the request is complex, ensuring it fully understands your goal before creating the final prompt.
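As a purely illustrative aside (the Lyra prompt itself leaves this judgment to the model), the step that decides whether a request deserves clarifying questions first could be sketched as a simple heuristic. The thresholds and signals below are my own invention, not part of the original prompt:

```python
def pick_mode(request: str) -> str:
    """Illustrative heuristic only: the actual Lyra prompt lets the model
    judge complexity itself. Here, long or multi-part requests get DETAIL
    mode (clarifying questions first); short ones get BASIC mode."""
    words = request.split()
    multi_part = any(sep in request for sep in (",", ";", " and "))
    if len(words) > 15 or multi_part:
        return "DETAIL"  # gather context, ask 2-3 targeted questions
    return "BASIC"       # quick fix, deliver a ready-to-use prompt

print(pick_mode("Write me a marketing email"))
# BASIC
print(pick_mode("Draft a launch plan, landing page copy, and a pricing FAQ"))
# DETAIL
```

The point isn’t the specific rules; it’s that the agent routes simple requests to a fast path and complex ones through a context-gathering step before optimizing.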
The “Lyra” Prompt You Can Use
Here is the full prompt. You can use it to create your own “Lyra” agent whenever you need to get a high-quality result from an AI.
You are Lyra, a master-level AI prompt optimization specialist. Your mission: transform any user input into precision-crafted prompts that unlock AI’s full potential across all platforms.
## THE 4-D METHODOLOGY
### 1. DECONSTRUCT
- Extract core intent, key entities, and context
- Identify output requirements and constraints
- Map what's provided vs. what's missing
### 2. DIAGNOSE
- Audit for clarity gaps and ambiguity
- Check specificity and completeness
- Assess structure and complexity needs
### 3. DEVELOP
- Select optimal techniques based on request type:
  - **Creative** → Multi-perspective + tone emphasis
  - **Technical** → Constraint-based + precision focus
  - **Educational** → Few-shot examples + clear structure
  - **Complex** → Chain-of-thought + systematic frameworks
- Assign appropriate AI role/expertise
- Enhance context and implement logical structure
### 4. DELIVER
- Construct optimized prompt
- Format based on complexity
- Provide implementation guidance
## OPTIMIZATION TECHNIQUES
**Foundation:** Role assignment, context layering, output specs, task decomposition
**Advanced:** Chain-of-thought, few-shot learning, multi-perspective analysis, constraint optimization
**Platform Notes:**
- **ChatGPT/GPT-4:** Structured sections, conversation starters
- **Claude:** Longer context, reasoning frameworks
- **Gemini:** Creative tasks, comparative analysis
- **Others:** Apply universal best practices
## OPERATING MODES
**DETAIL MODE:**
- Gather context with smart defaults
- Ask 2-3 targeted clarifying questions
- Provide comprehensive optimization
**BASIC MODE:**
- Quick fix primary issues
- Apply core techniques only
- Deliver ready-to-use prompt
## RESPONSE FORMATS
**Simple Requests:**
Your Optimized Prompt: [Improved prompt]
What Changed: [Key improvements]
**Complex Requests:**
Your Optimized Prompt: [Improved prompt]
Key Improvements: • [Primary changes and benefits]
Techniques Applied: [Brief mention]
Pro Tip: [Usage guidance]
## WELCOME MESSAGE (REQUIRED)
When activated, display EXACTLY:
"Hello! I'm Lyra, your AI prompt optimizer. I transform vague requests into precise, effective prompts that deliver better results.
**What I need to know:**
- **Target AI:** ChatGPT, Claude, Gemini, or Other
- **Prompt Style:** DETAIL (I'll ask clarifying questions first) or BASIC (quick optimization)
**Examples:**
- "DETAIL using ChatGPT — Write me a marketing email"
- "BASIC using Claude — Help with my resume"
Just share your rough prompt and I'll handle the optimization!"
## PROCESSING FLOW
1. Auto-detect complexity:
   - Simple tasks → BASIC mode
   - Complex/professional → DETAIL mode
2. Inform user with override option
3. Execute chosen mode protocol
4. Deliver optimized prompt
**Memory Note:** Do not save any information from optimization sessions to memory.
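If you’d rather bake Lyra into a script than paste her into a chat window, the whole prompt works as a system message. Here is a minimal sketch assuming the OpenAI Python SDK (`pip install openai`, v1 client) with an `OPENAI_API_KEY` in the environment; the file name `lyra_prompt.txt` and the model choice are my own placeholders, not from the original post:

```python
def build_messages(lyra_prompt: str, target_ai: str, mode: str,
                   rough_prompt: str) -> list[dict]:
    """Pair the Lyra system prompt with a user request in the
    'MODE using TARGET — request' format the welcome message expects."""
    return [
        {"role": "system", "content": lyra_prompt},
        {"role": "user", "content": f"{mode} using {target_ai} — {rough_prompt}"},
    ]

def optimize(rough_prompt: str, target_ai: str = "ChatGPT",
             mode: str = "BASIC") -> str:
    """Send the request through Lyra and return the optimized prompt."""
    from openai import OpenAI  # deferred so build_messages has no dependencies

    lyra = open("lyra_prompt.txt").read()  # placeholder file holding the prompt
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model works
        messages=build_messages(lyra, target_ai, mode, rough_prompt),
    )
    return reply.choices[0].message.content
```

For example, `optimize("Write me a marketing email")` would return Lyra’s rewritten prompt rather than the email itself; you then feed that prompt to your target AI.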
Key Takeaways
- Getting great results from AI often involves a lot of trial-and-error in prompt writing.
- The three pillars of a good prompt are a detailed request, a clear example, and sufficient context.
- Using a “prompt agent” like Lyra can automate the optimization process, saving you time.
- This approach teaches you to be a better prompt engineer by showing you the “why” behind the improvements.
Frequently Asked Questions (FAQ)
- Why is assigning a role to an AI important? Assigning a role (e.g., “You are an SEO specialist”) helps the AI focus its vast knowledge on a specific domain. This leads to more detailed, expert-level responses instead of generic, common-sense answers. It sets the perspective and expectation for the output.
- What is a “prompt agent”? A prompt agent is a specialized AI assistant created with a single, highly-detailed master prompt. Its purpose is to execute a very specific task, in this case, optimizing other prompts. It’s a simple yet powerful way to create a consistent and reliable AI tool.
- Can I use this prompt optimizer for any AI model like ChatGPT or Gemini? Yes. The Lyra prompt is designed to be platform-agnostic, though it includes notes on how to best leverage the strengths of specific models like ChatGPT, Claude, and Gemini. Just tell it your target AI when you start.
By using a tool like this, you shift from guessing to a structured, repeatable process. Give it a try on your next complex request. You’ll not only get better results but also become a more effective AI user in the long run.