Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results

Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one. Here are the three key components of an effective agentic prompt:

1. Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely.
Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved."

2. Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing.
Example: "If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers."

3. Planning: Prompt it to plan before actions and reflect afterward. Prevent reactive tool calls with no strategy.
Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving."

I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability.

Takeaway: Agentic prompting turns a passive assistant into an active problem solver. The difference is in the details. Are you using these techniques in your prompts? I would love to hear what's working for you; leave a comment, or let's connect!

#PromptEngineering #AgenticPrompting #LLM #AIWorkflow
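For readers who want to wire these three instructions into an actual agent, here is a minimal sketch of how they might be combined into one system prompt. It assumes the OpenAI Python SDK purely for illustration (any chat-completions-style client works the same way); the model name and user request are placeholders, not part of the original post.

```python
from openai import OpenAI  # assumed client; any chat-completions-style SDK works

# The three agentic-prompt components from the post, joined into one system prompt.
SYSTEM_PROMPT = "\n\n".join([
    # Persistence: don't yield control prematurely.
    "You are an agent; please continue working until the user's query is "
    "resolved. Only terminate your turn when you are certain the problem is solved.",
    # Tool usage: prefer tools over guessing.
    "If you're unsure about file content or codebase structure related to the "
    "user's request, use your tools to read files and gather the necessary "
    "information. Do not guess or fabricate answers.",
    # Planning: plan before each call, reflect after it.
    "You must plan extensively before each function call and reflect on the "
    "outcomes of previous calls. Avoid completing the task solely through a "
    "sequence of function calls, as this can hinder insightful problem-solving.",
])

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever you use
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Fix the failing test in the auth module"},  # placeholder task
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is that persistence, tool usage, and planning live in the system prompt, so every turn of the agent inherits them.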
How to Optimize Prompts for Improved Outcomes
Explore top LinkedIn content from expert professionals.
Summary
Unlock better results from AI systems by learning how to craft prompts that guide and refine their responses effectively, ensuring clear, actionable outcomes tailored to your needs.
- Define clear expectations: Use specific, structured language to set detailed requirements (like examples, context, and desired outcomes) so the AI can deliver focused and accurate results.
- Iterate and refine: Review initial responses critically, provide constructive feedback, and prompt the AI for revisions to achieve more polished and reliable outputs.
- Encourage critical thinking: Instruct the AI to follow step-by-step problem-solving approaches, reflect on its outputs, or ask clarifying questions before providing solutions.
-
Remember Amelia Bedelia? The children's book character who, when asked to "draw the drapes," literally sketched a picture of curtains instead of closing them? That's exactly how AI sourcing tools work, and why most recruiters get terrible results from them.

Like Amelia, AI takes your instructions literally. It can't always:
- Read between the lines
- Make judgment calls
- Infer meaning from context
- Understand subjective qualities

This is why vague prompts like "find experienced backend developers" or "source high-performing sales leaders" return garbage candidates. So here's how to write prompts that actually work:

BAD PROMPT: "Find backend developers with:
- Strong leadership skills
- Extensive cloud experience
- Proven track record of success"

GOOD PROMPT: "Find backend developers who:
- Led engineering teams of 5+ people for at least 2 years
- Deployed and managed cloud infrastructure on AWS/Azure with budgets over $100K
- Scaled systems handling 1M+ daily active users
- Reduced infrastructure costs by at least 20% through optimization
- Contributed to 3+ open source projects in the last 18 months"

The difference? The second prompt gives AI concrete, measurable criteria it can evaluate from candidate profiles.

3 rules I follow for every AI sourcing prompt:
1. Replace subjective qualities with objective metrics: "Leadership skills" → "Managed X people for Y years with Z outcomes"
2. Clarify work history: "5 years of experience" → "X+ years as a financial analyst at a multi-national corporation"
3. Quantify impact wherever possible: "Experienced in sales" → "X+ years of experience in SaaS sales with a consistent track record of exceeding quotas by YY%."

PS: If you want to dive deeper into this, the Gem team dropped our complete playbook on mastering AI sourcing. Read it now here: https://lnkd.in/gSXeHCfV
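As a small, hedged illustration of rule 1 (objective metrics over subjective qualities), the sketch below assembles a sourcing prompt from a list of measurable criteria. The helper name and criteria list are made up for illustration and are not part of Gem's playbook.

```python
# Hypothetical helper: turn measurable criteria into a sourcing prompt.
def build_sourcing_prompt(role: str, criteria: list[str]) -> str:
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return f"Find {role} who:\n{bullet_list}"

prompt = build_sourcing_prompt(
    "backend developers",
    [
        "Led engineering teams of 5+ people for at least 2 years",
        "Deployed and managed cloud infrastructure on AWS/Azure with budgets over $100K",
        "Scaled systems handling 1M+ daily active users",
    ],
)
print(prompt)  # paste into your AI sourcing tool of choice
```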
-
These 3 AI prompts save me 6 hours every week. Copy them:

1. THE SOCRATIC DEBUGGER
Instead of asking AI for answers, make it ask YOU the right questions first:
"I have a problem with {{problem_description}}. Before you provide a solution, ask me 5 clarifying questions that will help you understand:
1. The full context
2. What I've already tried
3. Constraints I'm working with
4. The ideal outcome
5. Any edge cases I should consider
After I answer, provide your solution with confidence levels for each part."
Why this works: Forces you to think through the REAL problem before diving into solutions.

2. THE CONFIDENCE INTERVAL ESTIMATOR
Kill your planning paralysis with brutal honesty:
"I need to {{task_description}}. Provide:
1. A detailed plan with specific steps
2. For each step, give a confidence interval (e.g., '85-95% confident this will work')
3. Highlight which parts are most uncertain and why
4. Suggest how to validate the uncertain parts
5. Overall project confidence level
Be brutally honest about what could go wrong."
Why this works: Surfaces hidden risks BEFORE they blow up your timeline.

3. THE CLARITY TEACHER
Turn any complex topic into crystal-clear understanding:
"Explain {{complex_concept}} to me. Start with:
1. A one-sentence ELI5 explanation
2. Then a paragraph with more detail
3. Then the technical explanation
4. Common misconceptions to avoid
5. A practical example I can try right now
After each level, ask if I need more detail before proceeding."
Why this works: Builds understanding layer by layer instead of info-dumping.

The breakthrough wasn't finding better AI tools. It was learning to ask better questions. These 3 prompts alone saved me 6 hours last week. And they compound. The more you use them, the faster you get.

(I maintain a vault of 25+ battle-tested prompts like these, adding 5-10 weekly based on what actually works in production.)

What repetitive task is killing YOUR productivity right now? Drop it below. I might have a prompt that helps.
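If you reuse prompts like these programmatically, the {{placeholder}} style maps directly onto string templates. Here is a minimal sketch using Python's str.format, so single braces stand in for the post's double-brace placeholders; the example problem description is invented for illustration.

```python
# A minimal prompt-template approach for the placeholder-style prompts above.
SOCRATIC_DEBUGGER = (
    "I have a problem with {problem_description}. Before you provide a solution, "
    "ask me 5 clarifying questions that will help you understand:\n"
    "1. The full context\n"
    "2. What I've already tried\n"
    "3. Constraints I'm working with\n"
    "4. The ideal outcome\n"
    "5. Any edge cases I should consider\n"
    "After I answer, provide your solution with confidence levels for each part."
)

# Fill the placeholder, then send the result to whatever chat model you use.
prompt = SOCRATIC_DEBUGGER.format(
    problem_description="intermittent 502 errors behind our nginx reverse proxy"
)
print(prompt)
```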
-
I've been getting a ton of DMs and comments on my last post asking about the AI prompt I used to build an executive hiring assessment and interview plan. Not surprising: executive search is still broken in a lot of places. Too many interviews rely on instinct, vague questions, or outdated playbooks. So instead of sending my prompt to everyone individually, I figured I'd walk through how to build one yourself.

Most prompts fall flat because they're too shallow. A strong AI prompt for executive hiring should be treated like an API request: structured, layered, and loaded with context. Here's how to build one that actually works:

1. Start with Context
Give the AI business and hiring context, not just the role title. "You're designing an interview plan for a Series B SaaS company hiring a VP of Marketing to scale pipeline efficiency and launch EMEA."

2. Provide Source Material
Feed the AI what you already have. Include:
- Job description
- Job ad
- Talent Ladder
- Company values
- Success profiles or leveling framework
- Team charter or hiring manager notes
- Previous document structures that have worked in the past
- Anything that hasn't worked in the past
Label each section clearly. The more signal you give, the more signal you get.

3. Define the Output Format
Tell the AI what you want. Example: "Build a 5-stage interview loop. For each stage, include:
- Interview title and purpose
- Core competencies
- Suggested questions (behavioral + scenario-based)
- Evaluation criteria
- Interviewer profile
- Timebox in minutes"
Structure = clarity.

4. Add Constraints + Nuance
Make the AI smarter by telling it what to avoid and what to emphasize. "Avoid generic leadership questions. Emphasize org design, systems thinking, and cross-functional collaboration. Deprioritize 'culture fit' in favor of 'culture add.'"

5. Layer in Iteration
AI isn't one-and-done. After the first output, ask:
"Now revise this plan to reflect our values more clearly."
"Make this inclusive of candidates from non-traditional backgrounds."
"Sharpen the rubric for strategic thinking."

Bonus tip: Once you've dialed in your prompt structure, try formatting it in YAML. It makes the logic cleaner, easier to reuse, and more adaptable for tools like Gemini or ChatGPT.

If you build your prompt with this kind of structure, you'll stop getting interview plans that look smart and start getting ones that actually work. And yes, AI can do incredible things, but only if you feed it the right ingredients.

Have questions or want to troubleshoot your own prompt design? Happy to trade notes in the comments.

#executivesearch #promptengineering #AI #talentacquisition #recruiting
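The YAML bonus tip is easy to try. Below is a hedged sketch of what the layered prompt might look like rendered as YAML with PyYAML; the field names are illustrative rather than a standard schema, and the values are taken from the examples above.

```python
import yaml  # PyYAML

# Illustrative structure for the layered prompt; field names are made up.
prompt_spec = {
    "context": "Series B SaaS company hiring a VP of Marketing to scale pipeline efficiency and launch EMEA",
    "source_material": ["job description", "company values", "success profiles", "hiring manager notes"],
    "output_format": {
        "stages": 5,
        "per_stage": ["title and purpose", "core competencies", "suggested questions",
                      "evaluation criteria", "interviewer profile", "timebox in minutes"],
    },
    "constraints": ["avoid generic leadership questions",
                    "emphasize org design, systems thinking, cross-functional collaboration",
                    "prefer 'culture add' over 'culture fit'"],
}

# Paste the rendered YAML into Gemini or ChatGPT as the body of your prompt.
print(yaml.safe_dump(prompt_spec, sort_keys=False))
```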
-
Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning, and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output. Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains.

You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

"Here's code intended for task X: [previously generated code]
Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it."

Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements.

This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks, including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases, or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications' results. If you're interested in learning more about Reflection, I recommend:
- Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
- Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
- CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

[Original text: https://lnkd.in/g4bTuWtU ]
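Here is a minimal sketch of the generate, critique, rewrite loop described above, assuming the OpenAI Python SDK (any chat-completions-style client would work the same way). The task, prompt wording, and number of iterations are illustrative choices, not prescriptions from the post.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # assumed model name

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

task = "Write a Python function that merges two sorted lists into one sorted list."
code = ask(task)  # first draft

for _ in range(2):  # a couple of reflection rounds is usually enough
    # Critique step: ask the model to review its own output.
    critique = ask(
        f"Here is code intended for this task: {task}\n\n{code}\n\n"
        "Check the code carefully for correctness, style, and efficiency, "
        "and give constructive criticism for how to improve it."
    )
    # Rewrite step: feed back the previous code plus the critique.
    code = ask(
        f"Task: {task}\n\nPrevious code:\n{code}\n\n"
        f"Feedback:\n{critique}\n\n"
        "Rewrite the code, addressing the feedback."
    )

print(code)
```

As the post notes, feeding the critique step real signals, such as unit-test results, tends to make the feedback more grounded than pure self-criticism.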
-
In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs' performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google's Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques.

Yet, while our X and LinkedIn feeds buzz with "secret prompting tips", a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y

In this AI Tidbits Deep Dive, I outline six of the best recent prompting methods:
(1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements
(2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the "Take a deep breath" instruction that improved LLMs' performance by 9%.
(3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy
(4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM
(5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning
(6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy

Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential.

Full blog post: https://lnkd.in/g7_6eP6y
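To make one of these concrete, here is a rough sketch of Step-Back Prompting as summarized above: first ask the model for the underlying principle, then answer the original question with that abstraction in context. The prompt wording is my own paraphrase rather than the paper's, and the OpenAI SDK and model name are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

question = "Why does water boil at a lower temperature at high altitude?"

# Step 1: abstract the question into the underlying principle (the "step back").
principle = ask(
    f"Before answering, state the general principle or concept behind this question: {question}"
)

# Step 2: answer the original question grounded in that abstraction.
answer = ask(
    f"General principle: {principle}\n\n"
    f"Using this principle, answer the original question: {question}"
)
print(answer)
```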
-
You're doing it. I'm doing it. Your friends are doing it. Even the leaders who deny it are doing it. Everyone's experimenting with AI. But I keep hearing the same complaint: "It's not as game-changing as I thought."

If AI is so powerful, why isn't it doing more of your work? The #1 obstacle keeping you and your team from getting more out of AI? You're not bossing it around enough.

AI doesn't get tired and it doesn't push back. It doesn't give you a side-eye when at 11:45 pm you demand seven rewrite options to compare while snacking in your bathrobe. Yet most people give it maybe one round of feedback, then complain it's "meh." The best AI users? They iterate. They refine. They make AI work for them. Here's how:

1. Tweak AI's basic settings so it sounds like you
AI-generated text can feel robotic or too formal. Fix that by teaching it your style from the start.
Prompt: "Analyze the writing style below (tone, sentence structure, and word choice) and use it for all future responses." (Paste a few of your own posts or emails.)
Then take the response and add it to Settings → Personalization → Custom Instructions.

2. Strip Out the Jargon
Don't let AI spew corporate-speak.
Prompt: "Rewrite this so a smart high schooler could understand it: no buzzwords, no filler, just clear, compelling language." or "Use human, ultra-clear language that's straightforward and passes an AI detection test."

3. Give It a Solid Outline
AI thrives on structure. Instead of "Write me a whitepaper," start with bullet points or a rough outline.
Prompt: "Here's my outline. Turn it into a first draft with strong examples, a compelling narrative, and clear takeaways."
Even better? Record yourself explaining your idea; paste the transcript so AI can capture your authentic voice.

4. Be Brutally Honest
If the output feels off, don't sugarcoat it.
Prompt: "You're too cheesy. Make this sound like a Fortune 500 executive wrote it." or "Identify all weak, repetitive, or unclear text in this post and suggest stronger alternatives."

5. Give It a Tough Crowd
Polished isn't enough; sometimes you need pushback.
Prompt: "Pretend you're a skeptical CFO who thinks this idea is a waste of money. Rewrite it to persuade them." or "Act as a no-nonsense VC who doesn't buy this pitch. Ask 5 hard questions that make me rethink my strategy."

6. Flip the Script: Let AI Interview You
Sometimes the best answers come from sharper questions.
Prompt: "You're a seasoned journalist interviewing me on this topic. Ask thoughtful follow-ups to surface my best thinking."
This back-and-forth helps refine your ideas before you even start writing.

The Bottom Line: AI isn't the bottleneck; we are. If you don't push it, you'll keep getting mediocrity. But if you treat AI like a tireless assistant that thrives on feedback? You'll unlock content and insights that truly move the needle. Once you work this way, there's no going back.
-
Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions:

- Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often.
- Debug: Provide detailed context for errors: error messages, code snippets, and what you've tried.
- Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base.
- Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses.
- Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid.
- Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI.
- Show: Reduce ambiguity with concrete examples: code samples, desired outputs, data formats, or mockups.
- Simplify: Use clear, direct language. Break down complexity and avoid jargon.
- Specify: Define exact requirements: expected outputs, constraints, data formats, edge cases.
- Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer.

By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
-
Are humans 5X better than AI? This paper is blowing up (not in a good way).

The recent study claims LLMs are 5x less accurate than humans at summarizing scientific research. That's a bold claim. But maybe it's not the model that's off. Maybe it's the AI strategy, system, prompt, data... What's your secret sauce for getting the most out of an LLM?

Scientific summarization is dense, domain-specific, and context-heavy. And evaluating accuracy in this space? That's not simple either. So just because a general-purpose LLM is struggling with a Turing-style test doesn't mean it can't do better. Is it just how they're using it? I think it's short-sighted to drop a complex task into an LLM and expect expert results without expert setup. To get better answers, you need a better AI strategy, system, and deployment.

Some tips and tricks we find helpful:

1. Start small and be intentional. Don't just upload a paper and say "summarize this." Define the structure, tone, and scope you want. Try prompts like: "List three key findings in plain language, and include one real-world implication for each." The clearer your expectations, the better the output.

2. Test. Build in a feedback loop from the beginning. Ask the model what might be missing from the summary, or how confident it is in the output. Compare responses to expert-written summaries or benchmark examples. If the model can't handle tasks where the answers are known, it's not ready for tasks where they're not.

3. Tweak. Refine everything: prompts, data, logic. Add retrieval grounding so the model pulls from trusted sources instead of guessing. Fine-tune with domain-specific examples to improve accuracy and reduce noise. Experiment with prompt variations and analyze how the answers change. Tuning isn't just technical; it's iterative alignment between output and expectation. (Spoiler alert: you might be at this stage for a while.)

4. Repeat. Every new domain, dataset, or objective requires a fresh approach. LLMs don't self-correct across contexts, but your workflow can. Build reusable templates. Create consistent evaluation criteria. Track what works, version your changes, and keep refining. Improving LLM performance isn't one-and-done. It's a cycle.

Finally: If you treat a language model like a magic button, it's going to kill the rabbit in the hat. If you treat it like a system you deploy, test, tweak, and evolve, it can have magic bunnies flying everywhere.

Q: How are you using LLMs to improve workflows? Have you tried domain-specific data? Would love to hear your approaches in the comments.
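As one concrete version of the "Test" step above, the hedged sketch below asks for a structured summary and then runs a feedback pass in which the model flags gaps and states its confidence. The file name, prompts, and model are illustrative assumptions, not part of the study or the post.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumed model name

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

paper_text = open("paper.txt").read()  # the scientific paper to summarize (placeholder file)

# Intentional, structured ask instead of a bare "summarize this".
summary = ask(
    "List three key findings from the paper below in plain language, and include "
    f"one real-world implication for each.\n\n{paper_text}"
)

# Feedback loop: have the model flag gaps and state its confidence,
# then compare the result against an expert-written benchmark summary.
review = ask(
    f"Paper:\n{paper_text}\n\nSummary:\n{summary}\n\n"
    "What important findings or caveats are missing from this summary? "
    "Rate your confidence in the summary from 1 to 10 and explain why."
)
print(summary, "\n---\n", review)
```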
-
Anthropic dropped the best free masterclass on prompt engineering. Here's what you'll learn in 9 chapters:

1. Structure better prompts
- Always start with the intent: "Summarize this article in 5 bullet points for a beginner" is 10x better than "Summarize this."
- Use instruction-first phrasing; the model performs best when it knows exactly what you want upfront.

2. Be clear + direct
- Avoid open-ended ambiguity. Instead of "Tell me about success," ask "List 3 traits successful startup founders share."
- Use active voice, fewer adjectives, and always define vague terms.

3. Assign the right "role"
- Start with "You are a [role]"; this frames the model's mindset. Example: "You are a skeptical investor evaluating a pitch."
- Roles unlock tone, precision, and even memory, especially in multi-turn chats.

4. Think step by step (precondition prompts)
- Ask the model to plan before it answers: "First, list your steps. Then, perform them one by one."
- This dramatically improves accuracy and reduces hallucinations in complex tasks.

5. Avoid hallucinations
- Anchor the model with clear boundaries: "Only answer if the input contains [x]. Otherwise, respond: 'Insufficient data.'"
- Reduce creativity in factual tasks. E.g., "Be concise. Don't assume."

6. Build complex prompts (with reusable patterns)
- Use modular blocks: context → instruction → format → examples.
- Build a personal prompt library by saving and refining your best-performing prompts over time.

It's not just "how to prompt better." It's a full-on skill upgrade. Interactive. Structured. Free. Share this with anyone still writing 1-line prompts.

Image: Hesamation
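The modular-blocks pattern in chapter 6 (context → instruction → format → examples) can be assembled mechanically. Here is a small sketch; the block contents are invented for illustration and are not taken from Anthropic's course.

```python
# Modular prompt blocks: context -> instruction -> format -> examples.
blocks = {
    "context": "You are a skeptical investor evaluating a startup pitch.",
    "instruction": "List 3 traits successful startup founders share, based only on the pitch below.",
    "format": "Answer as 3 numbered bullet points, one sentence each. "
              "If the pitch lacks the information, respond: 'Insufficient data.'",
    "examples": "Example output:\n1. ...\n2. ...\n3. ...",
}

# Join the blocks with labeled headers so the structure stays visible in the prompt.
prompt = "\n\n".join(f"[{name.upper()}]\n{text}" for name, text in blocks.items())
print(prompt)  # reuse and refine this template as your personal prompt library grows
```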