How to Optimize Prompts for Improved Outcomes

Explore top LinkedIn content from expert professionals.

Summary

Unlock better results from AI systems by learning how to craft prompts that guide and refine their responses effectively, ensuring clear, actionable outcomes tailored to your needs.

  • Define clear expectations: Use specific, structured language to set detailed requirements—like examples, context, and desired outcomes—so the AI can deliver focused and accurate results.
  • Iterate and refine: Review initial responses critically, provide constructive feedback, and prompt the AI for revisions to achieve more polished and reliable outputs.
  • Encourage critical thinking: Instruct the AI to follow step-by-step problem-solving approaches, reflect on its outputs, or ask clarifying questions before providing solutions.
Summarized by AI based on LinkedIn member posts
  • Kashif M.

    VP of Technology | CTO | GenAI • Cloud • SaaS • FinOps • M&A | Board & C-Suite Advisor

    4,084 followers

    🧠 Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results

    Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one. Here are the three key components of an effective agentic prompt:

    Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely.
    🧾 Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved."

    🧰 Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing.
    🧾 Example: "If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers."

    🧠 Planning: Prompt it to plan before actions and reflect afterward. Prevent reactive tool calls with no strategy.
    🧾 Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving."

    I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability.

    👉 Takeaway: Agentic prompting turns a passive assistant into an active problem solver. The difference is in the details.

    Are you using these techniques in your prompts? I would love to hear what's working for you; leave a comment, or let's connect!

    #PromptEngineering #AgenticPrompting #LLM #AIWorkflow
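
    To make the format concrete, here is a minimal sketch of how the three components above might be stitched into a single system prompt in Python. The build_agent_system_prompt helper and the exact wording of each block are illustrative assumptions, not the author's template.

        # Sketch: composing an agentic system prompt from the three components above.
        # The helper name and exact wording are illustrative assumptions.

        PERSISTENCE = (
            "You are an agent; please continue working until the user's query is "
            "resolved. Only terminate your turn when you are certain the problem is solved."
        )

        TOOL_USAGE = (
            "If you're unsure about file content or codebase structure, use your tools "
            "to read files and gather the necessary information. Do not guess or fabricate answers."
        )

        PLANNING = (
            "You must plan extensively before each function call and reflect on the "
            "outcomes of previous calls. Do not complete the task solely through a "
            "sequence of function calls."
        )

        def build_agent_system_prompt(task: str) -> str:
            """Combine persistence, tool-usage, and planning instructions with the task."""
            return "\n\n".join([PERSISTENCE, TOOL_USAGE, PLANNING, f"Task: {task}"])

        print(build_agent_system_prompt("Resolve the failing unit tests in this repository."))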

  • Steve Bartel

    Founder & CEO of Gem ($150M Accel, Greylock, ICONIQ, Sapphire, Meritech, YC) | Author of startuphiring101.com

    31,077 followers

    Remember Amelia Bedelia? The children's book character who, when asked to "draw the drapes," literally sketched a picture of curtains instead of closing them? That's exactly how AI sourcing tools work, and why most recruiters get terrible results from them.

    Like Amelia, AI takes your instructions literally. It can't always:
    - Read between the lines
    - Make judgment calls
    - Infer meaning from context
    - Understand subjective qualities

    This is why vague prompts like "find experienced backend developers" or "source high-performing sales leaders" return garbage candidates. So here’s how to write prompts that actually work:

    BAD PROMPT: "Find backend developers with:
    - Strong leadership skills
    - Extensive cloud experience
    - Proven track record of success"

    GOOD PROMPT: "Find backend developers who:
    - Led engineering teams of 5+ people for at least 2 years
    - Deployed and managed cloud infrastructure on AWS/Azure with budgets over $100K
    - Scaled systems handling 1M+ daily active users
    - Reduced infrastructure costs by at least 20% through optimization
    - Contributed to 3+ open source projects in the last 18 months"

    The difference? The second prompt gives AI concrete, measurable criteria it can evaluate from candidate profiles.

    3 rules I follow for every AI sourcing prompt:
    1. Replace subjective qualities with objective metrics: "Leadership skills" → "Managed X people for Y years with Z outcomes"
    2. Clarify work history: "5 years of experience" → "X+ years as a financial analyst at a multi-national corporation"
    3. Quantify impact wherever possible: "Experienced in sales" → "X+ years of experience in SaaS sales with a consistent track record of exceeding quotas by YY%."

    PS: If you want to dive deeper into this, the Gem team dropped our complete playbook on mastering AI sourcing. Read it now here: https://lnkd.in/gSXeHCfV
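
    As a rough sketch of the "objective metrics" idea, the snippet below renders structured, measurable criteria into a sourcing prompt. The criteria fields and the build_sourcing_prompt helper are hypothetical examples, not Gem's actual prompt format.

        # Sketch: rendering measurable criteria into a sourcing prompt.
        # Field names and example values are hypothetical.

        criteria = {
            "role": "backend developer",
            "leadership": "led engineering teams of 5+ people for at least 2 years",
            "cloud": "deployed and managed AWS/Azure infrastructure with budgets over $100K",
            "scale": "scaled systems handling 1M+ daily active users",
            "impact": "reduced infrastructure costs by at least 20% through optimization",
        }

        def build_sourcing_prompt(spec: dict) -> str:
            """Turn concrete, measurable requirements into a bulleted sourcing prompt."""
            bullets = "\n".join(f"- {value}" for key, value in spec.items() if key != "role")
            return f"Find {spec['role']}s who:\n{bullets}"

        print(build_sourcing_prompt(criteria))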

  • Tyler Folkman

    Chief AI Officer at JobNimbus | Building AI that solves real problems | 10+ years scaling AI products

    17,641 followers

    These 3 AI prompts save me 6 hours every week. Copy them:

    🧠 THE SOCRATIC DEBUGGER
    Instead of asking AI for answers, make it ask YOU the right questions first:
    "I have a problem with {{problem_description}}. Before you provide a solution, ask me 5 clarifying questions that will help you understand:
    1. The full context
    2. What I've already tried
    3. Constraints I'm working with
    4. The ideal outcome
    5. Any edge cases I should consider
    After I answer, provide your solution with confidence levels for each part."
    Why this works: Forces you to think through the REAL problem before diving into solutions.

    📊 THE CONFIDENCE INTERVAL ESTIMATOR
    Kill your planning paralysis with brutal honesty:
    "I need to {{task_description}}. Provide:
    1. A detailed plan with specific steps
    2. For each step, give a confidence interval (e.g., '85-95% confident this will work')
    3. Highlight which parts are most uncertain and why
    4. Suggest how to validate the uncertain parts
    5. Overall project confidence level
    Be brutally honest about what could go wrong."
    Why this works: Surfaces hidden risks BEFORE they blow up your timeline.

    👨‍🏫 THE CLARITY TEACHER
    Turn any complex topic into crystal-clear understanding:
    "Explain {{complex_concept}} to me. Start with:
    1. A one-sentence ELI5 explanation
    2. Then a paragraph with more detail
    3. Then the technical explanation
    4. Common misconceptions to avoid
    5. A practical example I can try right now
    After each level, ask if I need more detail before proceeding."
    Why this works: Builds understanding layer by layer instead of info-dumping.

    The breakthrough wasn't finding better AI tools. It was learning to ask better questions. These 3 prompts alone saved me 6 hours last week. And they compound. The more you use them, the faster you get.

    (I maintain a vault of 25+ battle-tested prompts like these, adding 5-10 weekly based on what actually works in production)

    What repetitive task is killing YOUR productivity right now? Drop it below. I might have a prompt that helps 👇
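
    The {{placeholder}} syntax in these templates is easy to reuse programmatically. A minimal sketch, assuming you keep the Socratic Debugger as a stored template and fill it per task (the fill helper is illustrative):

        # Sketch: reusing the Socratic Debugger by filling its {{placeholder}}.

        SOCRATIC_DEBUGGER = (
            "I have a problem with {{problem_description}}. Before you provide a solution,\n"
            "ask me 5 clarifying questions that will help you understand:\n"
            "1. The full context\n"
            "2. What I've already tried\n"
            "3. Constraints I'm working with\n"
            "4. The ideal outcome\n"
            "5. Any edge cases I should consider\n"
            "After I answer, provide your solution with confidence levels for each part."
        )

        def fill(template: str, **values: str) -> str:
            """Replace each {{name}} placeholder with its value."""
            for name, value in values.items():
                template = template.replace("{{" + name + "}}", value)
            return template

        print(fill(SOCRATIC_DEBUGGER, problem_description="a flaky integration test in CI"))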

  • Michael Brown

    CEO, Door3 Talent | AI Talent Strategist & Fractional TA Leader | Building AI Systems that Drive Real Impact

    43,729 followers

    I’ve been getting a ton of DMs and comments on my last post asking about the AI prompt I used to build an executive hiring assessment and interview plan. Not surprising, executive search is still broken in a lot of places. Too many interviews rely on instinct, vague questions, or outdated playbooks. So instead of sending my prompt to everyone individually, I figured I’d walk through how to build one yourself.

    Most prompts fall flat because they’re too shallow. A strong AI prompt for executive hiring should be treated like an API request: structured, layered, and loaded with context. Here’s how to build one that actually works:

    1. Start with Context
    Give the AI business and hiring context, not just the role title. “You’re designing an interview plan for a Series B SaaS company hiring a VP of Marketing to scale pipeline efficiency and launch EMEA.”

    2. Provide Source Material
    Feed the AI what you already have. Include:
    - Job description
    - Job ad
    - Talent Ladder
    - Company values
    - Success profiles or leveling framework
    - Team charter or hiring manager notes
    - Previous document structures that have worked in the past
    - Anything that hasn’t worked in the past
    Label each section clearly. The more signal you give, the more signal you get.

    3. Define the Output Format
    Tell the AI what you want. Example: “Build a 5-stage interview loop. For each stage, include:
    - Interview title and purpose
    - Core competencies
    - Suggested questions (behavioral + scenario-based)
    - Evaluation criteria
    - Interviewer profile
    - Timebox in minutes”
    Structure = clarity.

    4. Add Constraints + Nuance
    Make the AI smarter by telling it what to avoid and what to emphasize. “Avoid generic leadership questions. Emphasize org design, systems thinking, and cross-functional collaboration. Deprioritize ‘culture fit’ in favor of ‘culture add.’”

    5. Layer in Iteration
    AI isn’t one-and-done. After the first output, ask:
    “Now revise this plan to reflect our values more clearly.”
    “Make this inclusive of candidates from non-traditional backgrounds.”
    “Sharpen the rubric for strategic thinking.”

    Bonus tip: Once you’ve dialed in your prompt structure, try formatting it in YAML. It makes the logic cleaner, easier to reuse, and more adaptable for tools like Gemini or ChatGPT.

    If you build your prompt with this kind of structure, you’ll stop getting interview plans that look smart and start getting ones that actually work. And yes, AI can do incredible things 💜 but only if you feed it the right ingredients.

    Have questions or want to troubleshoot your own prompt design? Happy to trade notes in the comments.

    #executivesearch #promptengineering #AI #talentacquisition #recruiting
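
    For the YAML bonus tip, here is one possible way to express the five layers (context, source material, output format, constraints, iteration) as structured data. The field names are assumptions, not a required schema, and the snippet uses PyYAML (pip install pyyaml) purely to render the structure.

        # Sketch: expressing the layered prompt structure as YAML.
        # Field names and values are assumptions, not a required schema.
        import yaml  # pip install pyyaml

        prompt_spec = {
            "context": "Series B SaaS company hiring a VP of Marketing to scale pipeline efficiency and launch EMEA",
            "source_material": ["job description", "company values", "success profiles", "hiring manager notes"],
            "output_format": {
                "stages": 5,
                "per_stage": ["title and purpose", "core competencies", "suggested questions",
                              "evaluation criteria", "interviewer profile", "timebox in minutes"],
            },
            "constraints": ["avoid generic leadership questions",
                            "emphasize org design, systems thinking, cross-functional collaboration",
                            "prefer 'culture add' over 'culture fit'"],
            "iteration": ["revise to reflect our values more clearly",
                          "make inclusive of candidates from non-traditional backgrounds",
                          "sharpen the rubric for strategic thinking"],
        }

        print(yaml.safe_dump(prompt_spec, sort_keys=False))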

  • Andrew Ng

    Founder of DeepLearning.AI; Managing General Partner of AI Fund; Exec Chairman of LandingAI

    2,303,327 followers

    Last week, I described four design patterns for AI agentic workflows that I believe will drive significant progress: Reflection, Tool use, Planning and Multi-agent collaboration. Instead of having an LLM generate its final output directly, an agentic workflow prompts the LLM multiple times, giving it opportunities to build step by step to higher-quality output.

    Here, I'd like to discuss Reflection. It's relatively quick to implement, and I've seen it lead to surprising performance gains. You may have had the experience of prompting ChatGPT/Claude/Gemini, receiving unsatisfactory output, delivering critical feedback to help the LLM improve its response, and then getting a better response. What if you automate the step of delivering critical feedback, so the model automatically criticizes its own output and improves its response? This is the crux of Reflection.

    Take the task of asking an LLM to write code. We can prompt it to generate the desired code directly to carry out some task X. Then, we can prompt it to reflect on its own output, perhaps as follows:

    Here’s code intended for task X: [previously generated code]
    Check the code carefully for correctness, style, and efficiency, and give constructive criticism for how to improve it.

    Sometimes this causes the LLM to spot problems and come up with constructive suggestions. Next, we can prompt the LLM with context including (i) the previously generated code and (ii) the constructive feedback, and ask it to use the feedback to rewrite the code. This can lead to a better response. Repeating the criticism/rewrite process might yield further improvements.

    This self-reflection process allows the LLM to spot gaps and improve its output on a variety of tasks including producing code, writing text, and answering questions. And we can go beyond self-reflection by giving the LLM tools that help evaluate its output; for example, running its code through a few unit tests to check whether it generates correct results on test cases or searching the web to double-check text output. Then it can reflect on any errors it found and come up with ideas for improvement.

    Further, we can implement Reflection using a multi-agent framework. I've found it convenient to create two agents, one prompted to generate good outputs and the other prompted to give constructive criticism of the first agent's output. The resulting discussion between the two agents leads to improved responses.

    Reflection is a relatively basic type of agentic workflow, but I've been delighted by how much it improved my applications’ results. If you’re interested in learning more about reflection, I recommend:
    - Self-Refine: Iterative Refinement with Self-Feedback, by Madaan et al. (2023)
    - Reflexion: Language Agents with Verbal Reinforcement Learning, by Shinn et al. (2023)
    - CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing, by Gou et al. (2024)

    [Original text: https://lnkd.in/g4bTuWtU ]
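
    A compact sketch of the generate → critique → rewrite loop described above. call_llm is a hypothetical stand-in for whatever chat API you use; the prompts echo the post's wording but are otherwise illustrative.

        # Sketch of the Reflection loop: generate, critique, rewrite.
        # call_llm is a hypothetical placeholder for your chat-completion client.

        def call_llm(prompt: str) -> str:
            """Placeholder: send `prompt` to the LLM of your choice and return its reply."""
            raise NotImplementedError("wire this up to your chat API")

        def reflect_and_improve(task: str, rounds: int = 2) -> str:
            """Generate code for `task`, then alternate critique and rewrite passes."""
            code = call_llm(f"Write code to carry out this task: {task}")
            for _ in range(rounds):
                critique = call_llm(
                    f"Here's code intended for task {task}:\n{code}\n"
                    "Check the code carefully for correctness, style, and efficiency, "
                    "and give constructive criticism for how to improve it."
                )
                code = call_llm(
                    f"Task: {task}\nPrevious code:\n{code}\nFeedback:\n{critique}\n"
                    "Use the feedback to rewrite the code."
                )
            return code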

  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,819 followers

    In the last three months alone, over ten papers outlining novel prompting techniques were published, boosting LLMs’ performance by a substantial margin. Two weeks ago, a groundbreaking paper from Microsoft demonstrated how a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, solely through sophisticated prompting techniques.

    Yet, while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. https://lnkd.in/g7_6eP6y

    In this AI Tidbits Deep Dive, I outline six of the best and recent prompting methods:
    (1) EmotionPrompt - inspired by human psychology, this method utilizes emotional stimuli in prompts to gain performance enhancements
    (2) Optimization by PROmpting (OPRO) - a DeepMind innovation that refines prompts automatically, surpassing human-crafted ones. This paper discovered the “Take a deep breath” instruction that improved LLMs’ performance by 9%.
    (3) Chain-of-Verification (CoVe) - Meta's novel four-step prompting process that drastically reduces hallucinations and improves factual accuracy
    (4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details prior to querying the LLM
    (5) Step-Back Prompting - encouraging LLMs to abstract queries for enhanced reasoning
    (6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy

    Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential.

    Full blog post: https://lnkd.in/g7_6eP6y
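
    To make one of these concrete, here is a rough sketch of the Chain-of-Verification flow as four prompt stages. call_llm is a hypothetical helper, and the prompt wording is an approximation rather than the paper's exact phrasing.

        # Sketch of Chain-of-Verification (CoVe) as four prompt stages.
        # call_llm is a hypothetical placeholder; the wording is approximate.

        def call_llm(prompt: str) -> str:
            raise NotImplementedError("wire this up to your chat API")

        def chain_of_verification(question: str) -> str:
            draft = call_llm(question)  # 1. baseline answer
            plan = call_llm(            # 2. plan verification questions
                f"Question: {question}\nDraft answer: {draft}\n"
                "List verification questions that would check each factual claim in the draft."
            )
            answers = call_llm(         # 3. answer the checks independently of the draft
                f"Answer each of these questions on its own:\n{plan}"
            )
            return call_llm(            # 4. final, verified answer
                f"Question: {question}\nDraft: {draft}\nVerification Q&A:\n{answers}\n"
                "Write a final answer that is consistent with the verification results."
            )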

  • You’re doing it. I’m doing it. Your friends are doing it. Even the leaders who deny it are doing it. Everyone’s experimenting with AI. But I keep hearing the same complaint: “It’s not as game-changing as I thought.” If AI is so powerful, why isn’t it doing more of your work?

    The #1 obstacle keeping you and your team from getting more out of AI? You're not bossing it around enough. AI doesn’t get tired and it doesn't push back. It doesn’t give you a side-eye when at 11:45 pm you demand seven rewrite options to compare while snacking in your bathrobe. Yet most people give it maybe one round of feedback, then complain it’s “meh.” The best AI users? They iterate. They refine. They make AI work for them. Here’s how:

    1. Tweak AI's basic setting so it sounds like you
    AI-generated text can feel robotic or too formal. Fix that by teaching it your style from the start.
    Prompt: “Analyze the writing style below—tone, sentence structure, and word choice—and use it for all future responses.” (Paste a few of your own posts or emails.) Then, take the response and add it to Settings → Personalization → Custom Instructions.

    2. Strip Out the Jargon
    Don’t let AI spew corporate-speak.
    Prompt: “Rewrite this so a smart high schooler could understand it—no buzzwords, no filler, just clear, compelling language.” or “Use human, ultra-clear language that’s straightforward and passes an AI detection test.”

    3. Give It a Solid Outline
    AI thrives on structure. Instead of “Write me a whitepaper,” start with bullet points or a rough outline.
    Prompt: “Here’s my outline. Turn it into a first draft with strong examples, a compelling narrative, and clear takeaways.”
    Even better? Record yourself explaining your idea; paste the transcript so AI can capture your authentic voice.

    4. Be Brutally Honest
    If the output feels off, don’t sugarcoat it.
    Prompt: “You’re too cheesy. Make this sound like a Fortune 500 executive wrote it.” or “Identify all weak, repetitive, or unclear text in this post and suggest stronger alternatives.”

    5. Give it a tough crowd
    Polished isn’t enough—sometimes you need pushback.
    Prompt: “Pretend you’re a skeptical CFO who thinks this idea is a waste of money. Rewrite it to persuade them.” or “Act as a no-nonsense VC who doesn’t buy this pitch. Ask 5 hard questions that make me rethink my strategy.”

    6. Flip the Script—AI Interviews You
    Sometimes the best answers come from sharper questions.
    Prompt: “You’re a seasoned journalist interviewing me on this topic. Ask thoughtful follow-ups to surface my best thinking.”
    This back-and-forth helps refine your ideas before you even start writing.

    The Bottom Line: AI isn’t the bottleneck—we are. If you don’t push it, you’ll keep getting mediocrity. But if you treat AI like a tireless assistant that thrives on feedback? You’ll unlock content and insights that truly move the needle. Once you work this way, there’s no going back.
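
    One way to picture this iteration loop programmatically: a multi-turn message list where each follow-up applies one of the tips above. The OpenAI-style role/content dicts are an assumption; swap in whatever chat format your tool uses.

        # Sketch: an iteration-heavy session expressed as a message list.
        # The role/content dict format is an assumption (OpenAI-style); adapt to your tool.

        outline = (
            "- Why AI output feels 'meh'\n"
            "- Iterate instead of settling\n"
            "- Six prompts that force better drafts"
        )

        messages = [
            {"role": "user", "content": "Here's my outline. Turn it into a first draft with strong "
                                        f"examples, a compelling narrative, and clear takeaways.\n\n{outline}"},
            # ...model reply goes here...
            {"role": "user", "content": "You're too cheesy. Make this sound like a Fortune 500 executive wrote it."},
            # ...model reply goes here...
            {"role": "user", "content": "Pretend you're a skeptical CFO who thinks this idea is a waste "
                                        "of money. Rewrite it to persuade them."},
        ]

        for message in messages:
            print(f"{message['role'].upper()}: {message['content']}\n")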

  • Matt Palmer

    developer relations at replit

    15,801 followers

    Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions:

    🔹 Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often.
    🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've tried.
    🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base.
    🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses.
    🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid.
    🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI.
    🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups.
    🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon.
    🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases.
    🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer.

    By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
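
    As one concrete application of the Debug and Specify principles, here is a sketch of a helper that packages error context into a single prompt. The function and field names are illustrative, not part of Replit's tooling.

        # Sketch: packaging debugging context (goal, error, snippet, attempts) into one prompt.
        # Function and field names are illustrative only.

        def build_debug_prompt(goal: str, error: str, snippet: str, tried: list[str]) -> str:
            """Bundle the goal, the error, the relevant code, and prior attempts for the AI."""
            attempts = "\n".join(f"- {item}" for item in tried) or "- nothing yet"
            return (
                f"Goal: {goal}\n\n"
                f"Error message:\n{error}\n\n"
                f"Relevant code:\n{snippet}\n\n"
                f"What I've already tried:\n{attempts}\n\n"
                "Explain the likely cause and propose a minimal fix."
            )

        print(build_debug_prompt(
            goal="POST /login should return 200",
            error="TypeError: 'NoneType' object is not subscriptable",
            snippet="user = db.get(username)\nreturn user['id']",
            tried=["checked the route is registered", "printed the request payload"],
        ))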

  • Vince Lynch

    CEO of IV.AI | The AI Platform to Reveal What Matters | We’re hiring

    10,680 followers

    Are humans 5X better than AI? This paper is blowing up (not in a good way).

    The recent study claims LLMs are 5x less accurate than humans at summarizing scientific research. That’s a bold claim. But maybe it’s not the model that’s off. Maybe it's the AI strategy, system, prompt, data... What’s your secret sauce for getting the most out of an LLM?

    Scientific summarization is dense, domain-specific, and context-heavy. And evaluating accuracy in this space? That’s not simple either. So just because a general-purpose LLM is struggling with a Turing-style test doesn't mean it can't do better. Is it just how they're using it? I think it's short-sighted to drop a complex task into an LLM and expect expert results without expert setup. To get better answers, you need a better AI strategy, system, and deployment.

    Some tips and tricks we find helpful:

    1. Start small and be intentional. Don’t just upload a paper and say “summarize this.” Define the structure, tone, and scope you want. Try prompts like: “List three key findings in plain language, and include one real-world implication for each.” The clearer your expectations, the better the output.

    2. Test. Build in a feedback loop from the beginning. Ask the model what might be missing from the summary, or how confident it is in the output. Compare responses to expert-written summaries or benchmark examples. If the model can’t handle tasks where the answers are known, it’s not ready for tasks where they’re not.

    3. Tweak. Refine everything: prompts, data, logic. Add retrieval grounding so the model pulls from trusted sources instead of guessing. Fine-tune with domain-specific examples to improve accuracy and reduce noise. Experiment with prompt variations and analyze how the answers change. Tuning isn’t just technical. It’s iterative alignment between output and expectation. (Spoiler alert: you might be at this stage for a while.)

    4. Repeat. Every new domain, dataset, or objective requires a fresh approach. LLMs don’t self-correct across contexts, but your workflow can. Build reusable templates. Create consistent evaluation criteria. Track what works, version your changes, and keep refining. Improving LLM performance isn’t one and done. It’s a cycle.

    Finally: if you treat a language model like a magic button, it's going to kill the rabbit in the hat. If you treat it like a system you deploy, test, tweak, and evolve, it can retrieve magic bunnies flying everywhere.

    Q: How are you using LLMs to improve workflows? Have you tried domain-specific data? Would love to hear your approaches in the comments.
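
    A light sketch of tips 1 and 2: a structured summarization request plus a self-check follow-up. The prompt wording and helper names are illustrative assumptions, not the author's production setup.

        # Sketch: a structured summarization request (tip 1) plus a self-check follow-up (tip 2).
        # Prompt wording and helper names are illustrative.

        def summarization_prompt(paper_text: str) -> str:
            """Ask for a defined structure, tone, and scope rather than 'summarize this'."""
            return (
                "List three key findings from the paper below in plain language, and include "
                "one real-world implication for each. Keep each finding to two sentences.\n\n"
                f"Paper:\n{paper_text}"
            )

        def self_check_prompt(summary: str) -> str:
            """Feedback loop: ask what might be missing and how confident the model is."""
            return (
                f"Here is a summary you produced:\n{summary}\n\n"
                "What important findings or caveats might be missing, and how confident are you "
                "in each point (high / medium / low)?"
            )

        print(summarization_prompt("<paste the paper text here>"))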

  • Aadit Sheth

    The Narrative Company | Executive Narrative & Influence Strategy

    96,579 followers

    Anthropic dropped the best free masterclass on prompt engineering. Here’s what you’ll learn in 9 chapters:

    1. Structure better prompts
    → Always start with the intent: “Summarize this article in 5 bullet points for a beginner” is 10x better than “Summarize this.”
    → Use instruction-first phrasing; the model performs best when it knows exactly what you want upfront.

    2. Be clear + direct
    → Avoid open-ended ambiguity. Instead of “Tell me about success,” ask “List 3 traits successful startup founders share.”
    → Use active voice, fewer adjectives, and always define vague terms.

    3. Assign the right “role”
    → Start with “You are a [role]”; this frames the model’s mindset. Example: “You are a skeptical investor evaluating a pitch.”
    → Roles unlock tone, precision, and even memory, especially in multi-turn chats.

    4. Think step by step (precondition prompts)
    → Ask the model to plan before it answers: “First, list your steps. Then, perform them one by one.”
    → This dramatically improves accuracy and reduces hallucinations in complex tasks.

    5. Avoid hallucinations
    → Anchor the model with clear boundaries: “Only answer if the input contains [x]. Otherwise, respond: ‘Insufficient data.’”
    → Reduce creativity in factual tasks. E.g., “Be concise. Don’t assume.”

    6. Build complex prompts (with reusable patterns)
    → Use modular blocks: context → instruction → format → examples.
    → Build a personal prompt library by saving + refining your best-performing prompts over time.

    It’s not just “how to prompt better.” It’s a full-on skill upgrade. Interactive. Structured. Free. Share this with anyone still writing 1-line prompts.

    Image: Hesamation
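
    A minimal sketch of the modular-blocks pattern from chapter 6 (context → instruction → format → examples). The compose_prompt helper and the example values are assumptions for illustration.

        # Sketch: composing a prompt from modular blocks (context -> instruction -> format -> examples).
        # The compose_prompt helper and example values are illustrative.

        def compose_prompt(context: str, instruction: str, output_format: str, examples: list[str]) -> str:
            """Assemble the four reusable blocks into one prompt string."""
            example_block = "\n\n".join(f"Example {i + 1}:\n{e}" for i, e in enumerate(examples))
            return (
                f"Context:\n{context}\n\n"
                f"Instruction:\n{instruction}\n\n"
                f"Output format:\n{output_format}\n\n"
                f"{example_block}"
            )

        print(compose_prompt(
            context="You are a skeptical investor evaluating a pitch.",
            instruction="List 3 traits successful startup founders share, citing the pitch where relevant.",
            output_format="Numbered list, one sentence per trait.",
            examples=["1. Relentless customer focus: the pitch cites weekly user interviews."],
        ))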
