In the last three months alone, more than ten papers introducing novel prompting techniques were published, each boosting LLM performance by a substantial margin. Two weeks ago, a paper from Microsoft demonstrated that a well-prompted GPT-4 outperforms Google’s Med-PaLM 2, a specialized medical model, through sophisticated prompting alone. Yet while our X and LinkedIn feeds buzz with ‘secret prompting tips’, a definitive, research-backed guide aggregating these advanced prompting strategies is hard to come by. This gap prevents LLM developers and everyday users from harnessing these novel frameworks to enhance performance and achieve more accurate results. In this AI Tidbits Deep Dive, I outline six of the best recent prompting methods:
(1) EmotionPrompt - inspired by human psychology, this method adds emotional stimuli to prompts to gain performance improvements.
(2) Optimization by PROmpting (OPRO) - a DeepMind technique that refines prompts automatically, surpassing human-crafted ones. This is the paper that discovered the “Take a deep breath” instruction, which improved LLM performance by 9%.
(3) Chain-of-Verification (CoVe) - Meta's four-step prompting process that drastically reduces hallucinations and improves factual accuracy.
(4) System 2 Attention (S2A) - also from Meta, a prompting method that filters out irrelevant details before querying the LLM.
(5) Step-Back Prompting - encouraging LLMs to abstract a query for enhanced reasoning.
(6) Rephrase and Respond (RaR) - UCLA's method that lets LLMs rephrase queries for better comprehension and response accuracy.
Understanding the spectrum of available prompting strategies and how to apply them in your app can mean the difference between a production-ready app and a nascent project with untapped potential. Full blog post: https://lnkd.in/g7_6eP6y
How to Use AI for Improved Results
Explore top LinkedIn content from expert professionals.
Summary
Artificial intelligence (AI) can significantly improve results across various industries by optimizing processes, enhancing decision-making, and streamlining workflows. By using strategic prompting methods, building context-aware commands, or employing prompt assessment metrics, users can effectively guide AI to produce highly accurate and relevant outputs.
- Design clear prompts: Structuring your questions or commands with clarity, purpose, and specific instructions helps AI deliver accurate results tailored to your needs.
- Incorporate step-by-step reasoning: Use prompting techniques like chain-of-thought or rephrasing, which encourage AI to process information logically and reduce errors in its outputs.
- Test and refine: Continuously iterate and analyze your prompts to ensure better AI responses, using examples, structured formats, and feedback for improvement.
-
🧠 Designing AI That Thinks: Mastering Agentic Prompting for Smarter Results
Have you ever used an LLM and felt it gave up too soon? Or worse, guessed its way through a task? Yeah, I've been there. Most of the time, the prompt is the problem. To get AI that acts more like a helpful agent and less like a chatbot on autopilot, you need to prompt it like one. Here are the three key components of an effective agentic prompt (a minimal sketch combining them follows below):
🔁 Persistence: Ensure the model understands it's in a multi-turn interaction and shouldn't yield control prematurely.
🧾 Example: "You are an agent; please continue working until the user's query is resolved. Only terminate your turn when you are certain the problem is solved."
🧰 Tool Usage: Encourage the model to use available tools, especially when uncertain, instead of guessing.
🧾 Example: "If you're unsure about file content or codebase structure related to the user's request, use your tools to read files and gather the necessary information. Do not guess or fabricate answers."
🧠 Planning: Prompt it to plan before acting and reflect afterward. Prevent reactive tool calls with no strategy.
🧾 Example: "You must plan extensively before each function call and reflect on the outcomes of previous calls. Avoid completing the task solely through a sequence of function calls, as this can hinder insightful problem-solving."
💡 I've used this format in AI-powered research and decision-support tools and saw a clear boost in response quality and reliability.
👉 Takeaway: Agentic prompting turns a passive assistant into an active problem solver. The difference is in the details.
Are you using these techniques in your prompts? I would love to hear what's working for you; leave a comment, or let's connect! #PromptEngineering #AgenticPrompting #LLM #AIWorkflow
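A minimal sketch of how the three components above might be stitched into one system prompt for a chat-style API call. It assumes the OpenAI Python SDK purely for illustration; the client setup and model name are placeholders, and tool definitions are omitted.

```python
# Minimal sketch: combining persistence, tool usage, and planning into a system prompt.
# Assumes the openai SDK (>=1.0); model name is a placeholder, tools omitted for brevity.
from openai import OpenAI

PERSISTENCE = (
    "You are an agent; please continue working until the user's query is resolved. "
    "Only terminate your turn when you are certain the problem is solved."
)
TOOL_USAGE = (
    "If you're unsure about file content or codebase structure, use your tools to read "
    "files and gather the necessary information. Do not guess or fabricate answers."
)
PLANNING = (
    "You must plan extensively before each function call and reflect on the outcomes "
    "of previous calls before acting again."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_agent_turn(user_query: str) -> str:
    """Send one agentic turn with all three components in the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": f"{PERSISTENCE}\n{TOOL_USAGE}\n{PLANNING}"},
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content
```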
-
𝐀𝐈 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐩𝐫𝐨𝐦𝐩𝐭 𝐦𝐞𝐭𝐫𝐢𝐜𝐬 𝐢𝐬 𝐥𝐢𝐤𝐞 𝐬𝐚𝐥𝐞𝐬 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐜𝐨𝐧𝐯𝐞𝐫𝐬𝐢𝐨𝐧 𝐫𝐚𝐭𝐞𝐬. 𝘛𝘩𝘦 𝘍𝘶𝘵𝘶𝘳𝘦 𝘰𝘧 𝘈𝘐 𝘈𝘨𝘦𝘯𝘵𝘴: 𝘔𝘦𝘢𝘴𝘶𝘳𝘪𝘯𝘨 𝘗𝘳𝘰𝘮𝘱𝘵 𝘚𝘶𝘤𝘤𝘦𝘴𝘴 𝘸𝘪𝘵𝘩 𝘗𝘳𝘦𝘤𝘪𝘴𝘪𝘰𝘯 Most AI agents fail not from bad models but from weak prompts. Advanced 𝐏𝐫𝐨𝐦𝐩𝐭 𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫𝐢𝐧𝐠 isn’t just about crafting inputs. It’s about 𝐦𝐞𝐚𝐬𝐮𝐫𝐢𝐧𝐠 impact. How do we assess prompt success? 𝐁𝐞𝐲𝐨𝐧𝐝 𝐠𝐮𝐭 𝐟𝐞𝐞𝐥𝐢𝐧𝐠. 𝐁𝐞𝐲𝐨𝐧𝐝 𝐠𝐮𝐞𝐬𝐬𝐰𝐨𝐫𝐤. 𝐇𝐨𝐰 𝐭𝐨 𝐂𝐫𝐞𝐚𝐭𝐞 𝐏𝐫𝐨𝐦𝐩𝐭 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 𝐌𝐞𝐭𝐫𝐢𝐜𝐬: 1) 𝐑𝐞𝐥𝐞𝐯𝐚𝐧𝐜𝐞 𝐒𝐜𝐨𝐫𝐞: Are outputs aligned with intent? 2) 𝐏𝐫𝐞𝐜𝐢𝐬𝐢𝐨𝐧 & 𝐑𝐞𝐜𝐚𝐥𝐥: Does the AI retrieve the right information? 3) 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐞 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲: Are outputs concise and useful? 4) 𝐔𝐬𝐞𝐫 𝐒𝐚𝐭𝐢𝐬𝐟𝐚𝐜𝐭𝐢𝐨𝐧: Do users trust and use the response? 5) 𝐂𝐨𝐧𝐯𝐞𝐫𝐬𝐢𝐨𝐧 𝐈𝐦𝐩𝐚𝐜𝐭: Does it drive action in sales or engagement? 6) 𝐎𝐩𝐞𝐫𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐀𝐜𝐜𝐮𝐫𝐚𝐜𝐲: Does it improve efficiency in manufacturing workflows? 7) 𝐓𝐡𝐫𝐞𝐚𝐭 𝐃𝐞𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐑𝐚𝐭𝐞: Does it enhance security without false alarms? 8) 𝐀𝐮𝐭𝐨𝐧𝐨𝐦𝐲 𝐏𝐞𝐫𝐟𝐨𝐫𝐦𝐚𝐧𝐜𝐞: Does the AI make reliable and context-aware decisions? 𝑪𝒂𝒔𝒆 𝑺𝒕𝒖𝒅𝒊𝒆𝒔: ↳ 𝐂𝐮𝐬𝐭𝐨𝐦𝐞𝐫 𝐒𝐮𝐩𝐩𝐨𝐫𝐭: AI reduced resolution time by 40% through clearer prompts. ↳ 𝐋𝐞𝐠𝐚𝐥 𝐑𝐞𝐬𝐞𝐚𝐫𝐜𝐡: AI cut irrelevant results by 60% by optimizing specificity. ↳ 𝐒𝐚𝐥𝐞𝐬 𝐎𝐮𝐭𝐫𝐞𝐚𝐜𝐡: AI boosted reply rates by 35% with refined personalization. ↳ 𝐄-𝐜𝐨𝐦𝐦𝐞𝐫𝐜𝐞 𝐒𝐞𝐚𝐫𝐜𝐡: AI improved product matches by 50% with structured prompts. ↳ 𝐌𝐞𝐝𝐢𝐜𝐚𝐥 𝐀𝐈: AI reduced diagnostic errors by 30% by improving context clarity. ↳ 𝐌𝐚𝐧𝐮𝐟𝐚𝐜𝐭𝐮𝐫𝐢𝐧𝐠 𝐀𝐈: AI improved defect detection by 45% by enhancing prompt precision. ↳ 𝐒𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐀𝐈: AI reduced false alerts by 50% in fraud detection systems. ↳ 𝐀𝐮𝐭𝐨𝐧𝐨𝐦𝐨𝐮𝐬 𝐀𝐈: AI enhanced robotics decision-making by 55%, reducing human intervention. 𝐌𝐞𝐭𝐫𝐢𝐜𝐬 𝐦𝐚𝐭𝐭𝐞𝐫. Precision beats intuition. AI Agents thrive when we measure what works. What’s your framework for 𝐏𝐫𝐨𝐦𝐩𝐭 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 𝐟𝐨𝐫 𝐲𝐨𝐮𝐫 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬? ♻️ Repost to your LinkedIn followers if AI should be more accessible and follow Timothy Goebel for expert insights on AI & innovation. #AIagents #PromptEngineering #AIMetrics #ArtificialIntelligence #TechInnovation
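As one concrete illustration of metric #2 above, here is a minimal sketch of scoring precision and recall of what a prompt causes the model to retrieve against a hand-labeled gold set; the clause IDs are made up for the example.

```python
# Minimal sketch of a prompt-assessment metric: precision and recall of retrieved items
# against a hand-labeled gold set. The data below is illustrative only.
def precision_recall(retrieved: set[str], relevant: set[str]) -> tuple[float, float]:
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

retrieved = {"clause_7", "clause_9", "clause_12"}   # what the prompted model surfaced
relevant = {"clause_7", "clause_12", "clause_15"}   # what a reviewer marked as relevant
print(precision_recall(retrieved, relevant))  # (0.666..., 0.666...)
```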
-
Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions: 🔹 Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often. 🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've tried. 🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base. 🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses. 🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid. 🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI. 🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups. 🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon. 🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases. 🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer. By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
-
These 3 AI prompts save me 6 hours every week. Copy them: 🧠 THE SOCRATIC DEBUGGER Instead of asking AI for answers, make it ask YOU the right questions first: "I have a problem with {{problem_description}}. Before you provide a solution, ask me 5 clarifying questions that will help you understand: 1. The full context 2. What I've already tried 3. Constraints I'm working with 4. The ideal outcome 5. Any edge cases I should consider After I answer, provide your solution with confidence levels for each part." Why this works: Forces you to think through the REAL problem before diving into solutions. 📊 THE CONFIDENCE INTERVAL ESTIMATOR Kill your planning paralysis with brutal honesty: "I need to {{task_description}}. Provide: 1. A detailed plan with specific steps 2. For each step, give a confidence interval (e.g., '85-95% confident this will work') 3. Highlight which parts are most uncertain and why 4. Suggest how to validate the uncertain parts 5. Overall project confidence level Be brutally honest about what could go wrong." Why this works: Surfaces hidden risks BEFORE they blow up your timeline. 👨🏫 THE CLARITY TEACHER Turn any complex topic into crystal-clear understanding: "Explain {{complex_concept}} to me. Start with: 1. A one-sentence ELI5 explanation 2. Then a paragraph with more detail 3. Then the technical explanation 4. Common misconceptions to avoid 5. A practical example I can try right now After each level, ask if I need more detail before proceeding." Why this works: Builds understanding layer by layer instead of info-dumping. The breakthrough wasn't finding better AI tools. It was learning to ask better questions. These 3 prompts alone saved me 6 hours last week. And they compound. The more you use them, the faster you get. (I maintain a vault of 25+ battle-tested prompts like these, adding 5-10 weekly based on what actually works in production) What repetitive task is killing YOUR productivity right now? Drop it below. I might have a prompt that helps 👇
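If you keep a vault of reusable prompts like these, a few lines of code can fill the {{placeholders}} before each use. A minimal sketch, with the template text abbreviated and the helper function purely illustrative:

```python
# Minimal sketch: filling {{placeholders}} in a stored prompt template before use.
SOCRATIC_DEBUGGER = (
    "I have a problem with {{problem_description}}. Before you provide a solution, "
    "ask me 5 clarifying questions that will help you understand the full context..."
)

def fill(template: str, **values: str) -> str:
    """Replace each {{name}} placeholder with the supplied value."""
    for name, value in values.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill(SOCRATIC_DEBUGGER, problem_description="flaky integration tests in CI")
print(prompt)
```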
-
How I Use AI as In-House Counsel Are you one of those attorneys who uses checklists to review an agreement? First, good for you. You're so organized. Second, checklists are perfect for AI! Many attorneys don’t trust AI to review an entire contract yet. But even the most skeptical lawyer can save time by using AI for the more administrative parts of a review. Here’s a real prompt I use with my enterprise-grade Legal AI tool to do a preliminary review of an incoming agreement against my checklist. Step-by-step 1. Upload the contract and your checklist. If you’re using a public model, you might consider prepping the document and/or your system settings. 2. Prompt “# Instructions 1. Perform a comprehensive review of the attached contract using the checklist as a guide. The review should be from the perspective of [party name]. 2. Create a detailed analysis table with these columns: - Checklist Item: Summarize the requirement in one sentence - Contract Language: Exact quotes from the contract - Section: Where the language appears - Analysis: Whether the language satisfies the checklist item - Risk Level: 🔴 High / 🟡 Medium / 🟢 Low or None - Recommendation: How to fix or improve the clause [3. For each checklist item: a. Identify relevant sections b. Extract exact quotes with citations c. Evaluate adequacy d. Assess risk to [party designation] e. Recommend improvements] 4. After the table, provide: a. Executive summary of top issues b. Prioritized list of recommended changes c. Any risks specific to [industry] business" 3. Review & Edit Nothing replaces your legal brain. I double-check the analysis, then use the table as a guide for redlining. 💡 Notes 1. Setting up a table in a prompt is a little more involved. Save the prompt so you can reuse the table structure next time. 2. You may not need [Number 3]. If you’re not getting a clear enough response, add in #3 which will give the model more specific instructions. ⚠️ Public Models This one is trickier to use with a public model like ChatGPT, or Claude since you're uploading an agreement. Consider doing the following: 1. Turn off training. For ChatGPT, consider using a temporary chat. 2. Thoroughly redact names and sensitive info and replace them with generic terms. 3. Persona Prompt at the start: “You are an experienced in-house counsel who is an expert contract reviewer.” 4. Consider using a generic account where you haven't already indicated where you work. 5. Use your own judgement as an attorney. Unfortunately, you might not be able to upload an entire contract into a public AI model and protect your data and confidentiality. It might depend on the type of agreement or how well you can redact it. Ultimately, you need to use your judgement as an attorney to determine how extensively you can use AI for this use case. Want help building your own prompt or refining your workflow? Drop a comment.👇 #LegalAI #InHouseCounsel #ContractReview #LegalTech #AI
-
Which is it: use LLMs to improve the prompt, or is that over-engineering? By now, we've all seen a thousand conflicting prompt guides. So, I wanted to get back to the research:
• What do actual studies say?
• What actually works in 2025 vs 2024?
• What do experts at OpenAI, Anthropic, & Google say?
I spent the past month in Google Scholar figuring it out. I firmed up the learnings with Miqdad Jaffer at OpenAI. And I'm ready to present: "The Ultimate Guide to Prompt Engineering in 2025: The Latest Best Practices." https://lnkd.in/d_qYCBT7
We cover:
1. Do You Really Need Prompt Engineering?
2. The Hidden Economics of Prompt Engineering
3. What the Research Says About Good Prompts
4. The 6-Layer Bottom-Line Framework
5. Step-by-step: Improving Your Prompts as a PM
6. The 301 Advanced Techniques Nobody Talks About
7. The Ultimate Prompt Template 2.0
8. The 3 Most Common Mistakes
Some of my favorite takeaways from the research:
1. It's not just revenue, but cost. APIs charge by the number of input and output tokens, so an engineered prompt can deliver the same quality with a 76% cost reduction. We're talking $3,000 daily vs $706 daily for 100k calls (a back-of-the-envelope version of this math is sketched below).
2. Chain-of-Table beats everything else. This new technique gets an 8.69% improvement on structured data by manipulating the table structure step by step instead of reasoning about tables in text. For things like financial dashboards and data analysis tools, it's the best.
3. Few-shot prompting hurts advanced models. OpenAI's o1 and DeepSeek's R1 actually perform worse with examples. These reasoning models don't need your sample outputs; they're smart enough to figure it out themselves.
4. XML tags boost Claude performance. Anthropic specifically trained Claude to recognize XML structure, so you get 15-20% better performance just by changing your formatting from plain text to XML tags.
5. Automated prompt engineering destroys manual. AI systems create better prompts in 10 minutes than human experts do after 20 hours of careful optimization work. The machines are better at optimizing themselves than we are.
6. Most prompting advice is complete bullshit. Researchers analyzed 1,500+ academic papers and found massive gaps between what people claim works and what's actually been tested scientifically.
And what about Ian Nuttal's tweet? Well, Ian's right about over-engineering. But for products, prompt engineering IS the product. Bolt hit $50M ARR via systematic prompt engineering. The key? Knowing when to engineer vs keep it simple.
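A back-of-the-envelope sketch of the cost point in takeaway 1. The per-token prices and token counts are illustrative assumptions, chosen only so the totals land near the daily figures quoted above:

```python
# Illustrative token-cost math for 100k calls/day; prices and token counts are assumptions,
# not the article's actual figures beyond the approximate daily totals it quotes.
CALLS_PER_DAY = 100_000

def daily_cost(input_tokens: int, output_tokens: int,
               price_in_per_1k: float, price_out_per_1k: float) -> float:
    per_call = (input_tokens / 1000) * price_in_per_1k + (output_tokens / 1000) * price_out_per_1k
    return per_call * CALLS_PER_DAY

verbose = daily_cost(input_tokens=1_800, output_tokens=400, price_in_per_1k=0.01, price_out_per_1k=0.03)
engineered = daily_cost(input_tokens=400, output_tokens=100, price_in_per_1k=0.01, price_out_per_1k=0.03)
print(verbose, engineered, 1 - engineered / verbose)  # ≈ $3,000 vs ≈ $700 per day, ~77% reduction
```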
-
Some of the best AI breakthroughs we’ve seen came from small, focused teams working hands-on, with structured inputs and the right prompting. Here’s how we help clients unlock AI value in days, not months: 1. Start with a small, cross-functional team (4–8 people) 1–2 subject matter experts (e.g., supply chain, claims, marketing ops) 1–2 technical leads (e.g., SWE, data scientist, architect) 1 facilitator to guide, capture, and translate ideas Optional: an AI strategist or business sponsor 2. Context before prompting - Capture SME and tech lead deep dives (recorded and transcribed) - Pull in recent internal reports, KPIs, dashboards, and documentation - Enrich with external context using Deep Research tools: Use OpenAI’s Deep Research (ChatGPT Pro) to scan for relevant AI use cases, competitor moves, innovation trends, and regulatory updates. Summarize into structured bullets that can prime your AI. This is context engineering: assembling high-signal input before prompting. 3. Prompt strategically, not just creatively Prompts that work well in this format: - “Based on this context [paste or refer to doc], generate 100 AI use cases tailored to [company/industry/problem].” - “Score each idea by ROI, implementation time, required team size, and impact breadth.” - “Cluster the ideas into strategic themes (e.g., cost savings, customer experience, risk reduction).” - “Give a 5-step execution plan for the top 5. What’s missing from these plans?” - “Now 10x the ambition: what would a moonshot version of each idea look like?” Bonus tip: Prompt like a strategist (not just a user) Start with a scrappy idea, then ask AI to structure it: - “Rewrite the following as a detailed, high-quality prompt with role, inputs, structure, and output format... I want ideas to improve our supplier onboarding process with AI. Prioritize fast wins.” AI returns something like: “You are an enterprise AI strategist. Based on our internal context [insert], generate 50 AI-driven improvements for supplier onboarding. Prioritize for speed to deploy, measurable ROI, and ease of integration. Present as a ranked table with 3-line summaries, scoring by [criteria].” Now tune that prompt; add industry nuances, internal systems, customer data, or constraints. 4. Real examples we’ve seen work: - Logistics: AI predicts port congestion and auto-adjusts shipping routes - Retail: Forecasting model helps merchandisers optimize promo mix by store cluster 5. Use tools built for context-aware prompting - Use Custom GPTs or Claude’s file-upload capability - Store transcripts and research in Notion, Airtable, or similar - Build lightweight RAG pipelines (if technical support is available) - Small teams. Deep context. Structured prompting. Fast outcomes. This layered technique has been tested by some of the best in the field, including a few sharp voices worth following, including Allie K. Miller!
-
“You don’t need to be a data scientist or a machine learning engineer - everyone can write a prompt”
Google recently released a comprehensive guide on prompt engineering for Large Language Models (LLMs), specifically Gemini via Vertex AI. Key takeaways from the guide:
What is prompt engineering really about? It’s the art (and science) of designing prompts that guide LLMs to produce the most accurate, useful outputs. It involves iterating, testing, and refining, not just throwing in a question and hoping for the best.
Things you should know:
1. Prompt design matters. Not just what you say, but how you say it: wording, structure, examples, tone, and clarity all affect results.
2. LLM settings are critical:
• Temperature = randomness. Lower means more focused, higher means more creative (but riskier).
• Top-K / Top-P = how much the model “thinks outside the box.”
• For balanced results, Temperature 0.2 / Top-P 0.95 / Top-K 30 is a solid start (see the sketch below for how these map to an API call).
3. Prompting strategies that actually work:
• Zero-shot, one-shot, few-shot
• System / Context / Role prompting
• Chain of Thought (reasoning step by step)
• Tree of Thoughts (explore multiple paths)
• ReAct (reasoning + external tools = power moves)
4. Use prompts for code too! Writing, translating, debugging; just test your output.
5. Best Practices Checklist:
• Use relevant examples
• Prefer instructions over restrictions
• Be specific
• Control token length
• Use variables
• Test different formats (Q&A, statements, structured outputs like JSON)
• Document everything (settings, model version, results)
Bottom line: Prompting is a strategic skill. If you’re building anything with AI, this is a must read👇
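A minimal sketch of how those starting values (temperature 0.2, top-p 0.95, top-k 30) might be passed in code, assuming the google-generativeai Python SDK; the model name and prompt are placeholders, and other client libraries expose the same knobs under similar names.

```python
# Minimal sketch: setting the suggested sampling parameters on a Gemini call.
# Assumes the google-generativeai SDK; API key, model name, and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # better: read from an environment variable
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content(
    "Summarize the key risks in the attached supplier contract in five bullets.",
    generation_config={
        "temperature": 0.2,  # lower = more focused, less random output
        "top_p": 0.95,
        "top_k": 30,
    },
)
print(response.text)
```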
-
You only need 10 Prompt Engineering techniques to build a production-grade AI application. Save these 👇
After analyzing 100s of prompting techniques, I found the most common principles that every #AIengineer follows. Keep them in mind when building apps with LLMs:
1. Stop relying on vague instructions; be explicit instead.
❌ Don't say: "Analyze this customer review."
✅ Say: "Analyze this customer review and extract 3 actionable insights to improve the product."
Why? Ambiguity confuses models.
2. Stop overloading prompts.
❌ Asking the model to do everything at once.
✅ Break it down: Step 1: Identify the main issues. Step 2: Suggest specific improvements for each issue.
Why? Smaller steps reduce errors and improve reliability.
3. Always provide examples.
❌ Skipping examples for context-dependent tasks.
✅ Follow this example: 'The battery life is terrible.' → Insight: Improve battery performance to meet customer expectations.
Why? Few-shot examples improve performance.
4. Stop ignoring instruction placement.
❌ Putting the task description in the middle.
✅ Place instructions at the start or end of the system prompt.
Why? Models process information at the beginning and end more effectively.
5. Encourage step-by-step thinking.
❌ What are the insights from this review?
✅ Analyze this review step by step: First, identify the main issues. Then, suggest actionable insights for each issue.
Why? Chain-of-thought (CoT) prompting reduces errors.
6. Stop ignoring output formats.
❌ Expecting structured outputs without clear instructions.
✅ Provide the output as JSON: {'Name': [value], 'Age': [value]}. Use Pydantic to validate the LLM outputs (see the sketch below).
Why? Explicit formats prevent unnecessary or malformed text.
7. Restrict to the provided context.
❌ Answer the question about a customer.
✅ Answer only using the customer's context below. If unsure, respond with 'I don't know.'
Why? Clear boundaries prevent reliance on inaccurate internal knowledge.
8. Stop assuming that the first version of a prompt is the best version.
❌ Never iterating on prompts.
✅ Use the model to critique and refine your prompt.
9. Don't forget about the edge cases.
❌ Designing for the "ideal" or most common inputs.
✅ Test different edge cases and specify fallback instructions.
Why? Real-world use often involves imperfect inputs. Cover for most of them.
10. Stop overlooking prompt security; design prompts defensively.
❌ Ignoring risks like prompt injection or extraction.
✅ Explicitly define boundaries: "Do not return sensitive information."
Why? Defensive prompts reduce vulnerabilities and prevent harmful outputs.
#promptengineering
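Picking up point 6, a minimal sketch of validating a model's JSON output with Pydantic (v2 API); the schema and the raw_output string are illustrative, not from the post.

```python
# Minimal sketch: validating an LLM's JSON output against an explicit schema (Pydantic v2).
from pydantic import BaseModel, ValidationError

class CustomerInsight(BaseModel):
    name: str
    age: int

raw_output = '{"name": "Alex", "age": 34}'  # whatever the model returned

try:
    insight = CustomerInsight.model_validate_json(raw_output)
    print(insight)
except ValidationError as err:
    # Malformed output: re-prompt the model, appending the validation errors as feedback.
    print("Retry with:", err.errors())
```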