How to Master Prompt Engineering for AI Outputs

Explore top LinkedIn content from expert professionals.

Summary

Learning how to master prompt engineering for AI outputs is becoming an essential skill in the age of artificial intelligence. It involves crafting precise, structured instructions to guide AI systems in generating accurate, useful, and context-driven responses.

  • Define roles and goals: Clearly specify the role the AI should take, the task you need it to perform, and the desired format or outcome to get more tailored results.
  • Provide examples: Use concrete examples or templates in your prompts to help AI understand the expected output and maintain consistent formatting.
  • Break tasks into steps: Simplify complex requests by dividing them into smaller, sequential tasks to ensure the AI processes and executes them effectively.
Summarized by AI based on LinkedIn member posts
  • 🦾Eric Nowoslawski

    Founder Growth Engine X | Clay Enterprise Partner

    47,818 followers

    Prompting tips from someone who spends probably $13k+ per month on OpenAI API calls. I'll break the tips into ChatGPT user-interface tips as well as API tips. My bias is of course going to be toward outbound sales and cold email, because this is where we spend from, and 100% of this spend is on 4o mini API calls.

    ChatGPT prompting tips:
    1. Use transcription as much as possible. Straight in the UI, or use whisprflow(dot)ai (can't tag them for some reason). I personally get frustrated with a prompt when I'm typing it out vs. talking, and I can add so much more detail by voice.
    2. Got this one from Yash Tekriwal 🤔: when you're working on something complex, like a deep research request, something you want o3 to run, or analyzing a lot of data, ask ChatGPT to give you any follow-up questions it might have before it runs fully. Helps you increase your prompt accuracy like crazy.
    3. I've found that o3 is pretty good at building simple automations in Make as well, so we will ask it to output what we want in a format that we can input into Make. Often we can build automations just by explaining what we need and then plugging in our logins in Make.

    API prompting tips:
    1. Throwing back to the ChatGPT UI: we will often create our complex prompts in the user interface first and then bring them into the API via Clay, asking ChatGPT along the way how to improve the prompt and help us think of edge cases. This can take any team member to a prompting pro immediately.
    2. Examples are your best friend. Giving examples of what you want the output to be is how we get our outputs in the same format and avoid putting "synergies" in every email we send. I tell the team: minimum 2 examples for single-line outputs, 4 examples for anything more complex than that, and 6 examples for industry tagging because that gets so odd. Save on costs by putting some real examples in your system prompt.
    3. Request the output in JSON. It keeps everything more uniform, in the format you need.
    4. Speaking of JSON, ask the API to prove to you why it thinks what it thinks, and then output the answer. Especially for company category tagging, I find this works really well. I see this greatly increase the accuracy of our results for two reasons. First, I think if the AI has to take the extra second to prove to you why a company is an ecommerce brand, the results are demonstrably better. Second, this is just a guess, but because LLMs basically work by guessing what the next best word is, if you have it explain why it thinks something is a certain industry before it gives the output, I think the answer is much more likely to be correct.

    Anything else you've found?
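    The API tips above (few-shot examples in the system prompt, JSON output, and asking the model to justify its answer before giving it) can be sketched as a message builder. This is a minimal illustration, not the author's actual setup; the function name, field names, and example companies are all made up for the sketch.

    ```python
    # Sketch of the post's API tips: few-shot examples in the system prompt,
    # JSON-only output, and a "reasoning" field the model must fill in
    # *before* the final category. All names here are illustrative.

    import json

    def build_tagging_messages(company_description, examples):
        """Build a chat message list for industry tagging.

        `examples` is a list of (description, reasoning, category) tuples
        that become few-shot demonstrations in the system prompt.
        """
        shots = "\n\n".join(
            "Company: {}\nOutput: {}".format(
                desc,
                json.dumps({"reasoning": why, "category": cat}),
            )
            for desc, why, cat in examples
        )
        system = (
            "You tag companies with an industry category. "
            "Respond with JSON only, in the form "
            '{"reasoning": "...", "category": "..."}. '
            "Write the reasoning first, then the category.\n\n"
            "Examples:\n\n" + shots
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": "Company: " + company_description},
        ]

    messages = build_tagging_messages(
        "Sells handmade candles through a Shopify store.",
        examples=[
            ("Online store for running shoes.",
             "Sells physical goods directly to consumers online.",
             "ecommerce"),
            ("Hosts payroll software for small businesses.",
             "Recurring subscription software, not physical goods.",
             "saas"),
        ],
    )
    ```

    With a real API call, this message list would be passed to a chat-completions request (with a JSON response format if available); note the "reasoning" key comes before "category" so the model explains itself before committing to an answer, matching tip 4.
    
    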

  • The most underrated skill for 2025? (Not code. Not ads. Not funnels.) It's knowing how to talk to AI. Seriously. Prompt writing is becoming the new leverage skill. And no one's teaching it right, until now. I've built AI workflows for content, marketing, and growth. They save me 10+ hours/week and cut down on team overhead. The key? 👉 It's not just asking ChatGPT questions. It's knowing how to structure your prompts. Here are 4 frameworks I use to get 🔥 outputs in minutes:

    1. R-T-F → Role → Task → Format. "Act as a copywriter. Write an Instagram ad script. Format it as a conversation."
    2. T-A-G → Task → Action → Goal. "Review my website copy. Suggest changes. Goal: Boost conversion by 15%."
    3. B-A-B → Before → After → Bridge. "Traffic is low. I want 10k monthly visitors. Give me a 90-day SEO plan."
    4. C-A-R-E → Context → Action → Result → Example. "We're launching a podcast. Write a guest outreach email. Goal: Book 10 experts."

    You're not just prompting. You're building AI systems. Mastering this skill will: ✅ 10x your productivity ✅ Reduce dependency on agencies ✅ Help you scale solo (or with a lean team). The AI era belongs to the strategic communicators. Learn how to prompt, and you won't need to hire half as much. 📌 Save this post. 🔁 Repost if you believe AI is a partner, not a replacement. #ChatGPT #PromptEngineering
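    The four frameworks above are essentially fill-in-the-blank templates, which makes them easy to encode. This is a minimal sketch assuming simple string templates; the framework names come from the post, but the helper function and field names are invented for illustration.

    ```python
    # The post's four prompt frameworks as fill-in-the-blank templates.
    # Framework names are from the post; everything else is illustrative.

    FRAMEWORKS = {
        "RTF": "Act as {role}. {task} Format: {format}.",
        "TAG": "Task: {task} Action: {action} Goal: {goal}",
        "BAB": "Before: {before} After: {after} Bridge: {bridge}",
        "CARE": "Context: {context} Action: {action} "
                "Result: {result} Example: {example}",
    }

    def build_prompt(framework, **fields):
        """Fill one of the framework templates with concrete fields."""
        return FRAMEWORKS[framework].format(**fields)

    prompt = build_prompt(
        "RTF",
        role="a copywriter",
        task="Write an Instagram ad script.",
        format="a conversation",
    )
    # prompt == "Act as a copywriter. Write an Instagram ad script. Format: a conversation."
    ```

    Templating like this keeps prompts consistent across a team: the structure is fixed, and only the concrete role, task, and goal vary per use.
    
    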

  • Arielle Gross Samuels

    CMO & CCO at General Catalyst | Ex-Blackstone, Meta, Deloitte | Forbes Top 50 CMO & 30 under 30

    8,875 followers

    In a world where access to powerful AI is increasingly democratized, the differentiator won't be who has AI, but who knows how to direct it. The ability to ask the right question, frame the contextual scenario, or steer the AI in a nuanced direction is a critical skill that's strategic, creative, and, ironically, human.

    My engineering education taught me to optimize systems with known variables and predictable theorems. But working with AI requires a fundamentally different cognitive skill: optimizing for unknown possibilities. We're not just giving instructions anymore; we're co-creating with an intelligence that can unlock potential.

    What separates AI power users from everyone else is that they've learned to think in questions they've never asked before. Most people use AI like a better search engine or a faster typist. They ask for what they already know they want. But the real leverage comes from using AI to challenge your assumptions, synthesize across domains you'd never connect, and surface insights that weren't on your original agenda. Consider the difference between these approaches:

    - "Write a marketing plan for our product" (optimizing for known variables)
    - "I'm seeing unexpected churn in our enterprise segment. Act as a customer success strategist, behavioral economist, and product analyst. What are three non-obvious reasons this might be happening that our internal team would miss?" (optimizing for unknown possibilities)

    The second approach doesn't just get you better output; it gets you output that can shift your entire strategic direction. AI needs inputs that are specific rather than vague, provide context, guide output formats, and expand our thinking. This isn't just about prompt engineering; it's about developing collaborative intelligence: the ability to use AI not as a tool, but as a thinking partner that expands your cognitive range. The companies and people who master this won't just have AI working for them. They'll have AI thinking with them in ways that make them fundamentally more capable than their competition. What are your pro-tips for effective AI prompts? #AppliedAI #CollaborativeIntelligence #FutureofWork

  • Tim Valicenti

    AI/ML Faculty @ MIT

    2,965 followers

    On Friday we had our first fully hands-on class for Day 5 of Generative AI for Managers: Hands-On Prompt Engineering. During the class, students got their feet wet over the course of 3 modules:

    1. Prompt injections: how your commercial chatbot will get fooled
    2. Prompt tools: how to analyze financial statements in under a minute
    3. Prompt tutoring: how to use prompting to learn new skills quickly

    In Module 2, many students had trouble getting ChatGPT to analyze Apple's 10-K by calculating iPhone growth and the company's overall financial health. The AI would often just say "I can't complete this request." This happened whenever the student asked for everything in one go. 🔨 One fix was to break the ask down into baby steps for the AI:

    • First, just summarize the 10-K to ensure ChatGPT processed it correctly and has the information saved in the context window (or RAG pipeline, if used)
    • Then ask for a simple calculation of iPhone CAGR
    • Finally, end with a full F-Score analysis to assess Apple's financial position (and even here we broke the 9 components of the F-Score into 3 batches of 3)

    💡 As you get more sophisticated in prompting (and, more realistically, as the LLMs continue to improve), this analysis will become more streamlined. But when starting out, it's often helpful to treat the AI as you would a new hire: give piecemeal tasks gradually instead of overloading it with a mountain of responsibilities. Just like a new hire, LLMs that are overloaded with tasks often give up or fail (see attached video) ⤵ Any prompt experts out there: what are your favorite prompts? Comment below!
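    The "baby steps" fix above amounts to running sub-tasks sequentially and keeping each answer in the conversation so later steps can build on earlier ones. Here is a minimal sketch of that loop; `run_in_steps`, `echo_ask`, and the step wording are all illustrative, and `ask` stands in for a real model call.

    ```python
    # Sketch of the "baby steps" approach: run sub-tasks one at a time,
    # feeding each answer back into the history so later steps can build
    # on earlier ones. `ask` is any callable taking a message list and
    # returning the assistant's reply (a real API call in practice).

    def run_in_steps(document, steps, ask):
        """Run `steps` sequentially over `document`, keeping history."""
        messages = [
            {"role": "system",
             "content": "Work on the document below, one task at a time."},
            {"role": "user", "content": document},
        ]
        answers = []
        for step in steps:
            messages.append({"role": "user", "content": step})
            reply = ask(messages)
            messages.append({"role": "assistant", "content": reply})
            answers.append(reply)
        return answers

    # The 10-K exercise from the class, expressed as three small asks:
    steps = [
        "Summarize this 10-K.",
        "Calculate iPhone revenue CAGR.",
        "Score the first 3 components of the Piotroski F-Score.",
    ]

    def echo_ask(messages):
        # Stand-in for a real model call; echoes the last request.
        return "answered: " + messages[-1]["content"]

    replies = run_in_steps("(10-K text here)", steps, echo_ask)
    ```

    Because the full history is resent each turn, the summary produced in step 1 is available to the CAGR and F-Score steps, which is exactly why the piecemeal approach succeeds where the all-at-once prompt failed.
    
    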

  • Conner Ardman

    Creator of FrontendExpert @ AlgoExpert | Ex-Facebook Software Engineer | 100k+ on YouTube | Professional Yapper

    9,873 followers

    I was wrong about "prompt engineering." I thought it was silly. What's important is just the model, right? Wrong. With well-formulated prompts you can:

    1. Prevent the AI from being overly agreeable
    2. Minimize hallucinations
    3. Increase maintainability, performance, and scalability
    4. Provide boundaries for what the AI can and can't do
    5. Control the formatting, tone, and style of responses
    6. Prevent bugs and vulnerabilities
    7. So much more…

    Here's one of my favorites from my most recent video (link in comments). It's a great way to get a high-quality code review from AI:

    "Review this function as if you are a senior engineer. Specifically look for the following, and provide a list of potential improvements with reasoning for those improvements:
    1. Logical mistakes that could cause errors.
    2. Unaccounted-for edge cases.
    3. Poor or inconsistent naming conventions and styling that would make the code hard to understand.
    4. Performance optimizations.
    5. Security vulnerabilities or concerns to consider.
    6. Ambiguous or hard-to-understand code that requires documentation.
    7. Debugging code that should be removed before pushing to production.
    8. Any other ways to improve the code quality, readability, performance, security, scalability, or maintainability.
    Expected behavior: …
    Code: …"
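    A review prompt like the one above is most useful when the checklist stays identical across reviews, which suggests wrapping it in a small builder. The checklist items below are quoted from the post; the function itself and its parameters are illustrative, not the author's code.

    ```python
    # Wrap the post's code-review prompt in a builder so the checklist
    # stays consistent across reviews. Checklist items are quoted from
    # the post; the builder itself is illustrative.

    CHECKLIST = [
        "Logical mistakes that could cause errors.",
        "Unaccounted-for edge cases.",
        "Poor or inconsistent naming conventions and styling that would "
        "make the code hard to understand.",
        "Performance optimizations.",
        "Security vulnerabilities or concerns to consider.",
        "Ambiguous or hard-to-understand code that requires documentation.",
        "Debugging code that should be removed before pushing to production.",
        "Any other ways to improve the code quality, readability, "
        "performance, security, scalability, or maintainability.",
    ]

    def build_review_prompt(code, expected_behavior):
        """Assemble the senior-engineer review prompt for one function."""
        items = "\n".join(
            "%d. %s" % (i, item) for i, item in enumerate(CHECKLIST, 1)
        )
        return (
            "Review this function as if you are a senior engineer. "
            "Specifically look for the following, and provide a list of "
            "potential improvements with reasoning for those improvements:\n"
            + items
            + "\n\nExpected behavior: " + expected_behavior
            + "\n\nCode:\n" + code
        )
    ```

    Keeping the checklist in one place means every review run hits the same eight categories, and new concerns can be added once rather than pasted into every prompt.
    
    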
