How to Improve AI Responses with Structured Prompts
Explore top LinkedIn content from expert professionals.
Summary
Structured prompting improves AI interactions by giving the model clear roles, tasks, and formatting instructions so it can deliver more accurate and relevant responses. By treating AI like a collaborative partner, users can guide it toward meaningful, high-quality results while saving time and effort.
- Define clear roles: Specify the role you want the AI to play (e.g., "Act as a financial analyst") to focus its responses and achieve tailored results.
- Give examples: Provide specific examples or desired output formats to help the AI understand your expectations and deliver consistent results.
- Break down tasks: Divide complex requests into smaller, manageable steps to improve the AI's accuracy and avoid overwhelming it.
-
There's a science to writing research prompts that actually work. Stop winging it and start using proven methodology.

I used to get wildly inconsistent results from AI research until I realized I was approaching it completely wrong. My breakthrough came when I stopped treating AI like Google and started treating it like hiring a 180-IQ analyst who is incredibly smart but doesn't know exactly what you need. You wouldn't just tell an analyst to "research Tesla." You'd give them context, constraints, and clear deliverables.

I started assigning specific roles instead of asking generic questions. When I tell ChatGPT to be "a shrewd equity analyst with 15 years of tech experience," the quality and tone of the analysis jump dramatically.

I learned to package context upfront. Instead of hoping the AI would guess what I needed, I'd specify my audience (institutional investors vs. retail), tone (formal vs. conversational), exact length requirements, attachments, etc.

The game-changer was building workflows instead of one-shot prompts. I now use a multi-step prompting process: gap check, outline approval, first draft, feedback round, revision, and final sign-off. It sounds like overkill, but it saves hours on net. Try it out and let me know what you think.

PS: PDF version of this doc here https://lnkd.in/eSiGURnv
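Purely as an illustration (not the author's actual setup), here is a minimal Python sketch of that staged workflow using the OpenAI SDK; the model name, role text, and stage prompts are assumptions:

```python
# Hypothetical sketch of the gap check -> outline -> draft -> feedback
# workflow described above. Prompts and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLE = ("You are a shrewd equity analyst with 15 years of tech experience. "
        "Audience: institutional investors. Tone: formal. Length: about 800 words.")

history = [{"role": "system", "content": ROLE}]

def step(prompt: str) -> str:
    """Run one stage of the workflow, carrying prior turns as context."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

# Gap check -> outline approval -> first draft -> feedback round -> revision
print(step("Before writing anything, list the context you're missing as questions."))
print(step("Answers: [fill in]. Now propose an outline and wait for approval."))
print(step("Outline approved. Write the first draft."))
print(step("Feedback: [fill in]. Revise the draft and flag what you changed."))
```
-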
The most underrated skill for 2025? (Not code. Not ads. Not funnels.) It's knowing how to talk to AI. Seriously. Prompt writing is becoming the new leverage skill, and almost no one is teaching it well. I've built AI workflows for content, marketing, and growth. They save me 10+ hours/week and cut down on team overhead. The key? 👉 It's not just asking ChatGPT questions. It's knowing how to structure your prompts with frameworks like these.

Here are 4 frameworks I use to get 🔥 outputs in minutes:

1. R-T-F → Role → Task → Format
"Act as a copywriter. Write an Instagram ad script. Format it as a conversation."

2. T-A-G → Task → Action → Goal
"Review my website copy. Suggest changes. Goal: Boost conversion by 15%."

3. B-A-B → Before → After → Bridge
"Traffic is low. I want 10k monthly visitors. Give me a 90-day SEO plan."

4. C-A-R-E → Context → Action → Result → Example
"We're launching a podcast. Write a guest outreach email. Goal: Book 10 experts."

You're not just prompting. You're building AI systems. Mastering this skill will:
✅ 10x your productivity
✅ Reduce dependency on agencies
✅ Help you scale solo (or with a lean team)

The AI era belongs to the strategic communicators. Learn how to prompt, and you won't need to hire half as much. 📌 Save this post. 🔁 Repost if you believe AI is a partner, not a replacement. #ChatGPT #PromptEngineering
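To show how a framework like R-T-F can become a reusable building block rather than a one-off prompt, here is a tiny hypothetical helper; the function name and example values are illustrative, not from the post:

```python
# Hypothetical helper that renders the R-T-F (Role -> Task -> Format)
# framework into a prompt string; names and values are illustrative.
def rtf_prompt(role: str, task: str, fmt: str) -> str:
    return f"Act as {role}. {task} Format it as {fmt}."

print(rtf_prompt("a copywriter",
                 "Write an Instagram ad script.",
                 "a conversation"))
# -> Act as a copywriter. Write an Instagram ad script. Format it as a conversation.
```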
-
Prompting tips from someone who spends $13k+ per month on OpenAI API calls. I'll break the tips into ChatGPT user-interface tips as well as API tips. My bias is of course going to be toward outbound sales and cold email, because this is where we spend, and 100% of this spend is on 4o-mini API calls.

ChatGPT prompting tips:

1. Use transcription as much as possible, straight in the UI, or use whisprflow(dot)ai (can't tag them for some reason). I personally get frustrated with a prompt when I'm typing it out vs. talking, and I can add so much more detail by voice.

2. Got this one from Yash Tekriwal 🤔: when you're working on something complex, like a deep research request, something you want o3 to run, or analyzing a lot of data, ask ChatGPT to give you any follow-up questions it might have before it runs fully. Helps you increase your prompt accuracy like crazy.

3. I've found that o3 is pretty good at building simple automations in Make as well, so we will ask it to output what we want in a format that we can input into Make. Often we can build automations just by explaining what we need and then plugging in our logins in Make.

API prompting tips:

1. Throwing back to the ChatGPT UI: we will often create our complex prompts in the user interface first and then bring them into the API via Clay, asking ChatGPT along the way how to improve the prompt and help us think of edge cases. This can take any team member to a prompting pro immediately.

2. Examples are your best friend. Giving examples of what you want the output to be is how we get our outputs into a consistent format and avoid putting "synergies" in every email we send. I tell the team: minimum 2 examples for single-line outputs, 4 examples for anything more complex than that, and 6 examples for industry tagging because that gets so odd. Save on costs by putting some real examples in your system prompt.

3. Request the output in JSON. It keeps everything more uniform, in the format you need.

4. Speaking of JSON, ask the API to prove to you why it thinks what it thinks and then output the answer. Especially for company category tagging, I find this works really well. I see this greatly increase the accuracy of our results for 2 reasons. First, if the AI has to take the extra second to prove why a company is an ecommerce brand, the results are demonstrably better. Second (and this is just a guess), because LLMs basically work by predicting the next most likely word, if you have the model explain why it thinks something is a certain industry before it gives the output, the final answer is much more likely to be correct.

Anything else you've found?
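A hedged sketch of what API tips 2-4 can look like in practice with the OpenAI Python SDK; the model, JSON field names, and example companies are assumptions rather than the author's actual setup:

```python
# Sketch of: few-shot examples in the system prompt, JSON output, and
# "prove it, then answer" ordering. All specifics are illustrative.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = """You tag companies by industry.
First explain WHY the company fits a category, then give the answer.
Respond in JSON: {"reasoning": "...", "industry": "..."}

Example 1: "Glossier sells skincare direct to consumers online"
-> {"reasoning": "Sells physical goods via its own web store.", "industry": "ecommerce"}
Example 2: "Datadog provides cloud monitoring software"
-> {"reasoning": "Subscription software, no physical product.", "industry": "saas"}"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # forces valid JSON back
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Allbirds makes and sells wool sneakers online."},
    ],
)
# "reasoning" comes before "industry" so the model justifies first (tip 4)
print(json.loads(resp.choices[0].message.content)["industry"])  # e.g. "ecommerce"
```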
-
Building with AI = Failing A Lot

Take writing custom GPT instructions. I've probably written 50+ at this point, and for the most part they have all followed a similar markdown pattern I learned over a year ago from Rachel Woods. But models evolve, and what worked yesterday might not work as well today. When GPT-4o came out a few weeks ago, I definitely noticed some of my custom GPTs acting weird. Some got better. Some got worse. Some straight up broke. So I went back to the lab and started to rebuild. Lots of failures later, I now have a new custom GPT prompt structure that still has the bones of my older markdown method but incorporates a lot of OpenAI's recent guidelines. And now I have GPTs performing better than ever. You can check out the full article, but here are some good guidelines for any prompt writing:

✔ Simplify Complex Instructions: break larger steps down
✔ Structure for Clarity: use delimiters and examples
✔ Promote Attention to Detail: encourage the model to focus on certain areas of the prompt
✔ Avoid Negative Instructions: frame instructions positively
✔ Granular Steps: break down steps as granularly as possible
✔ Consistency and Clarity: be explicit with terms, and define what you want with examples
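For illustration only, here is what a markdown-structured instruction block along these lines might look like; the role, steps, and rules are invented placeholders, not the author's actual template:

```python
# Hypothetical custom-GPT instruction skeleton reflecting the guidelines
# above: markdown headers for structure, ### delimiters around input,
# positively framed rules, and granular numbered steps.
INSTRUCTIONS = """\
# Role
You are a meticulous B2B case-study writer.

# Steps
1. Read the interview transcript between the ### delimiters.
2. Extract the customer's problem, the solution, and one key metric.
3. Draft a 300-word case study using only facts from the transcript.

# Rules
- Quote metrics exactly as written; pay close attention to units.
- Write in active voice.

# Input
###
{transcript}
###
"""
```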
-
Even if your AI prompts are performing well, you should still keep iterating on and improving them as new prompting research emerges and you gain more prompting expertise.

I got a great reminder of this after a live training session a couple of weeks ago. Scott Brown emailed me the next day to share an experiment he did. He applied two prompting best practices he learned in the live training to MY long-standing voice training prompt that I shared with the group. This prompt analyzes your writing to understand its unique voice, style, and tone, and then writes a prompt snippet to help AI emulate this style when it writes the same kind of copy for you.

I started iterating on this prompt about a year and a half ago. It's been improved several times, but I have only thought to improve it when it stops performing due to what's called prompt drift: when the AI stops responding as you expect due to model updates. (In my mind, I was thinking, "If it's not broke, you don't need to fix it.")

Scott's improvements (based on what he learned in our training) included adding:
👉 Instructing the AI to take its time and think step by step
👉 Embedding an emotional motivation to stress the importance of the task

These additions significantly improved the prompt's performance, even though the original prompt had served me (and my clients!) well for a long time. Scott's initiative highlighted a blind spot I had around remembering to update my own prompts even when they work! I did a bunch of A/B testing adding my own prompt updates, and oh WOW, it made a gigantic difference. The version I use has definitely now been updated with these two elements! (If you're enrolled in my Foundations of Generative AI for B2B Marketing course, the prompt document in the "Teaching the AI to Write Like You" chapter has been updated, too!)

Scott caught my own blind spot about ALWAYS going back and updating old prompts that still work with new prompt understanding. This experience definitely underscores the importance of revisiting and improving even your good prompts regularly. I love when the student becomes the teacher. I love that he learned these strategies from me and then applied them immediately!

If you have a copy of the voice training prompt and are not taking my course (where you can just download the updated prompt), I highly recommend adding these two prompt elements to significantly enhance its performance:
✔️ Instruct the AI to take a deep breath and think step by step.
✔️ Provide emotional motivation to do a great job on the task.

Thanks again for the reminder, Scott!

----

Want to learn more about how to get the most out of generative AI for marketing? Definitely hit that + Follow button, because I post a ton of content like this on here! And if you want to go really deep, there's a 🔗 to my Foundations of Generative AI for B2B Marketing Course in my bio.
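If you want to bolt these two elements onto an existing prompt, a minimal sketch might look like the following; the wording is a paraphrase of the ideas, not the author's actual snippet:

```python
# The two additions described above, appended to an existing prompt.
# Both the base prompt and the added wording are hypothetical.
base_prompt = "Analyze the writing sample below and describe its voice, style, and tone."

improved_prompt = base_prompt + (
    "\n\nTake a deep breath and work through this step by step."      # addition 1
    "\nThis analysis will guide all of my future copy, so doing a"    # addition 2
    " thorough, careful job here really matters."
)
print(improved_prompt)
```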
-
Wouldn't it be great if ChatGPT came with an instruction manual? After thousands of interactions with the tool, here's the most useful prompt format I've found, and it's more ART than science.

What's the ART prompt?
Ask for Role
Request Task
Type of Format

And here's a sample of how you can use it:
Ask for Role: You are a legal analyst.
Request Task: Evaluate the potential risks of a new client contract.
Type of Format: A summary paragraph with key points highlighted.

When you identify those three things, you can customize it like this: "You are a legal analyst. Evaluate the potential risks associated with entering into a contract with a new client who operates in multiple international jurisdictions. Provide a summary paragraph with key risks highlighted."

Why does the ART prompt work so well for general ideas?
1.) It provides a context from which the AI can operate. It's like putting blinders on a horse: only look this direction and ignore everything else.
2.) A well-defined, specific task coupled with context makes it very easy for the AI to understand what you'd like it to accomplish.
3.) When you give the AI a desired style of output (summary paragraph, bullet points, a conversation between two people, etc.), it knows how to deliver the task requirements to you.

Have you tried using prompt formats like this one? What have you found to be most useful? #ai #chatgpt
-
On Friday we had our first fully hands-on class for Day 5 of 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈 𝐟𝐨𝐫 𝐌𝐚𝐧𝐚𝐠𝐞𝐫𝐬: 𝘏𝘢𝘯𝘥𝘴-𝘖𝘯 𝘗𝘳𝘰𝘮𝘱𝘵 𝘌𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘪𝘯𝘨. During the class, students got their feet wet over the course of 3 modules:

1. 𝐏𝐫𝐨𝐦𝐩𝐭 𝐢𝐧𝐣𝐞𝐜𝐭𝐢𝐨𝐧𝐬: how your commercial chatbot will get fooled
2. 𝐏𝐫𝐨𝐦𝐩𝐭 𝐭𝐨𝐨𝐥𝐬: how to analyze financial statements in under a minute
3. 𝐏𝐫𝐨𝐦𝐩𝐭 𝐭𝐮𝐭𝐨𝐫𝐢𝐧𝐠: how to use prompting to learn new skills quickly

In Module 2, many students had trouble getting ChatGPT to analyze Apple's 10-K by calculating iPhone growth and the overall financial health of the company. The AI would often just say "I can't complete this request." This happened whenever the student asked for everything in one go.

🔨 One fix we applied was to break the ask down into baby steps for the AI:
• First, just summarize the 10-K to ensure ChatGPT processed it correctly and has the information saved in the context window (or RAG pipeline, if used)
• Then ask for a simple calculation of iPhone CAGR
• Finally, end with a full F-Score analysis to assess Apple's financial position (and even here we broke the 9 components of the F-Score into 3 batches of 3)

💡 As you get more sophisticated in prompting (and, more realistically, as LLMs continue to improve), this analysis will become more streamlined. But when starting out, it's often helpful to treat the AI as you would a new hire: give piecemeal tasks gradually instead of overloading them with a mountain of responsibilities. Just like a new hire, an LLM that is overloaded with tasks will often give up or fail (see attached video) ⤵

Any prompt experts out there: what are your favorite prompts? Comment below!
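As a rough sketch of that baby-steps pattern against the OpenAI API; the prompts, model, and file path are hypothetical, and the batching follows the 3x3 split described above:

```python
# Baby steps: summarize first, then one calculation, then the F-Score
# in three small batches, all in one running conversation.
from openai import OpenAI

client = OpenAI()
ten_k_text = open("apple_10k.txt").read()  # hypothetical local copy of the filing

messages = [
    {"role": "system", "content": "You are a careful financial analyst."},
    {"role": "user", "content": f"Here is Apple's 10-K:\n{ten_k_text}"},
]

steps = [
    "First, just summarize this 10-K in 10 bullet points.",
    "Using the revenue figures above, calculate the iPhone revenue CAGR.",
    "Score Piotroski F-Score components 1-3, with brief reasoning for each.",
    "Score components 4-6, with brief reasoning for each.",
    "Score components 7-9, then total all 9 components.",
]
for ask in steps:
    messages.append({"role": "user", "content": ask})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context
    print(answer, "\n---")
```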
-
I was wrong about “prompt engineering.” I thought it was silly. What’s important is just the model, right? Wrong. With well-formulated prompts you can:

1. Prevent the AI from being overly agreeable
2. Minimize hallucinations
3. Increase maintainability, performance, and scalability
4. Provide boundaries for what the AI can and can’t do
5. Control the formatting, tone, and style of responses
6. Prevent bugs and vulnerabilities
7. So much more…

Here’s one of my favorites from my most recent video (link in comments). It’s a great way to get a high-quality code review from AI:

Review this function as if you are a senior engineer. Specifically look for the following, and provide a list of potential improvements with reasoning for those improvements:
1. Logical mistakes that could cause errors.
2. Unaccounted-for edge cases.
3. Poor or inconsistent naming conventions and styling that would make the code hard to understand.
4. Performance optimizations.
5. Security vulnerabilities or concerns to consider.
6. Ambiguous or hard-to-understand code that requires documentation.
7. Debugging code that should be removed before pushing to production.
8. Any other ways to improve the code quality, readability, performance, security, scalability, or maintainability.

Expected behavior: …

Code: …
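One hypothetical way to turn that prompt into a reusable function; the wrapper, model choice, and file path are assumptions, while the prompt text follows the post (middle items elided for brevity):

```python
# Hypothetical wrapper around the review prompt above. Only the
# plumbing is mine; the prompt wording comes from the post.
from openai import OpenAI

client = OpenAI()

REVIEW_PROMPT = """Review this function as if you are a senior engineer.
Specifically look for the following, and provide a list of potential
improvements with reasoning for those improvements:
1. Logical mistakes that could cause errors.
... (items 2-7 as listed in the post above) ...
8. Any other ways to improve the code quality, readability, performance,
security, scalability, or maintainability.

Expected behavior: {expected}

Code:
{code}"""

def review(code: str, expected: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": REVIEW_PROMPT.format(expected=expected, code=code)}],
    )
    return resp.choices[0].message.content

print(review(code=open("checkout.py").read(),  # hypothetical file
             expected="Applies one discount code per order; totals never go negative."))
```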
-
In a world where access to powerful AI is increasingly democratized, the differentiator won’t be who has AI, but who knows how to direct it. The ability to ask the right question, frame the contextual scenario, or steer the AI in a nuanced direction is a critical skill that’s strategic, creative, and, ironically, human.

My engineering education taught me to optimize systems with known variables and predictable theorems. But working with AI requires a fundamentally different cognitive skill: optimizing for unknown possibilities. We're not just giving instructions anymore; we're co-creating with an intelligence that can unlock potential.

What separates AI power users from everyone else is that they've learned to think in questions they've never asked before. Most people use AI like a better search engine or a faster typist. They ask for what they already know they want. But the real leverage comes from using AI to challenge your assumptions, synthesize across domains you'd never connect, and surface insights that weren't on your original agenda.

Consider the difference between these approaches:
- "Write a marketing plan for our product" (optimization for known variables)
- "I'm seeing unexpected churn in our enterprise segment. Act as a customer success strategist, behavioral economist, and product analyst. What are three non-obvious reasons this might be happening that our internal team would miss?" (optimization for unknown possibilities)

The second approach doesn't just get you better output; it gets you output that can shift your entire strategic direction. AI needs inputs that are specific rather than vague, that provide context, that guide the output format, and that expand your thinking. This isn't just about prompt engineering; it's about developing collaborative intelligence: the ability to use AI not as a tool, but as a thinking partner that expands your cognitive range.

The companies and people who master this won't just have AI working for them. They'll have AI thinking with them in ways that make them fundamentally more capable than their competition.

What are your pro-tips for effective AI prompts? #AppliedAI #CollaborativeIntelligence #FutureofWork