🚀 Chain of Thought (CoT), Tree of Thought, ReAct, Contrastive Chain of Thought, Thread of Thought, and EmotionPrompt are all strategies for prompting language models, as described in various research papers. With new strategies being added regularly, the question arises: should you implement any of these methods, and if so, where do you start? Full article: https://lnkd.in/ecK2kDam

Here's one way to think about the decision-making process:

1️⃣ **Task Requirements**: Determine whether your task requires planning and multi-step reasoning. For example, extracting detailed information from the Titanic movie script (e.g., the total number of people Jack interacted with, sorted by profession) may require a more advanced prompting strategy (e.g., classic CoT with planning examples). On the other hand, outputting a JSON-formatted list of entities from some text can be achieved with a detailed prompt alone. Smaller models like LLaMA 13B may need more careful prompting than larger models like GPT-4 Turbo. If you need more control over model behavior, consider fine-tuning your smaller model.

2️⃣ **Use Few-Shot Examples**: If your use case requires reasoning steps, carefully craft few-shot examples (Chain of Thought) that encode your domain expertise. If necessary, assemble a dataset of these examples and retrieve a ranked, relevant subset dynamically at inference time (a minimal retrieval sketch follows this post).

3️⃣ **Costs Matter**: Consider whether your final application is latency- or compute-constrained. Methods like Tree of Thought and ReAct often require multiple LLM inference calls, which can be cost-prohibitive in terms of both latency and compute. This consideration is even more critical for real-time apps deployed at scale.

In conclusion, many recent techniques, while insightful, may offer only incremental improvements and may not generalize well beyond the tasks discussed in the papers. For example, Contrastive CoT adds positive and negative few-shot examples; Thread of Thought suggests adding "walk me through this context"; and EmotionPrompt proposes adding personal stakes, like "this is essential to my career." The best approach may be to start by refining the details in your prompt, asking the model to show its work and think step by step, and crafting high-quality few-shot examples. This way, you can prioritize your resources effectively and make the best use of the available strategies.

#AI #NLP #LanguageModels #AIstrategies #FewShotLearning
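The dynamic few-shot retrieval mentioned in point 2️⃣ can be as simple as ranking a small bank of worked examples against the incoming query and prepending the best matches to the prompt. Below is a minimal, self-contained Python sketch; the example bank, the token-overlap ranking, and the prompt layout are illustrative assumptions rather than anything from the post (a production system would more likely rank by embedding similarity).

```python
# Minimal sketch: rank stored Chain-of-Thought examples against a query and
# build a few-shot prompt from the top matches. The data and the crude
# token-overlap score stand in for a real embedding-based retriever.

EXAMPLE_BANK = [  # (question, worked solution) pairs that encode domain expertise
    ("How many characters does Jack speak with in Act 1?",
     "Plan: list Act 1 scenes -> collect Jack's dialogue partners -> deduplicate -> count."),
    ("List all entities in the paragraph as JSON.",
     "Plan: identify people, places, organizations -> emit a JSON array of objects."),
    ("Sort Jack's contacts by profession.",
     "Plan: collect contacts -> look up each profession -> group and sort alphabetically."),
]

def overlap_score(query: str, example_question: str) -> float:
    """Crude relevance score: fraction of query tokens shared with the example question."""
    q_tokens = set(query.lower().split())
    e_tokens = set(example_question.lower().split())
    return len(q_tokens & e_tokens) / max(len(q_tokens), 1)

def build_cot_prompt(query: str, k: int = 2) -> str:
    """Pick the k most relevant worked examples and prepend them to the query."""
    ranked = sorted(EXAMPLE_BANK, key=lambda ex: overlap_score(query, ex[0]), reverse=True)
    shots = "\n\n".join(f"Q: {q}\nA (think step by step): {a}" for q, a in ranked[:k])
    return f"{shots}\n\nQ: {query}\nA (think step by step):"

if __name__ == "__main__":
    print(build_cot_prompt("How many people did Jack interact with, sorted by profession?"))
```

Swapping `overlap_score` for cosine similarity over embeddings keeps the same structure while scaling to larger example banks.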
Strategies for Automatic Prompt Generation
Explore top LinkedIn content from expert professionals.
Summary
Strategies for automatic prompt generation are methods used to craft better queries for AI language models, helping them deliver clearer, more accurate, and context-specific responses. These strategies range from creating detailed examples to iteratively refining prompts for complex tasks.
- Focus on task clarity: Understand your goal and align the prompt with the level of complexity needed, such as providing step-by-step instructions for multi-step tasks or simpler phrasing for straightforward outputs.
- Use iterative refinement: Interact with the language model by asking it for feedback on your prompt or revisions to improve accuracy and contextual understanding (see the sketch after this list).
- Incorporate examples: Provide clear, relevant examples in your prompt to guide the model toward producing the desired format or style in its responses.
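One way to act on the iterative-refinement bullet above is to have the model critique and rewrite a draft prompt before you use it. The sketch below uses the OpenAI Python SDK; the model name, the critique instructions, and the example task are assumptions for illustration, not a prescribed recipe.

```python
# Sketch: ask the model to point out gaps in a draft prompt and return a revision.
# Assumes the OpenAI Python SDK with OPENAI_API_KEY set in the environment;
# "gpt-4o-mini" is an illustrative model choice.
from openai import OpenAI

client = OpenAI()

def refine_prompt(draft_prompt: str, task_description: str) -> str:
    """One refinement round: list ambiguities, then output an improved prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You review prompts. First list missing context or ambiguities, "
                        "then output an improved prompt after the line 'IMPROVED PROMPT:'."},
            {"role": "user",
             "content": f"Task: {task_description}\n\nDraft prompt:\n{draft_prompt}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(refine_prompt("Tag this company by industry.", "Classify companies for a sales CRM."))
```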
Prompting tips from someone who spends probably $13k+ per month on OpenAI API calls. I'll break the tips into ChatGPT user-interface tips and API tips. My bias is, of course, toward outbound sales and cold email, because this is where we spend, and 100% of this spend is on GPT-4o mini API calls.

ChatGPT prompting tips
1. Use transcription as much as possible. Straight in the UI or use whisprflow(dot)ai (can't tag them for some reason). I personally get frustrated with a prompt when I'm typing it out vs. talking, and I can add so much more detail.
2. Got this one from Yash Tekriwal 🤔 - When you're working on something complex, like a deep research request, something you want o3 to run, or analyzing a lot of data, ask ChatGPT to give you any follow-up questions it might have before it runs fully. This helps you increase your prompt accuracy like crazy.
3. I've found that o3 is pretty good at building simple automations in Make as well, so we will ask it to output what we want in a format that we can input into Make. Often we can build automations just by explaining what we need and then plugging in our logins in Make.

API prompting tips
1. Throwing back to the ChatGPT UI: we will often create our complex prompts in the user interface first and then bring them into the API via Clay, asking ChatGPT along the way how to improve the prompt and help us think of edge cases. This can turn any team member into a prompting pro immediately.
2. Examples are your best friend. Giving examples of what you want the output to be is how we get our outputs into the same format and avoid putting "synergies" in every email we send. I tell the team: minimum 2 examples for single-line outputs, 4 examples for anything more complex than that, and 6 examples for industry tagging because that gets so odd. Save on costs by putting some real examples in your system prompt.
3. Request the output in JSON. It keeps everything more uniform, in the format you need.
4. Speaking of JSON, ask the API to prove to you why it thinks what it thinks, and then output the answer. Especially for company category tagging, I find this works really well. I see this greatly increase the accuracy of our results for two reasons. First, if the AI has to take the extra second to prove to you why a company is an ecommerce brand, the results are demonstrably better. Second (this is just a guess), because LLMs basically work by predicting the next best word, if you have the model explain why it thinks something is a certain industry before it gives the output, the answer is much more likely to be correct. (A minimal sketch of this pattern follows the post.)

Anything else you've found?
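API tips 2-4 above (few-shot examples in the system prompt, JSON output, and reasoning before the answer) combine naturally into a single call. Here is a minimal sketch with the OpenAI Python SDK; the model name, category labels, and example companies are made up for illustration.

```python
# Sketch: company category tagging with few-shot examples in the system prompt,
# JSON output, and a "reasoning" field produced before the final "category".
# Assumes the OpenAI Python SDK; model name and labels are illustrative.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = """You tag companies by industry. Respond in JSON with keys
"reasoning" (why you chose the label) and "category" (one of: ecommerce, saas, services).

Example input: "Acme sells handmade candles through its online store."
Example output: {"reasoning": "Sells physical goods directly online.", "category": "ecommerce"}

Example input: "Globex offers a subscription analytics dashboard for retailers."
Example output: {"reasoning": "Recurring software subscription.", "category": "saas"}"""

def tag_company(description: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # keeps every output in uniform JSON
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(tag_company("Initech runs a marketing agency for dental practices."))
```

Putting the "reasoning" key before "category" mirrors the post's observation that making the model justify its label first tends to improve the final answer.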
Earlier this month, researchers from DeepMind published a paper outlining a new approach for optimizing prompts for LLMs (Optimization by PROmpting, or OPRO), with research showing that:

💥 LLMs can self-optimize to find the best prompts for successfully completing a task (demonstrated through iterative prompting and scoring, using two LLMs interacting with one another).
💥 Natural language prompts can be used to generate optimal prompts for a given task, particularly if the task is difficult to define mathematically.
💥 OPRO outperforms human-designed prompts by 8%-50%!

In other words, LLMs are better at developing natural language prompts for their own performance than humans are. (DeepMind tested the methodology on PaLM, GPT-3 and GShard, but the research suggests OPRO is transferable to other models.)

What does this mean for you? While OPRO is not accessible to users, there are learnings we can take from it:

- Try asking the LLM you're working with what prompt it suggests you should use for the particular problem you're trying to solve. It will be able to tell you what contextual information it needs and how to structure the prompt in a way that is more likely to produce a higher-quality response.
- Work iteratively - one prompt or instruction is rarely enough to get you to the desired outcome or to reach the highest-quality response possible. (A sketch of a simple OPRO-style loop follows this post.)

#LLMs #AI #GenAI #OPRO
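The "work iteratively" takeaway can be pushed into a rough OPRO-style loop: show the model its previous prompts with their scores and ask for a better one, then evaluate each candidate on a small labeled set. The sketch below is a simplified interpretation of that idea using the OpenAI Python SDK, not the paper's implementation; the model name, the tiny eval set, and the scoring rule are assumptions.

```python
# Rough OPRO-style loop (simplified interpretation, not the paper's code):
# 1) show the optimizer model past (prompt, score) pairs, 2) ask for a better prompt,
# 3) score the candidate on a tiny labeled set, 4) repeat. Model name is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"

EVAL_SET = [  # toy labeled examples; a real run needs a representative sample
    ("The service was slow and the staff was rude.", "negative"),
    ("Absolutely loved the atmosphere and the food.", "positive"),
]

def ask(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system}, {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content.strip()

def score_prompt(prompt: str) -> float:
    """Fraction of eval examples the candidate prompt classifies correctly."""
    hits = sum(label in ask(prompt, text).lower() for text, label in EVAL_SET)
    return hits / len(EVAL_SET)

seed = "Classify the sentiment of the review as positive or negative."
history = [(seed, score_prompt(seed))]

for _ in range(3):  # a few optimization rounds
    trajectory = "\n".join(f"score={s:.2f}: {p}" for p, s in history)
    candidate = ask(
        "You improve instructions for a sentiment classifier. Given previous "
        "instructions and their scores, write one better instruction. "
        "Output only the new instruction.",
        trajectory,
    )
    history.append((candidate, score_prompt(candidate)))

best_prompt, best_score = max(history, key=lambda ps: ps[1])
print(best_score, best_prompt)
```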