How to Use Step-by-Step Prompting in LLMs

Summary

Step-by-step prompting in large language models (LLMs) is a method of structuring instructions to guide AI through tasks logically and accurately, ensuring better responses by breaking down complex queries into manageable parts.

  • Start with clarity: Clearly define the task, desired output format, and tone to ensure the AI understands your expectations upfront.
  • Structure instructions logically: Use a sequence of steps, such as providing context, examples, and specific rules, to guide the AI through the process.
  • Test and refine: Treat prompting as an iterative process by testing outputs, identifying errors, and adjusting the instructions to achieve optimal results.
Summarized by AI based on LinkedIn member posts
  • Armand Ruiz

    building AI systems

    202,068 followers

    Anthropic’s “Prompting 101” is one of the best real-world tutorials I’ve seen lately on how to actually build a great prompt. Not a toy example: they showcase a real task, analyzing handwritten Swedish car accident forms. Here’s the breakdown:

    1. Prompting is iterative. You don’t write the perfect prompt on the first try. You test, observe, refine. Just like any other product loop.

    2. Structure matters. The best prompts follow a playbook:
    - Start with task + tone context
    - Load static knowledge into the system prompt
    - Give clear rules and step-by-step instructions
    - Show concrete examples
    - Ask the model to think step by step
    - Define structured output

    3. Don’t trust the model; design for it. In the first version, Claude hallucinated a skiing accident. Only after adding context, rules, and constraints did it produce reliable results. You wouldn’t let a junior analyst guess on regulatory filings. Don’t let your LLM do it either.

    4. The prompt is the interface. In traditional software, interfaces are buttons and APIs. In GenAI, the interface is language. Your prompt is the program. Most teams still treat prompts like notes in a playground; high-performing teams treat them like production code. That's why in our IBM watsonx platform, prompts are assets just like code or data.

    👉 Access the video tutorial here: https://lnkd.in/gUdHc2uy
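The playbook in point 2 can be sketched as plain string assembly. Everything below is illustrative, not Anthropic's actual prompt: `SYSTEM_PROMPT`, `build_user_prompt`, and the form details are hypothetical stand-ins showing how the sections (task + tone, rules, example, step-by-step instruction, structured output, input data) stack up.

```python
# Hypothetical sketch of the prompt playbook: each section of the
# playbook becomes one block of the assembled prompt string.

SYSTEM_PROMPT = """\
You are an insurance analyst reviewing Swedish car accident report forms.
Static knowledge: the form has numbered checkboxes; column A describes
vehicle A and column B describes vehicle B. Answer only from the form."""

def build_user_prompt(form_text: str) -> str:
    return "\n\n".join([
        # Task + tone context
        "Analyze the accident form below and assess fault. Be factual and cautious.",
        # Clear rules
        "Rules:\n- Only use checkboxes that are clearly marked.\n"
        "- If the form is ambiguous, say so instead of guessing.",
        # Concrete example
        "Example:\nForm: box 1 marked in column B\n"
        "Answer: Vehicle B was parked; fault likely lies with vehicle A.",
        # Ask for step-by-step reasoning, then structured output
        "First list each marked checkbox, then reason step by step, "
        'then give a verdict as JSON: {"fault": "A" | "B" | "unclear"}.',
        # Input data last
        "Form:\n" + form_text,
    ])
```

The system prompt carries the static knowledge so it is written once, while `build_user_prompt` is called per form; swap in whatever LLM client you use to send the two strings.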

  • Aadit Sheth

    The Narrative Company | Executive Narrative & Influence Strategy

    96,579 followers

    Anthropic dropped the best free masterclass on prompt engineering. Here’s what you’ll learn in 9 chapters:

    1. Structure better prompts
    → Always start with the intent: “Summarize this article in 5 bullet points for a beginner” is 10x better than “Summarize this.”
    → Use instruction-first phrasing; the model performs best when it knows exactly what you want upfront.

    2. Be clear + direct
    → Avoid open-ended ambiguity. Instead of “Tell me about success,” ask “List 3 traits successful startup founders share.”
    → Use active voice, fewer adjectives, and always define vague terms.

    3. Assign the right “role”
    → Start with “You are a [role]” to frame the model’s mindset. Example: “You are a skeptical investor evaluating a pitch.”
    → Roles unlock tone, precision, and even memory, especially in multi-turn chats.

    4. Think step by step (precondition prompts)
    → Ask the model to plan before it answers: “First, list your steps. Then, perform them one by one.”
    → This dramatically improves accuracy and reduces hallucinations in complex tasks.

    5. Avoid hallucinations
    → Anchor the model with clear boundaries: “Only answer if the input contains [x]. Otherwise, respond: ‘Insufficient data.’”
    → Reduce creativity in factual tasks. E.g., “Be concise. Don’t assume.”

    6. Build complex prompts (with reusable patterns)
    → Use modular blocks: context → instruction → format → examples.
    → Build a personal prompt library by saving and refining your best-performing prompts over time.

    It’s not just “how to prompt better.” It’s a full-on skill upgrade. Interactive. Structured. Free.
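Technique 5, the hallucination guard, is easy to make reusable. A minimal sketch, assuming nothing beyond string formatting: `grounded_prompt` is a hypothetical helper that wraps any question in the "only answer from context, otherwise refuse" pattern the post describes.

```python
# Hypothetical helper for the "avoid hallucinations" pattern: constrain
# the model to the supplied context and give it an explicit refusal path.

def grounded_prompt(question: str, context: str, required_term: str) -> str:
    return (
        "Answer the question using ONLY the context below.\n"
        f"If the context does not mention '{required_term}', "
        "respond exactly: 'Insufficient data.'\n"
        "Be concise. Do not assume facts not in the context.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Example: the model is told to refuse rather than invent a refund policy.
prompt = grounded_prompt(
    question="What is the refund window?",
    context="Refunds are accepted within 30 days of purchase.",
    required_term="refund",
)
```

The explicit fallback phrase matters: giving the model a sanctioned way to say "I don't know" reduces the pressure to fabricate an answer.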

  • Colin S. Levy

    General Counsel @ Malbek - CLM for Enterprise | Adjunct Professor of Law | Author of The Legal Tech Ecosystem | Legal Tech Advisor and Investor | Named to the Fastcase 50 (2022)

    45,322 followers

    The ability to effectively communicate with generative AI tools has become a critical skill.

    A. Here are some tips on getting the best results:
    1) Be crystal clear - Replace “Tell me about oceans” with “Provide an overview of the major oceans and their unique characteristics.”
    2) Provide context - Include relevant background information and constraints.
    3) Structure logically - Organize instructions, examples, and questions in a coherent flow.
    4) Stay concise - Include only the necessary details.

    B. Try the “Four Pillars”:
    1) Task - Use specific action words (create, analyze, summarize).
    2) Format - Specify desired output structure (list, essay, table).
    3) Voice - Indicate tone and style (formal, persuasive, educational).
    4) Context - Supply relevant background and criteria.

    C. Advanced techniques:
    1) Chain-of-Thought Prompting - Guide the AI through step-by-step reasoning.
    2) Assign a Persona - “Act as an expert historian” to tailor expertise level.
    3) Few-Shot Prompting - Provide examples of desired outputs.
    4) Self-Refine Prompting - Ask the AI to critique and improve its own responses.

    D. Avoid:
    1) Vague instructions leading to generic responses.
    2) Overloading with too much information at once.

    What prompting techniques have yielded the best results in your experience?

    #legaltech #innovation #law #business #learning
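The Self-Refine technique in section C is a small loop: draft, critique, revise. A minimal sketch, assuming a hypothetical `llm` callable that maps a prompt string to a response string; swap in your real client.

```python
# Hypothetical sketch of Self-Refine prompting: the model critiques its
# own draft, then rewrites it, for a fixed number of rounds.

from typing import Callable

def self_refine(llm: Callable[[str], str], task: str, rounds: int = 2) -> str:
    draft = llm(f"Task: {task}\nWrite a first draft.")
    for _ in range(rounds):
        critique = llm(
            f"Task: {task}\nDraft:\n{draft}\n\n"
            "List the three biggest weaknesses of this draft."
        )
        draft = llm(
            f"Task: {task}\nDraft:\n{draft}\nCritique:\n{critique}\n\n"
            "Rewrite the draft, fixing each weakness."
        )
    return draft
```

Each round costs two extra model calls, so cap `rounds` low; in practice one or two critique passes capture most of the improvement.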
