Tips for Improving AI Contextual Understanding

Explore top LinkedIn content from expert professionals.

Summary

Improving AI's contextual understanding means improving how AI systems interpret and respond to nuanced, specific human inputs by giving them better context and more structured interactions. Done well, this helps AI generate more accurate, relevant, and usable outputs tailored to user needs.

  • Start with clarity: Be specific about format, tone, and goals when crafting AI prompts to guide the output toward your intended results.
  • Provide rich context: Share relevant background data, examples, or structured notes to help the AI generate precise and insightful responses.
  • Iterate thoughtfully: Collaborate with AI by refining prompts and making small adjustments to outputs for improved accuracy and alignment with your intent.
Summarized by AI based on LinkedIn member posts
  • View profile for Jimi Gibson

    No‑Fluff Digital Strategies for Fed‑Up Owners | VP, Thrive Agency | Keynote Speaker

    3,087 followers

    Stuck with generic AI answers? Your prompts are to blame.

    Inside the AI mind: ChatGPT isn’t clairvoyant—it’s a pattern matcher trained on billions of words. Your prompt is the GPS signal: the clearer the directions, the closer you get to your destination. Mastering these hacks means you spend less time massaging outputs and more time using them to drive real business results.

    Hack 1 – Tame the T‑Rex
    What it is: Lock in format and length from the start.
    Pro tip: “In 3 bullet points, explain…”
    Why it matters: Vague prompts give you walls of text that need heavy editing. By specifying format up front, you force ChatGPT to sculpt its response into the shape you actually want—cutting your rewrite time in half.

    Hack 2 – Feed the Beast
    What it is: Supply rich context—background data, customer profiles, past examples.
    Pro tip: Begin with “Based on the text above, draft…”
    Why it matters: The AI only knows what you feed it. Without context, it fakes knowledge. By “feeding” it your specifics, you get custom, nuanced answers instead of generic guesswork.

    Hack 3 – Chain Its Thoughts
    What it is: Ask for step‑by‑step reasoning before the final answer.
    Pro tip: “Walk me through your thought process on…”
    Why it matters: You discover how the AI arrived at its conclusion—spotting gaps, bias, or hallucinations. This transparency lets you catch mistakes early and refine your prompt for more trustworthy insights.

    Hack 4 – Dress It Up
    What it is: Define tone, style, and word count as clearly as a dress code.
    Pro tip: “Write a friendly LinkedIn post under 100 words.”
    Why it matters: You maintain brand consistency. Whether you need a snarky tweet or a formal memo, setting the “voice” prevents you from spending time rewriting blunt or off‑tone copy.

    Hack 5 – Play Pretend
    What it is: Assign a persona—expert, coach, critic—to shape the lens.
    Pro tip: “Act as a veteran UX designer and critique this homepage.”
    Why it matters: Personas tap into specialized knowledge. Rather than a one‑size‑fits‑all answer, you get domain‑specific insight that feels like expert consultation—no extra hire required.

    Hack 6 – Show Your Work
    What it is: Provide an example snippet or previous output as a style guide.
    Pro tip: Paste a 2‑line sample and add “Match this tone and structure.”
    Why it matters: Examples anchor the AI’s voice and structure. You get consistent, on‑brand content that matches your past successes—no more tone drift or awkward phrasing.

    Hack 7 – Polish the Gem
    What it is: Iterate one element at a time: clarity, length, emphasis.
    Pro tip: Reply “Make this more concise” or “Expand point 2.”
    Why it matters: Small, targeted tweaks compound into polished perfection. Rather than starting over, you refine in place—saving time and ensuring each change builds on solid foundations.

    Marketing isn’t magic—just third-grade math and psychology. DM “TruthBomb” for a no-BS audit of your digital marketing.
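Taken together, these hacks amount to assembling a prompt from a few reusable parts. Here is a minimal, dependency-free Python sketch of that composition; the function name, fields, and example values are illustrative, not from the post:

```python
def build_prompt(task, fmt=None, context=None, persona=None, sample=None):
    """Compose a prompt from the hacks above: persona, context,
    format/tone/length constraints, and a style sample to match."""
    parts = []
    if persona:
        parts.append(f"Act as {persona}.")  # Hack 5 - Play Pretend
    if context:
        # Hack 2 - Feed the Beast: put the specifics above the request.
        parts.append(f"Context:\n{context}\n---\nBased on the text above:")
    parts.append(task)
    if fmt:
        parts.append(f"Format: {fmt}")  # Hacks 1 & 4 - format, tone, length
    if sample:
        parts.append(f"Match the tone and structure of this sample:\n{sample}")  # Hack 6
    return "\n\n".join(parts)

print(build_prompt(
    task="Explain why our churn rate spiked last quarter.",
    fmt="3 bullet points, friendly tone, under 100 words",
    context="Q3 churn rose from 2.1% to 3.4%; support tickets doubled after the pricing change.",
    persona="a veteran SaaS growth consultant",
))
```

Hacks 3 and 7 happen across turns rather than inside one prompt, so they stay out of the template: ask for reasoning first, then iterate one element at a time.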

  • View profile for Chip Huyen

    Building something new | AI x storytelling x education

    297,067 followers

    Very useful tips on tool use and memory from Manus's context engineering blog post. Key takeaways:

    1. Reversible compact summary
    Most models allow 128K context, which can easily fill up after a few turns when working with data like PDFs or web pages. When the context gets full, it has to be compacted. It’s important to compact the context so that it’s reversible, e.g., removing the content of a file/web page while keeping its path/URL.

    2. Tool use
    Given how easy it is to add new tools (e.g., with MCP servers), the number of tools a user adds to an agent can explode. Too many tools make it easier for the agent to choose the wrong action, making it dumber. They caution against removing tools mid-iteration. Instead, you can force an agent to choose certain tools with response prefilling. Ex: starting the response with `<|im_start|>assistant<tool_call>{"name": "browser_` forces the agent to choose a browser tool. Name your tools so that related tools share a prefix, e.g., browser tools start with `browser_` and command-line tools start with `shell_`.

    3. Dynamic few-shot prompting
    They caution against traditional few-shot prompting for agents. Seeing the same few examples again and again will cause the agent to overfit to them. Ex: if you ask the agent to process a batch of 20 resumes, and one example in the prompt visits the job description, the agent might visit the same job description 20 times for those 20 resumes. Their solution is to introduce small structured variations each time an example is used: different phrasing, minor noise in formatting, etc.

    Link: https://lnkd.in/gHnWvvcZ

    #AIAgents #AIEngineering #AIApplications
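The reversible-compaction idea is easy to sketch: when the context fills up, drop bulky tool outputs but keep the reference needed to re-fetch them. A minimal illustration in Python; the message shape, size threshold, and stub wording are assumptions, not Manus's actual implementation:

```python
def compact(messages, max_chars=8_000):
    """Reversibly shrink a conversation: replace bulky tool outputs
    with a stub that keeps the path/URL, so the content can be
    re-read later instead of being lost."""
    total = sum(len(m["content"]) for m in messages)
    for m in messages:
        if total <= max_chars:
            break
        # Only compact observations that carry a re-fetchable source.
        if m.get("source") and len(m["content"]) > 500:
            total -= len(m["content"])
            m["content"] = f"[content omitted; re-read from {m['source']}]"
            total += len(m["content"])
    return messages

messages = [
    {"role": "tool", "source": "https://example.com/report", "content": "x" * 20_000},
    {"role": "user", "source": None, "content": "Summarize the report."},
]
print(compact(messages)[0]["content"])
```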

  • View profile for Andreas Sjostrom

    LinkedIn Top Voice | AI Agents | Robotics | Vice President at Capgemini's Applied Innovation Exchange | Author | Speaker | San Francisco | Palo Alto

    13,554 followers

    Some of the best AI breakthroughs we’ve seen came from small, focused teams working hands-on, with structured inputs and the right prompting. Here’s how we help clients unlock AI value in days, not months:

    1. Start with a small, cross-functional team (4–8 people)
    - 1–2 subject matter experts (e.g., supply chain, claims, marketing ops)
    - 1–2 technical leads (e.g., SWE, data scientist, architect)
    - 1 facilitator to guide, capture, and translate ideas
    - Optional: an AI strategist or business sponsor

    2. Context before prompting
    - Capture SME and tech lead deep dives (recorded and transcribed)
    - Pull in recent internal reports, KPIs, dashboards, and documentation
    - Enrich with external context using Deep Research tools: use OpenAI’s Deep Research (ChatGPT Pro) to scan for relevant AI use cases, competitor moves, innovation trends, and regulatory updates. Summarize into structured bullets that can prime your AI.
    This is context engineering: assembling high-signal input before prompting.

    3. Prompt strategically, not just creatively. Prompts that work well in this format:
    - “Based on this context [paste or refer to doc], generate 100 AI use cases tailored to [company/industry/problem].”
    - “Score each idea by ROI, implementation time, required team size, and impact breadth.”
    - “Cluster the ideas into strategic themes (e.g., cost savings, customer experience, risk reduction).”
    - “Give a 5-step execution plan for the top 5. What’s missing from these plans?”
    - “Now 10x the ambition: what would a moonshot version of each idea look like?”

    Bonus tip: Prompt like a strategist (not just a user). Start with a scrappy idea, then ask AI to structure it:
    - “Rewrite the following as a detailed, high-quality prompt with role, inputs, structure, and output format... I want ideas to improve our supplier onboarding process with AI. Prioritize fast wins.”
    AI returns something like: “You are an enterprise AI strategist. Based on our internal context [insert], generate 50 AI-driven improvements for supplier onboarding. Prioritize for speed to deploy, measurable ROI, and ease of integration. Present as a ranked table with 3-line summaries, scoring by [criteria].”
    Now tune that prompt: add industry nuances, internal systems, customer data, or constraints.

    4. Real examples we’ve seen work:
    - Logistics: AI predicts port congestion and auto-adjusts shipping routes
    - Retail: Forecasting model helps merchandisers optimize promo mix by store cluster

    5. Use tools built for context-aware prompting
    - Use Custom GPTs or Claude’s file-upload capability
    - Store transcripts and research in Notion, Airtable, or similar
    - Build lightweight RAG pipelines (if technical support is available)

    Small teams. Deep context. Structured prompting. Fast outcomes. This layered technique has been tested by some of the best in the field, including a few sharp voices worth following, like Allie K. Miller!
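The bonus tip is essentially a meta-prompt: ask the model to turn a scrappy idea into a structured prompt before you run it. A minimal sketch of that two-step flow; `call_llm` is a hypothetical stand-in for whatever chat client you actually use:

```python
def structure_prompt(scrappy_idea: str) -> str:
    """Wrap a rough idea in a meta-prompt that asks the model to
    rewrite it with role, inputs, structure, and output format,
    per the post's bonus tip."""
    return (
        "Rewrite the following as a detailed, high-quality prompt "
        "with role, inputs, structure, and output format.\n\n"
        f"Idea: {scrappy_idea}"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: swap in your actual chat client here.
    raise NotImplementedError

meta = structure_prompt(
    "I want ideas to improve our supplier onboarding process with AI. "
    "Prioritize fast wins."
)
# structured = call_llm(meta)   # step 1: get the polished prompt back
# ideas = call_llm(structured)  # step 2: run the polished prompt itself
print(meta)
```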

  • View profile for Tomasz Tunguz
    402,356 followers

    In working with AI, I’m stopping before typing anything into the box to ask myself a question: what do I expect from the AI? 2x2 to the rescue! Which box am I in? On one axis, how much context I provide: not very much to quite a bit. On the other, whether I should watch the AI or let it run.

    If I provide very little information & let the system run: ‘research Forward Deployed Engineer trends,’ I get throwaway results: broad overviews without relevant detail. Running the same project with a series of short questions produces an iterative conversation that succeeds - an Exploration. “Which companies have implemented Forward Deployed Engineers (FDEs)? What are the typical backgrounds of FDEs? Which types of contract structures & businesses lend themselves to this work?”

    When I have a very low tolerance for mistakes, I provide extensive context & work iteratively with the AI. For blog posts or financial analysis, I share everything (current drafts, previous writings, detailed requirements) then proceed sentence by sentence.

    Letting an agent run freely requires defining everything upfront. I rarely succeed here because the upfront work demands tremendous clarity - exact goals, comprehensive information, & detailed task lists with validation criteria - an outline. These prompts end up looking like the product requirements documents I wrote as a product manager.

    The answer to ‘what do I expect?’ will get easier as AI systems access more of my information & improve at selecting relevant data. As I get better at articulating what I actually want, the collaboration improves. I aim to move many more of my questions out of the top left bucket - how I was trained with Google search - into the other three quadrants. I also expect this habit will help me work with people better.
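The 2x2 is concrete enough to write down. A toy encoding of the four quadrants as described in the post; the axis values and mode labels are my inference, not Tunguz's wording:

```python
# Axes: how much context you provide, and whether you watch the AI or let it run.
QUADRANTS = {
    ("low context", "let it run"):  "throwaway results: broad overviews without relevant detail",
    ("low context", "watch it"):    "Exploration: an iterative conversation of short questions",
    ("high context", "watch it"):   "low-tolerance work: share drafts and requirements, go sentence by sentence",
    ("high context", "let it run"): "agent mode: a PRD-style outline with goals, info, and validation criteria",
}

def expected_mode(context: str, supervision: str) -> str:
    """Answer 'what do I expect from the AI?' before typing anything."""
    return QUADRANTS[(context, supervision)]

print(expected_mode("high context", "watch it"))
```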

  • View profile for Alison W.

    Strategy & Transformation Consultant, ASTM International | Founder, Outlook Lab | Tech Adoption, Enterprise Innovation, Strategic Comms | Former Honeywell, GE, Emirates

    7,242 followers

    As Generative AI (GenAI) becomes more commonplace, a new human superpower will emerge. There will be those with expert ability at getting quality information from LLMs (large language models), and those without. This post provides simple tips and tricks to help you gain that superpower.

    TL;DR: To better interact with specific #GenAI tools, bring focused problems, provide sufficient context, engage in interactive and iterative conversations, and utilize spoken audio for a more natural interaction.

    A couple of background notes: I'm an applied linguist by education; historically, a communicator by trade (human-to-human communication); and passionate about responsibly guiding the future of AI at Honeywell. When we announced a pilot program last year to trial use of LLMs in our daily work, I jumped on the opportunity. The potential for increased productivity and creativity was of course a large draw, but so was the opportunity to explore an area of linguistics I haven't touched in over a decade: human-computer interaction and communication (computational linguistics).

    Words are essential elements of effective communication, shaping how messages are perceived, understood, and acted upon. As in H2H communication, the words we use in conversation with LLMs largely determine both the user experience and the quality of the output. A drawback is that we often approach an LLM like a search engine, just looking for answers. Instead, we must approach it like a conversation partner. This will feel like more work for a human, which is often discouraging. ChatGPT has a reputation of being a "magical" tool or solution. When we find out it's not an easy button but actually requires work and input, we're demotivated. But in reality, the AI tool is pulling your best thinking from you.

    How to have an effective conversation with AI:
    1. Bring a focused problem. Instead of asking, "What recommendations would you make for using ChatGPT?" start with, "I'm writing a blog post and I'd like to give concrete, tangible suggestions to professionals who haven't had much exposure to ChatGPT."
    2. Provide good and enough context. Hot tip: ask #ChatGPT to ask you for the context. "I'm writing a LinkedIn post on human-computer interaction. Ask me 3 questions that would help me provide you with sufficient context to assist me with writing this post."
    3. Make your conversation interactive and iterative, just as you would with a human. Never accept the first response. (Imagine if we did this in H2H conversation.)
    4. Interact via an app versus the web. Some web browsers mimic a search box, which influences how we interact with the tool. Try to use spoken audio. Talk naturally. And try using different models, just as you would speak with different friends for advice.

    What tips can you share? A special shout out to Stanford Graduate School of Business' Think Fast, Talk Smart podcast for some of the input exchanged here. Sapan Shah Laura Kelleher Tena Mueller Adam O'Neill
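The hot tip in point 2 (have the model interview you before it writes) is a two-turn pattern. A minimal sketch that builds the first turn; the follow-up turn, where you paste your answers back and ask for the draft, is manual:

```python
def elicitation_prompt(task: str, n_questions: int = 3) -> str:
    """Step 1 of the tip: ask the model to ask *you* for context."""
    return (
        f"I'm working on this task: {task}\n"
        f"Ask me {n_questions} questions that would help me provide you "
        "with sufficient context to assist me with it."
    )

# Step 2 (manual): answer the questions, paste the answers back, and
# only then ask for the draft, keeping the conversation iterative.
print(elicitation_prompt("writing a LinkedIn post on human-computer interaction"))
```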

  • View profile for Bhrugu Pange
    3,358 followers

    I’ve had the chance to work across several #EnterpriseAI initiatives, especially those with human-computer interfaces. Common failures can be attributed broadly to bad design/experience, disjointed workflows, not getting to quality answers quickly, and slow response times, all exacerbated by high compute costs because of an under-engineered backend. Here are 10 principles that I’ve come to appreciate in designing #AI applications. What are your core principles?

    1. DON’T UNDERESTIMATE THE VALUE OF GOOD #UX AND INTUITIVE WORKFLOWS
    Design AI to fit how people already work. Don’t make users learn new patterns — embed AI in current business processes and gradually evolve the patterns as the workforce matures. This also builds institutional trust and lowers resistance to adoption.

    2. START WITH EMBEDDING AI FEATURES IN EXISTING SYSTEMS/TOOLS
    Integrate directly into existing operational systems (CRM, EMR, ERP, etc.) and applications. This minimizes friction, speeds up time-to-value, and reduces training overhead. Avoid standalone apps that add context-switching or friction. Using AI should feel seamless and habit-forming. For example, surface AI-suggested next steps directly in Salesforce or Epic. Where possible, push AI results into existing collaboration tools like Teams.

    3. CONVERGE TO ACCEPTABLE RESPONSES FAST
    Most users have gotten used to publicly available AI like #ChatGPT, where they can get to an acceptable answer quickly. Enterprise users expect parity or better — anything slower feels broken. Obsess over model quality; fine-tune system prompts for the specific use case, function, and organization.

    4. THINK ENTIRE WORK INSTEAD OF USE CASES
    Don’t solve just a task - solve the entire function. For example, instead of resume screening, redesign the full talent acquisition journey with AI.

    5. ENRICH CONTEXT AND DATA
    Use external signals in addition to enterprise data to create better context for the response. For example: append LinkedIn information for a candidate when presenting insights to the recruiter.

    6. CREATE SECURITY CONFIDENCE
    Design for enterprise-grade data governance and security from the start. This means avoiding rogue AI applications and collaborating with IT. For example, offer centrally governed access to #LLMs through approved enterprise tools instead of letting teams go rogue with public endpoints.

    7. IGNORE COSTS AT YOUR OWN PERIL
    Design for compute costs, especially if the app has to scale. Start small, but budget for future cost.

    8. INCLUDE EVALS
    Define what “good” looks like and run evals continuously so you can compare against different models and course-correct quickly.

    9. DEFINE AND TRACK SUCCESS METRICS RIGOROUSLY
    Set and measure quantifiable indicators: hours saved, people not hired, process cycles reduced, adoption levels.

    10. MARKET INTERNALLY
    Keep promoting the success and adoption of the application internally. Sometimes driving enterprise adoption requires FOMO.

    #DigitalTransformation #GenerativeAI #AIatScale #AIUX
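Principle 8 can start very small: a fixed set of cases, a scoring rule, and a loop you rerun whenever the model or prompt changes. A minimal, model-agnostic sketch; `generate` and the eval cases are placeholders, not a real harness:

```python
def generate(prompt: str) -> str:
    # Placeholder: wire this to your governed enterprise LLM endpoint.
    return "42"

# Each eval case defines the input and what "good" looks like.
EVAL_CASES = [
    {"prompt": "What is 6 * 7? Reply with the number only.", "expect": "42"},
    {"prompt": "Capital of France? One word.", "expect": "Paris"},
]

def run_evals() -> float:
    """Score the current model+prompt setup against fixed cases,
    so runs can be compared across models and prompt changes."""
    passed = sum(
        case["expect"].lower() in generate(case["prompt"]).lower()
        for case in EVAL_CASES
    )
    return passed / len(EVAL_CASES)

print(f"pass rate: {run_evals():.0%}")
```

Real harnesses add judge models, rubric scoring, and dashboards, but the loop shape stays the same: fixed inputs, defined expectations, a number you can track.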

  • View profile for Dave Greenberger

    Head, Enterprise Manufacturing at Shopify | Unlocking innovation inside the world’s most complex companies

    7,379 followers

    Stop treating ChatGPT like a search engine. It's a strategic consultant. "The quality of your questions is the ceiling of your output."

    One of the reasons I came to Shopify was to get extremely deep into AI. I've only been here 2 months now, but I've already learned more in the last 60 days about the topic than in my entire life previously. This guy Chris Koerner is absolutely surgical with his business ideas and recently has been a go-to follow for me to use AI more efficiently, now that I have some good 101 understanding under my belt. He's geared towards SMB entrepreneurs, which is so up my alley, but I've applied his learnings big time in the enterprise software world.

    For example, his latest 20-min mastermind (https://lnkd.in/eRNN4_27) is jam-packed with things I've immediately used this week, like:

    1. Stop asking for facts, start asking for strategy
    His example: Instead of "What are some good business ideas?" → "What are eight off-the-radar business ideas that people are talking about in message boards and in subreddits that are poised to explode over the next few years?"
    For me: "What are the top pain points manufacturing executives are discussing in industry forums that indicate they're open to evaluating new commerce platforms in the next 12 months?"

    2. Feed it real context, not generic requests
    His example: Instead of "Write this email more simply" → "Write this email so a fifth grader could understand it." Instead of "Use good copywriting techniques" → first ask "What are some good copywriting techniques?" then pick the ones you want implemented.

    3. Build repeatable workflows, not just prompts
    "Don't think about 'I need one good email'; you want to think about 'I need a prompt that will write one good email anytime I need it to.'"

    4. "What industries are notorious for having a bunch of one-star reviews where I could cold email owners and sell them a fix?"
    Me: Perfect for sales prospecting - identify underserved markets in your vertical

    5. "Give me cool phrases from the Book of Mormon that don't show up anywhere else" (forcing ChatGPT to find unique, quality content)
    Me: "Give me enterprise software implementation quotes that are absolutely gold based on what you know about digital transformation, but don't appear very often" - cuts through generic industry speak

    6. "Here's what I have: a truck, time, and access to firewood. Give me a launch plan."
    Perfect framework for resource optimization - "Here's what I have: $2M budget, 6-month timeline, team of 12 developers. Give me a market expansion plan."

    The switching costs of learning new AI workflows are massive, but the leverage once you get them dialed in? Game changing.

    (Pictured below: the hilarious first-time output AI delivered me for "show me a frustrated manufacturer trying to leverage AI" 🤪)
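Point 3 (a prompt that works every time, not one good output) is just templating. A tiny sketch of the "here's what I have" pattern from point 6 as a reusable function; the function name and example values are mine:

```python
def resource_plan_prompt(resources: list[str], goal: str) -> str:
    """Reusable 'resource inventory -> plan' prompt, per Koerner's
    truck/time/firewood framing."""
    inventory = ", ".join(resources)
    return f"Here's what I have: {inventory}. Give me a {goal}."

# Same workflow, different inputs; no prompt rewriting each time.
print(resource_plan_prompt(["a truck", "time", "access to firewood"], "launch plan"))
print(resource_plan_prompt(
    ["$2M budget", "6-month timeline", "a team of 12 developers"],
    "market expansion plan",
))
```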

  • View profile for Conner Ardman

    Creator of FrontendExpert @ AlgoExpert | Ex-Facebook Software Engineer | 100k+ on YouTube | Professional Yapper

    9,873 followers

    I was wrong about “prompt engineering.” I thought it was silly. What’s important is just the model, right? Wrong.

    With well-formulated prompts you can:
    1. Prevent the AI from being overly agreeable
    2. Minimize hallucinations
    3. Increase maintainability, performance, and scalability
    4. Provide boundaries for what the AI can and can’t do
    5. Control the formatting, tone, and style of responses
    6. Prevent bugs and vulnerabilities
    7. So much more…

    Here’s one of my favorites from my most recent video (link in comments). It’s a great way to get a high-quality code review from AI:

    Review this function as if you are a senior engineer. Specifically look for the following, and provide a list of potential improvements with reasoning for those improvements:
    1. Logical mistakes that could cause errors.
    2. Unaccounted-for edge cases.
    3. Poor or inconsistent naming conventions and styling that would make the code hard to understand.
    4. Performance optimizations.
    5. Security vulnerabilities or concerns to consider.
    6. Ambiguous or hard-to-understand code that requires documentation.
    7. Debugging code that should be removed before pushing to production.
    8. Any other ways to improve the code quality, readability, performance, security, scalability, or maintainability.
    Expected behavior: …
    Code: …
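That review prompt becomes more useful as a function you can point at any snippet. A small sketch that fills in the two blanks (expected behavior and code); the checklist text is condensed from the post, and the helper name is mine:

```python
REVIEW_CHECKLIST = """Review this function as if you are a senior engineer.
Provide a list of potential improvements with reasoning, looking for:
1. Logical mistakes that could cause errors.
2. Unaccounted-for edge cases.
3. Poor or inconsistent naming conventions and styling.
4. Performance optimizations.
5. Security vulnerabilities or concerns to consider.
6. Ambiguous code that requires documentation.
7. Debugging code that should be removed before production.
8. Any other quality, readability, scalability, or maintainability issues."""

def review_prompt(expected_behavior: str, code: str) -> str:
    """Fill the post's template with a concrete snippet to review."""
    return f"{REVIEW_CHECKLIST}\n\nExpected behavior: {expected_behavior}\n\nCode:\n{code}"

print(review_prompt(
    "Returns the largest value in a non-empty list.",
    "def largest(xs):\n    return sorted(xs)[0]",  # deliberately buggy: returns the smallest
))
```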

  • View profile for Cameron R. Wolfe, Ph.D.

    Research @ Netflix

    21,097 followers

    Prompt engineering is a rapidly evolving topic in AI, but recent research can be grouped into four categories...

    (1) Reasoning: Simple prompting techniques are effective for many problems, but more sophisticated strategies are required to solve multi-step reasoning problems.
    - [1] uses zero-shot CoT prompting to automatically generate problem-solving rationales to use for standard CoT prompting.
    - [2] selects CoT exemplars based on their complexity (exemplars that have the maximum number of reasoning steps are selected first).
    - [3] improves CoT prompting by asking the LLM to progressively refine the generated rationale.
    - [4] decomposes complex tasks into several sub-tasks that can be solved via independent prompts and later aggregated into a final answer.

    (2) Tool Usage: LLMs are powerful, but they have notable limitations. We can solve many of these limitations by teaching the LLM how to leverage external, specialized tools.
    - [5, 6] finetune a language model to teach it how to leverage a fixed, simple set of text-based APIs when answering questions.
    - [7] uses a central LLM-based controller to generate a program—written in natural language—that composes several tools to solve a complex reasoning task.
    - [8] uses a retrieval-based finetuning technique to teach an LLM to adaptively make calls to APIs based on their documentation when solving a problem.
    - [9] uses an LLM as a central controller for leveraging a variety of tools in the form of deep learning model APIs.
    - [10, 11] integrate code-capable LLMs with a sandboxed Python environment to execute programs when solving problems.

    (3) Context Window: Given the emphasis of recent LLMs on long contexts for RAG / few-shot learning, the properties of context windows and in-context learning have been studied in depth.
    - [12] shows that including irrelevant context in the LLM’s prompt can drastically deteriorate performance.
    - [13] finds that LLMs pay the most attention to information at the beginning/end of the prompt, while information placed in the middle of a long context is forgotten.
    - [14] proposes a theoretically-grounded strategy for optimally selecting few-shot exemplars.

    (4) Better Writing: One of the most popular use cases of LLMs is improving human writing, and prompt engineering can be used to make more effective writing tools with LLMs.
    - [15] improves the writing abilities of an LLM by first generating an outline and then filling in each component of the outline one by one.
    - [16] uses a smaller LLM to generate a “directional stimulus” (i.e., a textual hint) that can be used as extra context to improve an LLM’s writing ability on a given task.
    - [17] improves the quality of LLM-generated summaries by iteratively prompting the LLM to increase the information density of the summary.
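Two of the category (1) techniques are simple enough to show concretely: zero-shot CoT appends a reasoning trigger so the model produces a rationale before its answer, and decomposition splits a task into independently prompted sub-tasks. A bare-bones sketch (prompt strings only, no model call; the aggregation step is omitted):

```python
def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: elicit a step-by-step rationale before the answer."""
    return f"{question}\nLet's think step by step."

def decompose(task: str, subtasks: list[str]) -> list[str]:
    """Decomposition: each sub-task becomes an independent prompt whose
    answers would later be aggregated into a final response."""
    return [f"Sub-task of '{task}': {s}" for s in subtasks]

print(zero_shot_cot(
    "A bat and a ball cost $1.10 total; the bat costs $1 more than the ball. "
    "How much is the ball?"
))
for p in decompose("write a market analysis",
                   ["list competitors", "summarize pricing", "identify gaps"]):
    print(p)
```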

  • View profile for Adam Chan

    Bringing developers together to build epic projects with epic tools!

    9,079 followers

    𝗟𝗼𝗻𝗴 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 does not mean 𝗯𝗲𝘁𝘁𝗲𝗿 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴. You need a retrieval system.

    As developers, we often get excited about massive context windows: "Just dump all your data in and get perfect recall and reasoning." But let's be practical: it's not that straightforward. While long context is valuable, it needs to be paired with effective retrieval systems. As you're designing reasoning models or building agents, consider implementing robust retrieval mechanisms alongside long context - this becomes critical when working with large, heterogeneous datasets.

    What's happening under the hood? LLMs fundamentally struggle to identify and utilize relevant information in extended contexts, especially without explicit keyword matches. Consider these technical insights:

    • 𝗟𝗼𝗻𝗴 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 ≠ 𝗕𝗲𝘁𝘁𝗲𝗿 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 → The NoLiMa benchmark demonstrated that 10 out of 12 models operating at 32K context lengths experienced a 50% performance degradation.
    • 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗱𝗶𝗹𝘂𝘁𝗶𝗼𝗻 𝗶𝘀 𝗿𝗲𝗮𝗹 → Significant portions of context are often irrelevant and can distract the model from its primary objective.
    • 𝗥𝗲𝘀𝗼𝘂𝗿𝗰𝗲 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗺𝗮𝘁𝘁𝗲𝗿𝘀 → Extended contexts increase latency and costs. Each token represents an investment in computational resources, budget, and cognitive clarity.

    The evolution of LLMs isn't simply about expanding context windows—it's about implementing smarter contextual processing. This is why retrieval systems remain essential in our AI architecture. Learn more about RAG and chunking techniques here: https://lnkd.in/gxYCFYzj
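The practical upshot, rank and select before you stuff the window, can be shown with a deliberately tiny retriever. A keyword-overlap scorer like this is far weaker than embedding-based RAG (it would miss exactly the no-keyword-match cases NoLiMa tests), but it illustrates the select-then-prompt shape; all names and data here are illustrative:

```python
from collections import Counter

def score(query: str, chunk: str) -> int:
    """Naive lexical overlap between query and chunk (toy stand-in
    for an embedding similarity score)."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    return sum((q & c).values())

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Select the top-k relevant chunks instead of dumping everything in."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

chunks = [
    "Invoice 214 was paid on March 3.",
    "The API rate limit is 600 requests per minute.",
    "Our refund policy allows returns within 30 days.",
]
context = "\n".join(retrieve("what is the API rate limit", chunks))
prompt = f"Answer using only this context:\n{context}\n\nQ: What is the API rate limit?"
print(prompt)
```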
