How to Approach Vibe Coding Challenges

Explore top LinkedIn content from expert professionals.

Summary

Vibe coding challenges, especially when working with AI coding agents, require a structured and thoughtful approach to avoid chaos and maximize efficiency. By treating AI as an assistant rather than a magical solution, developers can better manage tasks and improve outcomes.

  • Start with clear preparation: Define project goals with concise rules and guidelines in a single document, ensuring AI agents follow specific principles for clean, consistent code.
  • Break tasks into steps: Work iteratively by dividing the project into manageable components, allowing you and the AI to focus on one task at a time for better results.
  • Review and refine often: Avoid long, unchecked coding sessions by frequently stopping to assess progress, refactor messy sections, and ensure the AI adheres to the project’s design and functionality.
Summarized by AI based on LinkedIn member posts
  • Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,822 followers

    Most developers treat AI coding agents like magical refactoring engines, but few have a system, and that's wrong. Without structure, coding with tools like Cursor, Windsurf, and Claude Code often leads to files rearranged beyond recognition, subtle bugs, and endless debugging. In my new post, I share the frameworks and tactics I developed to move from chaotic vibe coding sessions to consistently building better, faster, and more securely with AI.

    Three key shifts I cover:
    • Planning like a PM: starting every project with a PRD and a modular project-docs folder radically improves AI output quality
    • Choosing the right models: using reasoning-heavy models like Claude 3.7 Sonnet or o3 for planning, and faster models like Gemini 2.5 Pro for focused implementation
    • Breaking work into atomic components: isolating tasks improves quality, speeds up debugging, and minimizes context drift

    Plus, I share under-the-radar tactics like:
    (1) Using .cursor/rules to programmatically guide your agent's behavior
    (2) Quickly spinning up an MCP server for any Mintlify-powered API
    (3) Building a security-first mindset into your AI-assisted workflows

    This is the first post in my new AI Coding Series. Future posts will dive deeper into building secure apps with AI IDEs like Cursor and Windsurf, advanced rules engineering, and real-world examples from my projects. Post + NotebookLM-powered podcast: https://lnkd.in/gTydCV9b
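As an illustration of the .cursor/rules tactic mentioned above, a minimal rule file might look like the sketch below. The frontmatter fields follow Cursor's documented .mdc rule format; the specific rules are invented for illustration, not taken from the post:

```markdown
---
description: Clean-code conventions for Python files in this repo
globs: ["**/*.py"]
alwaysApply: false
---

- Every public function gets type hints and a one-line docstring.
- Prefer pure functions; avoid module-level mutable state.
- Never modify files under migrations/ without asking first.
```

Rules scoped with `globs` are pulled into context only when matching files are touched, which keeps the prompt lean.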

  • Esco Obong

    Senior Software Engineer @ Airbnb

    23,034 followers

    AI coding agents are amazing. I hear too much hate for them. Here are a few techniques I use to get great results:

    Start with an "Architectural Principles to Code By" cheat-sheet: begin with a persona prompt that tells the agent to be a world-class engineer who follows key principles such as single responsibility, avoiding global state, using clear names, writing pure functions, and absolutely no hacks allowed.

    Give the agent an architecture snapshot first: tell it to draft a document that outlines the codebase or the specific files you want to change. Then paste that into the prompt to ground the agent in context.

    Don't let the context get too long: performance decreases as the length of your session increases. Frequently stop your session and restart, prime it with context again (architectural principles, codebase outline, etc.), and give it your new tasks. Do not stay in one session for long.

    Tell it not to code: command it to explain its approach before coding. Literally tell it "Do not code," and only allow it to code once you see that the approach it came up with makes sense.

    Prototype → Refactor → Repeat: vibe coding is perfectly fine in small doses, but make sure to stop adding features and tell it to refactor if things get messy. It's easier to get LLMs to refine an existing codebase to be clean than to start out clean.

    Investigate before jumping into bug fixing: when you have bugs, tell the agent to investigate and explain the solution. Again, tell it "Do not code," and don't let it code until you're satisfied with its explanation.

    Work iteratively: have the agent take on small-to-medium changes at a time; don't expect to one-shot a huge codebase!

  • Andrew Churchill

    Co-Founder & CTO at Weave (YC W25)

    5,872 followers

    Why do some engineers 10x their output with Cursor, while others barely see improvement? Because using AI for software engineering is a skill! Some are naturally better than others, but everyone can get better with practice and effort. Engineering teams need to adapt to avoid being left behind by the AI wave, so I've put together some principles on how to vibe code effectively :)

    1. Default to AI experimentation. Start every task with AI only, and write code manually only when you absolutely have to. This isn't about blindly accepting output; it's about building intuition for what AI is capable of. And you'll find that the scope of what you can do with AI will only grow.

    2. Master the iteration loop. When you write code, you don't write line by line and then submit it; you first set up a structure, then start filling it in piece by piece, then review and edit it before submitting a PR. AI works best when you let it use a similar workflow. The simplest version of this: after it writes a bunch of code, just prompt it to "Rewrite this to be better" 3-6 times. You'll be shocked at how much better it gets when the AI has time to "review" its own work!

    3. Be strategic about which parts of your codebase you vibe code. Core systems: human-planned, AI-assisted. Demo apps/dashboards: full AI generation. Tests: AI-generated, human-guided.

    4. Consider the context. If you're having trouble getting AI to solve a problem, it's normally because you aren't giving the model the right context. Modern models are shockingly capable...when they have the right context. Rules files (e.g. .cursor/rules, CLAUDE.md) help orient the agent to the codebase; detailed prompts that mention related files and structures are the other half of the battle.

    5. Use voice input tools. Since detailed prompts are so important, and we speak much faster than we type, tools like Wispr Flow or Willow (YC X25) significantly speed up the AI coding workflow. Try it for yourself!

    Any other tips you've seen work particularly well for yourself or your team?
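The "Rewrite this to be better" loop from point 2 can be sketched in a few lines of Python. `ask_model` here is a stand-in for whatever chat-completion call you actually use, not a real API; the stub below exists only to show the control flow:

```python
def iterate(code: str, ask_model, rounds: int = 4) -> str:
    """Feed the model's own output back to it a few times,
    mirroring the human draft -> review -> edit cycle."""
    for _ in range(rounds):
        code = ask_model(f"Rewrite this to be better:\n\n{code}")
    return code

# Toy stand-in for a real model call (hypothetical), so the loop runs:
# it just tags the last line of the prompt as "reviewed".
def fake_model(prompt: str) -> str:
    return prompt.splitlines()[-1] + "  # reviewed"
```

In practice you would cap the rounds (the post suggests 3-6) and stop early once successive rewrites converge.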

  • Ryan Mitchell

    O'Reilly / Wiley Author | LinkedIn Learning Instructor | Principal Software Engineer @ GLG

    29,022 followers

    I’ve been working on a massive prompt that extracts structured data from unstructured text. It's effectively a program, developed over the course of weeks, in plain English. Each instruction is precise. The output format is strict. The logic flows. It should Just Work™. And the model? Ignores large swaths of it. Not randomly, but consistently and stubbornly. This isn't a "program," it's a probability engine with auto-complete.

    This is because LLMs don't "read" like we do, or execute prompts like a program does. They run everything through the attention mechanism, which mathematically weighs which tokens matter in relation to others. Technically speaking: each token is transformed into a query, key, and value vector. The model calculates dot products between the query vector and all key vectors to assign weights. Basically: "How relevant is this other token to what I'm doing right now?" Then it averages the values using those weights and moves on. No state. No memory. Just a rolling calculation over a sliding window of opaquely-chosen context.

    It's kind of tragic, honestly. You build this beautifully precise setup, but because your detailed instructions are buried in the middle of a long prompt -- or phrased too much like background noise -- they get low scores. The model literally pays less attention to them. We thought we were vibe coding, but the real vibe coder was the LLM all along!

    So how do you fix it? Don't just write accurate instructions. Write ATTENTION-WORTHY ones.
    - 🔁 Repeat key patterns. Repetition increases token relevance, especially when you're relying on specific phrasing to guide the model's output.
    - 🔝 Push constraints to the top. Instructions buried deep in the prompt get lower attention scores. Front-load critical rules so they have a better chance of sticking.
    - 🗂️ Use structure to force salience. Consistent headers, delimiters, and formatting cues help key sections stand out. Markdown, line breaks, and even ALL CAPS (sparingly) can help direct the model's focus to what actually matters.
    - ✂️ Cut irrelevant context. The less junk in the prompt, the more likely your real instructions are to be noticed and followed.

    You're not teaching a model. You're gaming a scoring function.
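The query/key/value arithmetic described above can be reproduced in a few lines of NumPy. This is a minimal sketch of scaled dot-product attention, not a full transformer; the toy vectors are invented to show how a poorly aligned "instruction" token receives a low weight and therefore barely influences the output:

```python
import numpy as np

def attention_weights(Q, K):
    """Softmax of scaled dot products: how much each query attends to each key."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # query-key dot products
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

# One query token and two context tokens: one key aligned with the
# query, one pointing the other way. The misaligned token scores low,
# so the weighted average of values barely reflects it: the model
# "pays less attention" to it, exactly as described in the post.
Q = np.array([[1.0, 0.0]])                 # query vector for the current token
K = np.array([[1.0, 0.0], [-1.0, 0.0]])    # keys for two context tokens
V = np.array([[10.0], [-10.0]])            # their value vectors
W = attention_weights(Q, K)                # roughly [[0.80, 0.20]]
output = W @ V                             # weighted average, pulled toward +10
```

Note there is no memory across steps: the weights are recomputed from scratch for every token, which is the "rolling calculation" the post refers to.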

  • Balaji Viswanathan Ph.D.

    Building KAPI: AI-Native IDE Where Developers Engineer Systems, Not Syntax | Ex-Microsoft | CS PhD

    19,219 followers

    In vibe coding, we are going at 10-20x the regular speed, and that means all the regular coding accidents will happen 10-20x faster. Leverage works both ways. Technical debt clearing and refactoring that was once required monthly or quarterly is now required every day. You should plan to get rid of about 50% of the code you wrote the previous day as you fold the rest into neat packets with strong test cases. LLMs can often overengineer or underengineer, so keeping things consistent becomes a key duty of the engineer. If you bring in the old bad habits [such as not writing test cases, not raising frequent PRs to keep things consistent, not refactoring regularly], vibe coding can bring you nasty surprises 10x faster.
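One way to act on "fold those into neat packets with strong test cases": before each day's refactor, pin down yesterday's behavior with characterization tests, so the agent's rewrite must keep passing them. A minimal Python sketch (the `slugify` function and its cases are hypothetical):

```python
import re

def slugify(title: str) -> str:
    """Yesterday's AI-written helper whose behavior we want to preserve."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_preserved():
    # Characterization tests: lock in current behavior before refactoring.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Vibe Coding 101 ") == "vibe-coding-101"
    assert slugify("A&B") == "a-b"
```

With these in place, "refactor slugify, keep the tests green" becomes a safe, checkable instruction for the agent.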

  • Nate Baldwin

    Systems-oriented designer, focusing on enterprise design systems.

    6,086 followers

    Here are ten things I've found helpful when "vibe coding":

    1. Use a single "project-rules" file. I've seen better conformity to rules when they are in a single file rather than modularized.
    2. Keep rules simple and direct. Pro tip: write out your rules, then paste them into a chat agent with a prompt such as "Please consolidate these into clear, actionable rules for an AI agent to follow."
    3. Never ask too much at once.
    4. If you're going to ask a lot, be PRECISE in your prompt requests.
    5. Periodically ask the agent to summarize an update into a set of decisions within a "Technical decisions" markdown file.
    6. Start new chats by asking the agent to thoroughly review key files (readme, project-rules, technical decisions, etc.) and to use this understanding for future prompts.
    7. End every prompt with "Adhere to project-rules."
    8. Almost always include "PRESERVE EXISTING DESIGN AND FUNCTIONALITY" (you'll thank me for this one).
    9. Always double-check the output. Sometimes it will create an empty function despite telling you what the function "does."
    10. Pair-program with the agent. Paste errors into chats, ask for code reviews (even on its own outputs), and if things get buggy, ask for thorough analysis or for debug logs to be added so you can find the issues together.

    These are just a few things that have helped me so far. Hope they help you as well!
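A single "project-rules" file combining several of these points might look like the sketch below; the specific rules are illustrative, not taken from the post:

```markdown
# project-rules

- PRESERVE EXISTING DESIGN AND FUNCTIONALITY.
- Make one small, reviewable change per prompt; never batch unrelated edits.
- Never leave a function body empty or stubbed without flagging it.
- After each significant change, record the decision in technical-decisions.md.
- Adhere to these rules in every response.
```

Keeping it short is deliberate: per tip 1, one concise file tends to be followed more reliably than a set of modular rule files.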

  • Maarten Masschelein

    CEO & Co-Founder @ Soda | Data quality & Governance for the Data Product Era

    13,230 followers

    I tried my hand at vibe coding and ended up creating a full-stack prototype in 3 days in Cursor (without manually changing a single line of code). Here's what I learned.

    👉 Write a project brief as if you're explaining it to a smart but new hire. Use this prompt: "Create a step by step plan, recommend tools and structure for [idea]. Choose fast feedback defaults: hot reload, mock data, simple frameworks. This is a prototype. Help me move fast."

    👉 Don't touch the code. Let the model write everything. If it breaks, go back to a known-good checkpoint. Don't manually fix. Two chats editing the same file = chaos. Frequently ask: "Can you review for gaps, bugs, or inconsistencies?"

    👉 Build end-to-end (no half features). Backend first: mock vs. persisted data? Be clear. Frontend next: make it real enough to click. Finally, wire it up: API to UI to working prototype.

    Tools I used:
    - Cursor.com: AI-native dev environment.
    - Claude 4: strong reasoning, low hallucination.
    - GitHub Copilot in VS Code also works, but Cursor + Claude is much better in my experience.

    💡 Final tip: treat AI like a junior engineer with infinite energy but zero context. Feed it product thinking. The reward is speed and leverage.

    Curious if others are building this way too. What tips would you give when it comes to vibe coding?
