I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts, or manually providing context? The latest paradigm is Agent Driven Development, & here are some tips that have helped me get good at taming LLMs to generate high-quality code.

1. Clear & focused prompting
❌ "Add some animations to make the UI super sleek"
✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library" (a sketch of what this might produce follows at the end of this post)
Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.

2. Keep it simple, stupid
❌ "Add a new page to manage user settings, also move the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable, & also ensure the mobile view works, right now there is weird overlap"
✅ "Add a new page to manage user settings, ensure only editable settings can be changed"
Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a much higher chance of introducing bad code.

3. Don't argue
❌ "No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!"
✅ "Instead of using package xyz, can you recreate the functionality using the standard library?"
When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-on prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track, & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.

4. Embrace agentic coding
AI coding assistants have access to a ton of different tools, can do a ton of reasoning on their own, & don't require nearly as much hand-holding. You may feel like a babysitter instead of a programmer. Your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.

5. Verify
With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the generated code is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.

6. Send options, thx
I had a boss who would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & is an opportunity to learn.

7. Have fun
I love coding; I've been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust, & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!
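To make tip 1 concrete, here is a minimal sketch of the kind of code that precise prompt might yield, assuming a React + TypeScript project using the motion library (framer-motion). The component and prop names are illustrative, not from the original post:

```tsx
import type { ReactNode } from "react";
import { AnimatePresence, motion } from "framer-motion";

// Hypothetical modal wrapper: fades in when `open` becomes true
// and fades back out when it becomes false.
type ModalProps = { open: boolean; children: ReactNode };

export function Modal({ open, children }: ModalProps) {
  return (
    // AnimatePresence keeps the element mounted long enough
    // to play the exit animation before removal.
    <AnimatePresence>
      {open && (
        <motion.div
          role="dialog"
          initial={{ opacity: 0 }} // start fully transparent
          animate={{ opacity: 1 }} // fade in
          exit={{ opacity: 0 }} // fade out on close
          transition={{ duration: 0.2, ease: "easeOut" }}
        >
          {children}
        </motion.div>
      )}
    </AnimatePresence>
  );
}
```

Note how the specific prompt pins down the element (modal dialog), the effect (fade in/out), and the library, leaving the model little to guess.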
How to Overcome AI-Driven Coding Challenges
Explore top LinkedIn content from expert professionals.
Summary
As artificial intelligence (AI) significantly reshapes software development, many developers encounter challenges when coding with AI tools like large language models (LLMs). To overcome these obstacles, it's essential to understand how to communicate effectively with AI, implement clear workflows, and embrace AI as a collaborative partner, rather than a magical solution.
- Start with clear prompts: Be specific and concise when giving instructions to AI coding tools. Avoid ambiguous or overly complex requests to reduce errors and improve the quality of generated code.
- Work iteratively: Break tasks into smaller, manageable steps and test frequently. If your initial prompt doesn't yield the desired results, revise and clarify before continuing.
- Review and refine output: Carefully review all AI-generated code to ensure it meets your project requirements and coding standards. Collaborate with the AI by providing feedback and guiding it toward better solutions.
-
I've been running a quiet experiment: using AI coding (Vibe Coding) across 10 different closed-loop production projects — from minor refactors to major migrations. In each, I varied the level of AI involvement, from 10% to 80%. Here's what I found:

The sweet spot? 40–55% AI involvement. Enough to accelerate repetitive or structural work, but not so much that the codebase starts to hallucinate or drift.

Where AI shines:
- Boilerplate and framework code
- Large-scale refactors
- Migration scaffolds
- Test case generation

Where it stumbles:
- Complex logic paths
- Context-heavy features
- Anything requiring real systems thinking (new architectures, etc.)
- Anything stateful or edge-case-heavy

I tracked bugs and the % of total dev time spent fixing AI-generated code across each project. My learning: overreliance on AI doesn't just plateau, it backfires. AI doesn't write perfect code. The future is a collaboration, not a handoff.

Would love to hear how others are navigating this balance. #LLM #VibeCoding #AI #DeveloperTools #Dev
-
Most developers treat AI coding agents like magical refactoring engines, but few have a system, and that's a mistake. Without structure, coding with tools like Cursor, Windsurf, and Claude Code often leads to files rearranged beyond recognition, subtle bugs, and endless debugging.

In my new post, I share the frameworks and tactics I developed to move from chaotic vibe-coding sessions to consistently building better, faster, and more securely with AI. Three key shifts I cover:
-> Planning like a PM – starting every project with a PRD and a modular project-docs folder radically improves AI output quality
-> Choosing the right models – using reasoning-heavy models like Claude 3.7 Sonnet or o3 for planning, and faster models like Gemini 2.5 Pro for focused implementation
-> Breaking work into atomic components – isolating tasks improves quality, speeds up debugging, and minimizes context drift

Plus, I share under-the-radar tactics like:
(1) Using .cursor/rules to programmatically guide your agent's behavior (see the sketch after this post)
(2) Quickly spinning up an MCP server for any Mintlify-powered API
(3) Building a security-first mindset into your AI-assisted workflows

This is the first post in my new AI Coding Series. Future posts will dive deeper into building secure apps with AI IDEs like Cursor and Windsurf, advanced rules engineering, and real-world examples from my projects.

Post + NotebookLM-powered podcast: https://lnkd.in/gTydCV9b
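On tactic (1): Cursor loads rule files from the .cursor/rules directory and applies them to matching files. A hedged sketch of what one such rule file might look like; the description, glob pattern, and rule text here are invented for illustration:

```
---
description: Conventions for API route handlers
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate every request body before use; never trust client input.
- Return typed error objects, not bare strings.
- Keep each handler to a single responsibility; extract shared logic
  into a helper module instead of duplicating it.
```

Because the rule is scoped by a glob, the agent only sees it when touching matching files, which keeps guidance relevant without bloating every prompt.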
-
Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions:
🔹 Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often.
🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've tried.
🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base.
🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses.
🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid.
🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI.
🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups (a small example follows this post).
🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon.
🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases.
🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer.
By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
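As a concrete illustration of the Show and Specify principles, pasting the exact target shape into the prompt removes guesswork. A hypothetical TypeScript example; the interface and field names are invented:

```ts
// Instead of describing the shape in prose, paste it into the prompt:
// "Parse rows from the CSV export into objects matching this interface.
//  Skip rows with a missing email rather than defaulting it."
interface ImportedUser {
  id: string;
  email: string;
  signupDate: Date; // parsed from an ISO 8601 string in the CSV
  plan: "free" | "pro"; // constrains the AI to the only valid values
}
```

One small interface communicates types, optionality, and valid values more precisely than a paragraph of description would.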
-
Most people think AI coding is just about asking the right prompts. That's true. Until you hit a wall. Then:
• Confusing outputs
• Misaligned tasks
• Wasted hours rewriting AI's work

I've been there. So I tried something different: instead of asking what to build, I started asking how to build it better. That's when I found a new rhythm.

1/ Start with a clear roadmap → Let AI break down the task, edge cases, and blockers
2/ Drop that into your doc → Friction disappears when you plan before you build
3/ Guide the AI with instruction files → Not "make an auth flow," but "build a daily script using X, output to Y" (a sketch follows this post)
4/ Work in small loops → Plan → Execute → Review → Repeat
5/ Pin files, configs, and past bugs → AI needs context just like humans do
6/ Use version control like a pro → Clean commits, readable PRs, checkpoint history
7/ When things go wrong? → Ask it what changed, why, and where

You don't need more AI. You need a better system to work with it. That's how you build AI apps in hours, not weeks.
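To make step 3 concrete, here is what such an instruction file might look like. Everything in it (the file name, schedule, API, and output path) is invented for illustration:

```
# instructions/daily-report.md (hypothetical)
Goal: a script that runs daily at 06:00 UTC.
Input: events from the analytics API, last 24 hours only.
Output: append one summary row to reports/daily.csv.
Constraints: standard library where possible; fail loudly on
missing credentials; at most 3 retry attempts.
```

The point is that inputs, outputs, and constraints are pinned down before the AI writes a line, so "done" is unambiguous.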
-
Just published: "The Prompt Engineering Playbook for Programmers"

My latest free write-up: https://lnkd.in/g9Kxa7hG ✍️

After working with AI coding assistants daily, I've learned that the quality of your output depends entirely on the quality of your prompts. A vague "fix my code" gets you generic advice, while a well-crafted prompt can produce thoughtful, accurate solutions. I've distilled the key patterns and frameworks that actually work into a playbook covering:
✅ Patterns that work - Role-playing, rich context, specific goals, and iterative refinement
✅ Debugging strategies - From "my code doesn't work" to surgical problem-solving
✅ Refactoring techniques - How to get AI to improve performance, readability, and maintainability
✅ Feature implementation - Building new functionality step-by-step with AI as your pair programmer
✅ Common anti-patterns - What NOT to do (and how to fix it when things go wrong)

The article includes side-by-side comparisons of poor vs. improved prompts with actual AI responses, plus commentary on why one succeeds where the other fails.

Key insight: Treat AI coding assistants like very literal, knowledgeable collaborators. The more context you provide, the better the output. It's not magic - it's about communication.

Whether you're debugging a tricky React hook, refactoring legacy code, or implementing a new feature, these techniques can turn AI from a frustrating tool into a true development partner. #ai #softwareengineering #programming
-
AI coding agents are amazing. I hear too much hate for them. Here are a few techniques I use to get great results:

𝗦𝘁𝗮𝗿𝘁 𝘄𝗶𝘁𝗵 𝗮𝗻 “𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗮𝗹 𝗣𝗿𝗶𝗻𝗰𝗶𝗽𝗹𝗲𝘀 𝘁𝗼 𝗖𝗼𝗱𝗲 𝗕𝘆” 𝗰𝗵𝗲𝗮𝘁-𝘀𝗵𝗲𝗲𝘁: Start with a persona prompt that tells it to be a world-class engineer who follows key principles such as: single responsibility, avoid global state, use clear names, write pure functions, absolutely no hacks allowed, etc. (a small code sketch of these principles follows this post).

𝗚𝗶𝘃𝗲 𝘁𝗵𝗲 𝗮𝗴𝗲𝗻𝘁 𝗮𝗻 𝗮𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝘀𝗻𝗮𝗽𝘀𝗵𝗼𝘁 𝗳𝗶𝗿𝘀𝘁: Tell it to draft a document that outlines the codebase or the specific files you want to change. Then paste that into the prompt to ground the agent in context.

𝗗𝗼𝗻'𝘁 𝗹𝗲𝘁 𝘁𝗵𝗲 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗴𝗲𝘁 𝘁𝗼𝗼 𝗹𝗼𝗻𝗴: Performance decreases as the length of your session increases. Frequently stop your session and restart, prime it with context again (architectural principles, codebase outline, etc.), and tell it your new tasks. Do not stay on one session long.

𝗧𝗲𝗹𝗹 𝗶𝘁 𝗻𝗼𝘁 𝘁𝗼 𝗰𝗼𝗱𝗲: Command it to explain its approach before coding. Literally tell it "Do not code," and only allow it to code once you see that the approach it came up with makes sense.

𝗣𝗿𝗼𝘁𝗼𝘁𝘆𝗽𝗲 → 𝗥𝗲𝗳𝗮𝗰𝘁𝗼𝗿 → 𝗥𝗲𝗽𝗲𝗮𝘁: Vibe coding is perfectly fine in small doses. But make sure to stop adding features and tell it to refactor if things get messy. It's easier to get LLMs to refine an existing codebase to be clean than to start out clean.

𝗜𝗻𝘃𝗲𝘀𝘁𝗶𝗴𝗮𝘁𝗲 𝗯𝗲𝗳𝗼𝗿𝗲 𝗷𝘂𝗺𝗽𝗶𝗻𝗴 𝗶𝗻𝘁𝗼 𝗯𝘂𝗴 𝗳𝗶𝘅𝗶𝗻𝗴: When you have bugs, tell the agent to investigate and explain the solution. Again, tell it "Do not code," and don't let it code until you're satisfied with its explanation.

𝗪𝗼𝗿𝗸 𝗶𝘁𝗲𝗿𝗮𝘁𝗶𝘃𝗲𝗹𝘆: Have the agent take on small-to-medium changes at a time; don't expect to one-shot a huge codebase!
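The cheat-sheet principles above are concrete enough to check in the agent's output. A minimal TypeScript sketch (invented example) of the difference between code that violates them and code that follows them:

```ts
// ❌ What the cheat-sheet forbids: hidden global state and vague names.
let total = 0;
function calc(x: number): number {
  total += x; // mutates module-level state; hard to test in isolation
  return total;
}

// ✅ What it asks for: a pure, single-responsibility function with clear names.
function addLineItem(runningTotal: number, lineItemPrice: number): number {
  return runningTotal + lineItemPrice; // same inputs always give the same output
}
```

Spelling the principles out in the persona prompt gives you a rubric to push back with when the agent's output drifts toward the first style.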
-
Every developer I know is "vibe coding" with AI -- and they're doing it completely wrong.

We need a new term for AI-assisted coding. Something that isn't "vibing" (which sounds like someone dosing on some shrooms over in a van down by the river). I propose "stoplight engineering" 🚦 because it's how I build apps these days. Here are the steps:

1. 🔴 RED LIGHT: Write requirements (without AI)
For my custom exam app, I wrote requirements for the ENTIRE thing before coding. It took a month! I thought through every feature I wanted to add. Dreamed up scenarios that might happen. Followed edge cases in my head, as much as I could. How does this feature affect that one? What happens when someone clicks *here*? What did my "back of the napkin" notes miss?
Devs skip this because it's "boring" -- and it's usually someone else's job. But THIS IS MY FAVORITE STEP! Using your ACTUAL MIND to figure things out. No AI. No hallucinations. Just write, refine, iterate. It's coding without code.

2. 🟡 YELLOW LIGHT: Feed requirements to AI (but scrutinize everything)
Then I fire up Claude Code for the coding work humans no longer need to do by hand. Here's where everyone screws up -- they think AI writes perfect code on the first shot. The first output might be great. But it also might be garbage. It doesn't matter, because this is the YELLOW LIGHT phase 🟡 No vibing here. I review every. single. line. Check coding standards, design patterns, everything. I push back constantly before accepting anything. You wouldn't blindly accept a PR, so don't do it here either.
The idea is to use the AI as an assistant to your brain. This is the step that requires maximum brain power. You're teaching the AI how to write YOUR code, like a senior guiding a junior (which is what it is, since no one hires juniors anymore).

3. 🟢 GREEN LIGHT: Auto-accept (after the foundation is set)
Long coding session? Now I vibe a little 😅 Full green 🟢 Once it knows my standards and patterns, I shift+tab in Claude Code and grab coffee. Maybe it runs 5 minutes. Maybe 15. But this is where the agentic process takes over. It's super scary for some devs to accept, but with the proper foundation in place, and it knowing how I code... the AI builds pretty much exactly what I want, at super high quality.

The problem is that most developers jump straight to green. But the red and yellow phases are what create AWESOME results. And you can't reach this pure-genius-level-vibe-coding-rockstar level unless you already know how to code, know some design patterns, and understand programming fundamentals. This is why very senior-level developers, solution architects, and technical PMs will be safe for many, many years (maybe forever?). But it's also why I think every position below that is in immediate, grave danger.

So... what do you think of stoplight engineering? If you've "vibed," did you get crappy code when you didn't push back? 👇