I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts, or manually providing context? The latest paradigm is Agent Driven Development, & here are some tips that have helped me get good at taming LLMs to generate high-quality code.

1. Clear & focused prompting
❌ "Add some animations to make the UI super sleek"
✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.

2. Keep it simple, stupid
❌ "Add a new page to manage user settings, also move the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable, & also ensure the mobile view works, right now there is weird overlap"
✅ "Add a new page to manage user settings, ensure only editable settings can be changed"
Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.

3. Don't argue
❌ "No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!"
✅ "Instead of using package xyz, can you recreate the functionality using the standard library?"
When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-on prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track, & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.

4. Embrace agentic coding
AI coding assistants have access to a ton of different tools, can do a ton of reasoning on their own, & don't require nearly as much hand-holding.
You may feel like a babysitter instead of a programmer at first, but your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.

5. Verify
With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the code generated is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.

6. Send options, thx
I had a boss who would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & an opportunity to learn.

7. Have fun
I love coding; I've been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust, & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!
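The "clear & focused" and "keep it simple" tips above can be sketched as a tiny helper that emits one self-contained prompt per task instead of a single compound request. This is only an illustration of the idea; the function name and task text are made up for this example, not part of any tool.

```python
# Illustrative sketch of tips 1 & 2: one focused, detailed prompt per task
# instead of a compound "do three things at once" request.
# The helper name and task wording are invented for this example.

def focused_prompts(tasks):
    """Turn (goal, detail) pairs into one self-contained prompt each."""
    return [f"{goal}. {detail}." for goal, detail in tasks]

tasks = [
    ("Add a new page to manage user settings",
     "Ensure only editable settings can be changed"),
    ("Move the footer menu into the sidebar",
     "Endless scrolling currently makes it unreachable at the bottom"),
]

# Each prompt would be sent as its own agent run, reviewed, then committed
# before starting the next one.
for prompt in focused_prompts(tasks):
    print(prompt)
```

Running each prompt separately also makes tip 3 cheaper: if one task goes sideways, you undo one small change set instead of untangling three interleaved ones.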
How to Adapt Coding Skills for AI
Summary
Adapting coding skills for AI involves learning to interact effectively with AI tools, understanding their capabilities, and rethinking traditional development processes for modern, AI-assisted workflows. This ensures developers can maximize productivity and stay relevant in the evolving tech landscape.
- Master prompt crafting: Learn to write clear and concise prompts that provide AI tools with specific instructions, examples, and desired outcomes to improve results.
- Focus on quality control: Always review and validate the AI-generated code by cross-checking it with your programming standards, design patterns, and project requirements.
- Embrace new roles: Shift from traditional coding to overseeing AI as a collaborator, guiding it clearly while leveraging your expertise to ensure the final output aligns with your vision.
-
Your engineers don’t need “AI training.” They need to learn how to prompt. We’re building AI agents at Optimal AI, and here’s what’s clear: prompting is the new interface between humans and machines. If you're serious about building an AI-native engineering team, you need to train prompting like a muscle, not a magic trick. Here’s what that looks like in practice:

🧱 1. Start with prompt structure. Prompting well is like writing clean function signatures. “You are a senior engineer. Review this PR for security and performance risks. Respond in markdown with line comments and a summary.”

🎯 2. Add tight constraints. AI will try to do everything unless you scope it. “Do not suggest style changes. Focus only on logic bugs and unused code.”

📂 3. Use examples like test cases. The best prompting strategy? Show, don’t just tell. “Here’s a great PR comment. Now generate similar feedback for this diff.”

🧪 4. Prompt like you debug. Engineers already know how to iterate. Prompting is no different. Adjust instructions → rerun → check output → repeat.

🧠 5. Make it part of code review culture. The future dev stack = GitHub + CI + Agents (like Optibot). If your team can't prompt an agent to triage a PR, they’re falling behind.

Your devs don’t need more ChatGPT hacks. They need to think in prompts, like they think in functions, tests, and logs. That’s how you scale engineering productivity with AI.
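The structure, constraints, and example steps above can be sketched as a single chat-message builder. The prompt strings come from the post itself; the function name and message layout are assumptions for illustration, not an Optimal AI or Optibot API.

```python
# Sketch of steps 1-3: role/structure + tight constraints + a worked example,
# assembled as a chat-style messages list. Function name and message layout
# are illustrative assumptions, not a real agent API.

def build_review_prompt(diff, example_comment):
    structure = ("You are a senior engineer. Review this PR for security "
                 "and performance risks. Respond in markdown with line "
                 "comments and a summary.")
    constraints = ("Do not suggest style changes. Focus only on logic bugs "
                   "and unused code.")
    example = (f"Here's a great PR comment:\n{example_comment}\n"
               f"Now generate similar feedback for this diff:\n{diff}")
    return [
        {"role": "system", "content": structure},    # 🧱 structure
        {"role": "system", "content": constraints},  # 🎯 constraints
        {"role": "user", "content": example},        # 📂 show, don't tell
    ]
```

Step 4 then falls out naturally: tweak these strings, rerun the agent, inspect the output, repeat, exactly like a debug loop.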
-
Every developer I know is "vibe coding" with AI -- and they're doing it completely wrong. We need a new term for AI-assisted coding. Something that isn't "vibing" (which sounds like someone dosing on some shrooms over in a van down by the river). I propose "stoplight engineering" 🚦 because it's how I build apps these days. Here are the steps:

1. 🔴 RED LIGHT: Write requirements (without AI)
For my custom exam app, I wrote requirements for the ENTIRE thing before coding. It took a month! I thought through every feature I wanted to add. Dreamed up scenarios that might happen. Followed as many edge cases in my head as I could. How does this feature affect that one? What happens when someone clicks *here*? What did my "back of the napkin" notes miss? Devs skip this because it's "boring" -- and it's usually someone else's job. But THIS IS MY FAVORITE STEP! Using your ACTUAL MIND to figure things out. No AI. No hallucinations. Just write, refine, iterate. It's coding without code.

2. 🟡 YELLOW LIGHT: Feed requirements to AI (but scrutinize everything)
Then I fire up Claude Code for the coding work that's now obsolete for humans. Here's where everyone screws up -- they think AI writes perfect code on the first shot. The first output might be great. But it also might be garbage. It doesn't matter, because this is the YELLOW LIGHT phase 🟡 No vibing here. I review every. single. line. Check coding standards, design patterns, everything. I push back constantly before accepting anything. You wouldn't just blindly accept a PR, so you don't want to do it here either. The idea is to use the AI as an assistant to your brain. This is the step that requires maximum brain power. You're teaching the AI how to write YOUR code, like a senior guiding a junior (which is what it is, since no one hires juniors anymore).

3. 🟢 GREEN LIGHT: Auto-accept (after foundation is set)
Long coding session?
Now I vibe a little 😅 Full green 🟢 Once it knows my standards and patterns, I shift+tab in Claude Code and grab coffee. Maybe it runs 5 minutes. Maybe 15. But this is where the agentic process takes over. It's super scary for some devs to accept, but with the proper foundation in place, and it knowing how I code... the AI builds pretty much exactly what I want, and at a super high quality.

The problem is that most developers jump straight to green. The red and yellow phases are what create AWESOME results. And you can't get to this pure-genius-level-vibe-coding-rockstar level unless you already know how to code, know some design patterns, and understand programming fundamentals. This is why very senior-level developers, solution architects, and technical PMs will be safe for many, many years (maybe forever?). But it's also why I think every position below that is in immediate, grave danger.

So... what do you think of stoplight engineering? If you've "vibed," did you get crappy code when you didn't push back? 👇
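The yellow-to-green flow above can be sketched as a small review loop: generate a change, push back until it meets your standards, then accept; if it never converges, undo and start over with a better prompt. All the callables here (`generate`, `meets_standards`, `push_back`) are stand-ins for the human-plus-agent steps described in the post, not real APIs.

```python
# Hypothetical sketch of stoplight engineering's yellow phase as a loop.
# `generate`, `meets_standards`, and `push_back` are illustrative stand-ins
# for the agent run, the human review, and the follow-up prompt.

def yellow_light(task, generate, meets_standards, push_back, max_rounds=3):
    change = generate(task)
    for _ in range(max_rounds):
        if meets_standards(change):
            return change                               # 🟢 accept
        change = generate(push_back(task, change))      # 🟡 iterate with feedback
    return None                                         # 🔴 undo, rewrite the prompt
```

The `max_rounds` cap mirrors the earlier post's advice: a couple of follow-ups at most, then revert rather than arguing the agent into a deeper mess.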
-
Just published: "The Prompt Engineering Playbook for Programmers" My latest free write-up: https://lnkd.in/g9Kxa7hG ✍️

After working with AI coding assistants daily, I've learned that the quality of your output depends entirely on the quality of your prompts. A vague "fix my code" gets you generic advice, while a well-crafted prompt can produce thoughtful, accurate solutions. I've distilled the key patterns and frameworks that actually work into a playbook covering:
✅ Patterns that work - role-playing, rich context, specific goals, and iterative refinement
✅ Debugging strategies - from "my code doesn't work" to surgical problem-solving
✅ Refactoring techniques - how to get AI to improve performance, readability, and maintainability
✅ Feature implementation - building new functionality step-by-step with AI as your pair programmer
✅ Common anti-patterns - what NOT to do (and how to fix it when things go wrong)

The article includes side-by-side comparisons of poor vs. improved prompts with actual AI responses, plus commentary on why one succeeds where the other fails. Key insight: treat AI coding assistants like very literal, knowledgeable collaborators. The more context you provide, the better the output. It's not magic - it's about communication. Whether you're debugging a tricky React hook, refactoring legacy code, or implementing a new feature, these techniques can turn AI from a frustrating tool into a true development partner. #ai #softwareengineering #programming
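The "vague vs. well-crafted" contrast in that post can be sketched as a small builder that upgrades a bare "fix my code" into a context-rich prompt bundling the code, the exact error, and the expected behavior. The function and parameter names are illustrative, not taken from the playbook.

```python
# Illustrative rich-context debugging prompt, versus a bare "fix my code".
# Function and parameter names are invented for this sketch.

def debug_prompt(code, error, expected):
    return (
        "I have a bug in the following code:\n"
        f"```\n{code}\n```\n"
        f"It fails with: {error}\n"
        f"Expected behavior: {expected}\n"
        "Explain the likely root cause before proposing a fix."
    )

prompt = debug_prompt(
    code="average = sum(prices) / len(prices)",
    error="ZeroDivisionError: division by zero",
    expected="Return 0 when the price list is empty",
)
```

Asking for the root cause before the fix turns the assistant from an autocomplete into a reviewer: you can check its diagnosis against your own before accepting any patch.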