How to Support Developers With AI

Explore top LinkedIn content from expert professionals.

Summary

Supporting developers with AI involves creating structured workflows, clear communication, and strategic use of tools to enhance productivity and code quality. By focusing on planning, prompting clarity, and iterative processes, developers can turn AI from a basic assistant into a true collaborative partner.

  • Start with structure: Develop a clear plan, including project requirements and modular documentation, before integrating AI into your workflow.
  • Communicate clearly: Use specific, concise prompts when interacting with AI tools to ensure accurate and relevant output.
  • Break tasks into steps: Divide larger projects into smaller, manageable chunks to improve debugging and maintain consistent progress.
Summarized by AI based on LinkedIn member posts
  • View profile for Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    40,822 followers

    Most developers treat AI coding agents like magical refactoring engines, but few have a system, and that's wrong. Without structure, coding with tools like Cursor, Windsurf, and Claude Code often leads to files rearranged beyond recognition, subtle bugs, and endless debugging. In my new post, I share the frameworks and tactics I developed to move from chaotic vibe-coding sessions to consistently building better, faster, and more securely with AI.

    Three key shifts I cover:
    -> Planning like a PM – starting every project with a PRD and a modular project-docs folder radically improves AI output quality
    -> Choosing the right models – using reasoning-heavy models like Claude 3.7 Sonnet or o3 for planning, and faster models like Gemini 2.5 Pro for focused implementation
    -> Breaking work into atomic components – isolating tasks improves quality, speeds up debugging, and minimizes context drift

    Plus, I share under-the-radar tactics like:
    (1) Using .cursor/rules to programmatically guide your agent's behavior
    (2) Quickly spinning up an MCP server for any Mintlify-powered API
    (3) Building a security-first mindset into your AI-assisted workflows

    This is the first post in my new AI Coding Series. Future posts will dive deeper into building secure apps with AI IDEs like Cursor and Windsurf, advanced rules engineering, and real-world examples from my projects. Post + NotebookLM-powered podcast: https://lnkd.in/gTydCV9b
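
A rules file like the one mentioned above is just a small instruction document the agent loads automatically. As a rough illustration of what such a file (say, `.cursor/rules/python-style.mdc`) might contain — the file name and rule text here are invented, and the exact frontmatter fields may vary by Cursor version:

```
---
description: Project-wide Python conventions for the agent
globs: ["**/*.py"]
alwaysApply: false
---

- Use type hints on all public functions.
- Prefer pathlib over os.path for filesystem work.
- Never add a new dependency without asking first.
- Keep functions short; extract helpers instead of nesting logic.
```

Because the rule is scoped with `globs`, it is pulled into context only when matching files are touched, which keeps prompts lean.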

  • View profile for Matt Palmer

    developer relations at replit

    15,801 followers

    Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions:

    🔹 Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often.
    🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've tried.
    🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base.
    🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses.
    🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid.
    🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI.
    🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups.
    🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon.
    🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases.
    🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer.

    By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
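
Several of these principles (Instruct, Select, Show, Specify) can be mechanized rather than remembered. A minimal sketch in Python — the function name, section headings, and example values are all invented for illustration:

```python
def build_prompt(goal: str, context: str = "", examples: str = "",
                 constraints: str = "") -> str:
    """Assemble a structured prompt: a clear goal first, then only the
    context, examples, and constraints that are actually needed."""
    sections = [("Goal", goal), ("Context", context),
                ("Examples", examples), ("Constraints", constraints)]
    # Drop empty sections so the model is not flooded with filler
    # (the "Select" principle: focused context only).
    parts = [f"## {name}\n{body}" for name, body in sections if body]
    return "\n\n".join(parts)

prompt = build_prompt(
    goal="Add a fade-in animation to the modal dialog.",
    context="React 18 app; the modal lives in src/components/Modal.tsx.",
    constraints="Use CSS transitions only; do not add dependencies.",
)
print(prompt)  # Goal, Context, and Constraints sections; Examples omitted
```

A template like this makes it harder to forget the Specify step: if the Constraints section is empty, that absence is visible before you hit send.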

  • View profile for Ado Kukic

    Community, Claude, Code

    5,369 followers

    I've been using AI coding tools for a while now & it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts or manually providing context? The latest paradigm is Agent Driven Development & here are some tips that have helped me get good at taming LLMs to generate high-quality code.

    1. Clear & focused prompting
    ❌ "Add some animations to make the UI super sleek"
    ✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
    Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.

    2. Keep it simple, stupid
    ❌ Add a new page to manage user settings, also move the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable & also ensure the mobile view works, right now there is weird overlap
    ✅ Add a new page to manage user settings, ensure only editable settings can be changed.
    Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.

    3. Don't argue
    ❌ No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!
    ✅ Instead of using package xyz, can you recreate the functionality using the standard library?
    When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-up prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track & if it's still off base, I will undo all the changes & start over. It may seem counterintuitive, but it will save you a ton of time overall.

    4. Embrace agentic coding
    AI coding assistants have access to a ton of different tools, can do a ton of reasoning on their own, & don't require nearly as much hand-holding – no more feeling like a babysitter instead of a programmer. Your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.

    5. Verify
    With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the generated code is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.

    6. Send options, thx
    I had a boss that would always ask for multiple options & often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand & is an opportunity to learn.

    7. Have fun
    I love coding, been doing it since I was 10. I've done OOP & functional programming, SQL & NoSQL, PHP, Go, Rust & I've never had more fun or been more creative than coding with AI. Coding is evolving, have fun & let's ship some crazy stuff!

  • View profile for Addy Osmani

    Engineering Leader, Google Chrome. Best-selling Author. Speaker. AI, DX, UX. I want to see you win.

    234,906 followers

    Just published: "The Prompt Engineering Playbook for Programmers" – my latest free write-up: https://lnkd.in/g9Kxa7hG ✍️

    After working with AI coding assistants daily, I've learned that the quality of your output depends entirely on the quality of your prompts. A vague "fix my code" gets you generic advice, while a well-crafted prompt can produce thoughtful, accurate solutions. I've distilled the key patterns and frameworks that actually work into a playbook covering:

    ✅ Patterns that work – role-playing, rich context, specific goals, and iterative refinement
    ✅ Debugging strategies – from "my code doesn't work" to surgical problem-solving
    ✅ Refactoring techniques – how to get AI to improve performance, readability, and maintainability
    ✅ Feature implementation – building new functionality step-by-step with AI as your pair programmer
    ✅ Common anti-patterns – what NOT to do (and how to fix it when things go wrong)

    The article includes side-by-side comparisons of poor vs. improved prompts with actual AI responses, plus commentary on why one succeeds where the other fails. Key insight: treat AI coding assistants like very literal, knowledgeable collaborators. The more context you provide, the better the output. It's not magic – it's about communication. Whether you're debugging a tricky React hook, refactoring legacy code, or implementing a new feature, these techniques can turn AI from a frustrating tool into a true development partner. #ai #softwareengineering #programming

  • View profile for Mark Shust

    Founder, Educator & Developer @ M.academy. The simplest way to learn Magento. Currently exploring building production apps with Claude Code & AI.

    25,224 followers

    Every developer I know is "vibe coding" with AI -- and they're doing it completely wrong. We need a new term for AI-assisted coding. Something that isn't "vibing" (which sounds like someone dosing on some shrooms over in a van down by the river). I propose "stoplight engineering" 🚦 because it's how I build apps these days. Here are the steps:

    1. 🔴 RED LIGHT: Write requirements (without AI)
    For my custom exam app, I wrote requirements for the ENTIRE thing before coding. It took a month! I thought through every feature I wanted to add. Dreamed up scenarios that might happen. Followed edge cases in my head, as much as I could. How does this feature affect that one? What happens when someone clicks *here*? What did my "back of the napkin" notes miss? Devs skip this because it's "boring" -- and it's usually someone else's job. But THIS IS MY FAVORITE STEP! Using your ACTUAL MIND to figure things out. No AI. No hallucinations. Just write, refine, iterate. It's coding without code.

    2. 🟡 YELLOW LIGHT: Feed requirements to AI (but scrutinize everything)
    Then I fire up Claude Code for the coding work that's now obsolete for humans. Here's where everyone screws up -- they think AI writes perfect code on the first shot. The first output might be great. But it also might be garbage. But it doesn't matter, because this is the YELLOW LIGHT phase 🟡 No vibing here. I review every. single. line. Check coding standards, design patterns, everything. I push back constantly before accepting anything. You wouldn't just blindly accept a PR, so don't do it here either. The idea is to use the AI as an assistant to your brain. This is the step that requires maximum brain power. You're teaching the AI how to write YOUR code, like a senior guiding a junior (which is what it is, since no one hires juniors anymore).

    3. 🟢 GREEN LIGHT: Auto-accept (after the foundation is set)
    Long coding session? Now I vibe a little 😅 Full green 🟢 Once it knows my standards and patterns, I shift+tab in Claude Code and grab coffee. Maybe it runs 5 minutes. Maybe 15. But this is where the agentic process takes over. It's super scary for some devs to accept, but with the proper foundation in place, and it knowing how I code... the AI builds pretty much exactly what I want, and at super high quality.

    The problem is that most developers jump straight to green. But the red and yellow phases are what create AWESOME results. You can't reach this pure-genius-level-vibe-coding-rockstar level unless you already know how to code, know some design patterns, and understand programming fundamentals. This is why very senior-level developers, solution architects, and technical PMs will be safe for many, many years (maybe forever?). But it's also why I think every position below that is in immediate, grave danger. So... what do you think of stoplight engineering? If you've "vibed," did you get crappy code when you didn't push back? 👇

  • View profile for Swami Sivasubramanian

    VP, AWS Agentic AI

    169,983 followers

    The pace of innovation led by generative AI is unprecedented. We're seeing new use cases emerge across every industry that would not be possible without this technology. So, how can you help every developer build with GenAI in this rapidly changing environment? Here is the advice I shared during my keynote yesterday at #VivaTech:

    🟠 Start your GenAI journey on Amazon Bedrock and give developers access to the broadest selection of first- and third-party LLMs and FMs from leading AI companies like Anthropic, Cohere, Meta, Mistral, and more.
    🟠 Your organization's data is the key differentiator between generic GenAI applications and those that know your business and customers deeply. Use enterprise data to customize foundation models and maximize their value.
    🟠 Tackle repetitive coding tasks with Amazon Q Developer and adopt autonomous agents to remove the heavy lifting from tasks like coding, writing tests, app upgrades, and security scanning. These assistants can also help employees use the right information to do their work better.
    🟠 Build responsibly with safeguards for model outputs and receive model-evaluation support with Guardrails for Amazon Bedrock.

    How teams invent today and tomorrow will have a profound impact on the world. That's why we're making generative AI accessible to customers of all sizes and technical abilities.
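
For the Bedrock suggestion above, invoking a hosted model is a single API call. A sketch using boto3's Converse API — the model ID below is illustrative (check which models are enabled in your account and region), and the helper function is our own invention:

```python
def build_converse_request(model_id: str, user_text: str,
                           max_tokens: int = 512) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    "Draft unit tests for the payment-retry module.",
)

# To actually send it (requires AWS credentials and Bedrock model access):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
```

Keeping request construction in a pure function like this makes the payload easy to unit-test without touching the network.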

  • View profile for Pan Wu

    Senior Data Science Manager at Meta

    49,023 followers

    Large Language Models (LLMs) possess vast capabilities that extend far beyond conversational AI, and companies are actively exploring their potential. In a recent tech blog, engineers at Faire share how they're leveraging LLMs to automate key aspects of code reviews, unlocking new ways to enhance developer productivity.

    At Faire, code reviews are an essential part of the development process. While some aspects require deep project context, many follow standard best practices that do not. These include enforcing clear titles and descriptions, ensuring sufficient test coverage, adhering to style guides, and detecting backward-incompatible changes. LLMs are particularly well suited to handling these routine review tasks. With access to relevant pull request data – such as metadata, diffs, build logs, and test coverage reports – LLMs can efficiently flag potential issues, suggest improvements, and even automate fixes for simple problems.

    To facilitate this, the team built an internal LLM orchestrator service called Fairey to streamline AI-powered code reviews. Fairey processes chat-based requests by breaking them down into structured steps, such as calling an LLM, retrieving necessary context, and executing functions. It integrates with OpenAI's Assistants API, allowing engineers to fine-tune assistant behavior and incorporate capabilities like Retrieval-Augmented Generation (RAG). This approach enhances accuracy, ensures context awareness, and makes AI-driven reviews genuinely useful to developers.

    By applying LLMs to code reviews, Faire demonstrates how AI can enhance developer workflows, boosting efficiency while maintaining high code quality. As companies continue exploring AI applications beyond chat, tools like Fairey provide a glimpse into the future of intelligent software development.
#Machinelearning #Artificialintelligence #AI #LLM #codereview #Productivity #SnacksWeeklyonDataScience – – –  Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:    -- Spotify: https://lnkd.in/gKgaMvbh   -- Apple Podcast: https://lnkd.in/gj6aPBBY    -- Youtube: https://lnkd.in/gcwPeBmR https://lnkd.in/deaMsxZy 
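
Fairey itself is internal to Faire, but the routine, context-free checks the post describes (clear titles and descriptions, accompanying tests) can be approximated with plain rules before an LLM is even involved. A toy sketch — the PR dict shape, field names, and thresholds are all invented for illustration:

```python
def routine_review_flags(pr: dict) -> list[str]:
    """Flag routine issues on a pull request represented as a dict with
    keys 'title', 'description', and 'files' (list of changed paths).
    These are the context-free checks that need no deep project knowledge."""
    flags = []
    if len(pr.get("title", "")) < 10:
        flags.append("title too short to be descriptive")
    if not pr.get("description"):
        flags.append("missing description")
    files = pr.get("files", [])
    touches_code = any(f.endswith(".py") for f in files)
    touches_tests = any("test" in f for f in files)
    if touches_code and not touches_tests:
        flags.append("code changed without accompanying tests")
    return flags

# A sloppy PR trips all three checks; a well-formed one trips none.
print(routine_review_flags({
    "title": "Fix",
    "description": "",
    "files": ["app/models.py"],
}))
```

In a Fairey-style pipeline, flags like these would be attached to the pull request data handed to the LLM, so the model spends its context window on the judgment calls instead of the boilerplate.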

  • View profile for Dylan Davis

    I help mid-size teams with AI automation | Save time, cut costs, boost revenue | No-fluff tips that work

    5,309 followers

    90% of code written by developers using Windsurf's agentic IDE is now generated by AI. This isn't science fiction. It's happening today. In 2022, auto-complete was revolutionary at 20-30% of code. Now we've entered the age of AI agents in software development. 7 ways agentic development environments are transforming coding today – with glimpses of tomorrow:

    1️⃣ Unified Timeline (Now): Today's AI agents operate on a shared timeline with you, understanding your actions implicitly – viewing files, navigating code, and making edits without conflicting with your changes.
    2️⃣ No More Copy-Paste (Now): Modern agent-based IDEs eliminate copy-pasting from chat windows. The agent lives where you work, seeing your context without you needing to explain it repeatedly.
    3️⃣ Terminal Integration (Now): Commands run directly in your existing environment. When the agent installs a package, it goes into the same environment you're using – no more separate sandboxes.
    4️⃣ Auto-Generated Memories (Now & Evolving): Leading AI development tools build memory banks of your preferences. Tell it once about your project architecture, and it remembers. By 2025, experts predict 99% of rules files will be unnecessary.
    5️⃣ Implicit Documentation (Now & Evolving): Modern agents automatically detect your packages and dependencies, then find the right documentation without you needing to specify versions.
    6️⃣ Beyond Context Prompting (Now & Evolving): The old '@file' and '@web' patterns are becoming obsolete. Today's advanced agents dynamically infer relationships between code and documents most of the time.
    7️⃣ Future Vision (Coming Soon): Soon, agents will anticipate 10-30 steps ahead, writing unit tests before you finish functions and performing codebase-wide refactors from a single variable edit.

    The most striking realization: this isn't the future. It's happening now. When developers have agents that understand their implicit actions, remember their preferences, and improve with advancing models, productivity explodes.

    ---
    Are you still copy-pasting from ChatGPT, or have you embraced agentic development tools in your workflow? [Insights inspired by Kevin Hou's presentation at the AI Engineering Summit]

    ---
    Enjoyed this? 2 quick things:
    - Follow me for more AI automation insights
    - Share this with a teammate

  • View profile for Muazma Zahid

    Data and AI Leader | Advisor | Speaker

    17,613 followers

    Happy Friday everyone, this week in #learnwithmz let's dive into something close to every developer's heart: AI Coding Tools.

    As AI revolutionizes the way we write, debug, and manage code, it's important to identify which tools truly deliver value. Over the course of two weeks, I tested some of the most popular options by building a full-stack app prototype with each tool. Here's a quick breakdown to help you find the best fit for your specific needs:

    🏆 Best Overall: GitHub Copilot – Seamless integration with your IDE. Great for inline suggestions and debugging. The new Copilot Chat feature allows conversational debugging. Learn more: https://lnkd.in/g4mdv4Ej
    💡 Best for Non-Technical Users: Vercel v0 – Intuitive and beginner-friendly. Component-specific editing via AI makes prototyping easier. Learn more: https://vercel.com/
    💻 Best for Full-Stack Cloud Development: Replit Ghostwriter – Great for collaborative, cloud-based projects. Comes with built-in hosting capabilities. Learn more: https://replit.com/
    🚀 Emerging Tool to Watch: Cursor – Excellent Copilot alternative. Ideal for agent-driven workflows. Learn more: https://www.cursor.com/
    💎 Notable Mention: Cline – Completely open-source and free alternative to Cursor and Windsurf, available as a lightweight VS Code extension. Enables agent-driven coding with advanced tool integrations. Produces cleaner code with fewer errors and improved self-correction capabilities. Lacks inline chat functionality. Learn more: https://lnkd.in/gzESqien

    Others worth exploring:
    - Codeium: Strong AI assistant for codegen and refactoring. https://codeium.com/
    - Bolt: Provides cloud-based development. https://bolt.new/
    - Tempo: PRD-to-code workflows for designers and devs. Focused on React. https://www.tempolabs.ai/

    Why AI coding tools matter: These tools save time, reduce cognitive load, and empower developers to focus on creative problem-solving. However, the right choice depends on your use case, whether it's prototyping, debugging, or full-stack development.

    Which AI coding tools are you using? Let me know in the comments, and tell me if you'd like a deeper comparison post! #AI #CodingTools #Developers #TechFriday #LearnAI #learnwithmz P.S. Image is generated via DALL·E

  • View profile for Aadit Sheth

    The Narrative Company | Executive Narrative & Influence Strategy

    96,580 followers

    This guide turns "chat-driven coding" into a real workflow. Here's how you can do it too in 5 simple steps:

    1. Clarify the outcome – What should the app actually do on Day 1? Define the goal.
    2. Break it into steps – Think in mini-milestones:
       • Step 1.1: Add login
       • Step 1.2: Connect to DB
       • Step 2: Basic dashboard
    3. Prompt AI step-by-step – One task per prompt. One clean result at a time.
    4. Test + commit early – Don't wait for a "perfect version." Test, commit, move on.
    5. Reset when stuck – New chats > endless error loops. Fresh context = fast fixes.

    This 5-step framework turned AI into a real dev assistant. Save this – you might need it when your AI starts hallucinating features.
