How to Use AI for Manual Coding Tasks
Explore top LinkedIn content from expert professionals.
Summary
Discover how AI can transform manual coding tasks into streamlined, efficient processes by acting as a collaborative assistant for developers. From clear communication to iterative problem-solving, AI tools enhance coding workflows while reducing repetitive work.
- Define clear requirements: Provide specific instructions and break tasks into manageable steps to ensure AI generates accurate, high-quality code.
- Incorporate testing early: Have AI write tests before code implementation, using them as checkpoints for functionality and reliability.
- Collaborate with AI: Treat AI as a junior developer by reviewing its output, providing feedback, and resetting context when needed to improve outcomes.
-
Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions:
🔹 Checkpoint: Build iteratively. Break large tasks into smaller, testable steps and save progress often.
🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've already tried.
🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base.
🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses.
🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid.
🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI.
🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups.
🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon.
🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases.
🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM or engineer would.
By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.
Stop letting AI write your code randomly. I learned this lesson the hard way. Here's the 3-part process that works:
Most engineers let AI run wild. They give it vague prompts and hope for the best. That's why they keep hitting the same walls, getting the same errors, running in circles.
Here's what actually works:
1. End-to-end phases
- Build small, complete features first.
- Connect all layers: frontend, backend, APIs, etc.
- Then scale up complexity gradually.
2. Test first
- AI writes the tests before the code (see the sketch after this post).
- Tests become the strict teacher.
- The model iterates until everything passes.
3. Micro-checkpoints
- Break each phase into tiny steps.
- Use checkboxes in markdown.
- Help the AI maintain context through complexity.
This isn't theory. It's battle-tested across enterprise codebases, and it's transforming how top teams build software.
The old way: hope AI figures it out.
The new way: guide AI systematically.
—
Enjoyed this? 3 quick things:
- Follow Dylan Davis for more AI automation insights
- Share this with a teammate
- Book a free 15-min discovery call (link in bio) if you need help automating your internal processes with AI
—
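To make the test-first step concrete, here's a minimal sketch of what "AI writes the tests before the code" can look like. Everything in it is illustrative: the `pricing` module and `parse_price` function are invented for this example, and the file is deliberately written before any implementation exists, so it fails until the model's code makes it pass.

```python
# tests/test_parse_price.py -- hypothetical example for illustration only.
# These tests are written BEFORE parse_price exists; the AI then iterates
# on an implementation until the whole suite passes.
import pytest

from pricing import parse_price  # module and function names are invented


def test_parses_plain_dollar_amount():
    assert parse_price("$19.99") == 19.99


def test_strips_thousands_separators():
    assert parse_price("$1,234.50") == 1234.50


def test_rejects_garbage_input():
    with pytest.raises(ValueError):
        parse_price("not a price")
```

A markdown checklist of micro-checkpoints ("[ ] tests written, [ ] parser passes, [ ] wired into the API") then gives the agent a running record of where it is in the phase.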
-
Recently, I adopted a coding tip from the Anthropic team that has significantly boosted the quality of my AI-generated code. Anthropic runs multiple Claude instances in parallel to dramatically improve code quality compared to single-instance workflows.
How it works:
(1) One Claude writes the code - the coder - focusing purely on implementation.
(2) A second Claude reviews it - the reviewer - examining with fresh context, free from implementation bias.
(3) A third Claude applies fixes - the fixer - integrating feedback without defensiveness.
This technique works with any AI assistant, not just Claude. Spin each agent up in its own tab (Cursor, Windsurf, or plain CLI), then let Git commits serve as the hand-off protocol (a rough sketch follows this post).
This separation mimics human pair programming but supercharges it with AI speed. When a single AI handles everything, blind spots emerge naturally. Multiple instances create a system of checks and balances that catches what monolithic workflows miss.
This shows that context separation matters. By giving each AI a distinct role with clean context boundaries, you essentially create specialized AI engineers, each bringing a unique perspective to the problem.
This and a dozen more tips for developers building with AI in my latest AI Tidbits post: https://lnkd.in/gTydCV9b
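As a rough sketch of the coder/reviewer/fixer split, under stated assumptions: `ask_model` below is a hypothetical stand-in for whatever API or CLI you use, and the role prompts are illustrative, not Anthropic's actual setup. The point is simply that each role gets its own clean context:

```python
# Hypothetical sketch: the three "agents" are just three independent calls
# with clean, role-specific context. ask_model() is a stand-in you wire to
# your provider's API or CLI; it is not a real library function.
def ask_model(role: str, prompt: str) -> str:
    raise NotImplementedError("connect this to your AI assistant of choice")


def coder_reviewer_fixer(task: str) -> str:
    # 1. The coder sees only the task, never its own future review.
    code = ask_model("You are the coder. Implement exactly what is asked.", task)

    # 2. The reviewer starts from fresh context: it sees the code, not the
    #    coder's reasoning, so it has no implementation bias to defend.
    review = ask_model(
        "You are the reviewer. Critique this code for bugs, edge cases, "
        "and style. Do not rewrite it.",
        code,
    )

    # 3. The fixer integrates the feedback without defensiveness, because
    #    it never wrote the original code.
    return ask_model(
        "You are the fixer. Apply the review feedback to the code.",
        f"CODE:\n{code}\n\nREVIEW:\n{review}",
    )
```

In the tab-based setup the post describes, each call is just a fresh session in its own tab, and a Git commit after each step replaces the in-memory variables as the hand-off artifact.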
-
I've been using AI coding tools for a while now and it feels like every 3 months the paradigm shifts. Anyone remember putting "You are an elite software engineer..." at the beginning of your prompts, or manually providing context? The latest paradigm is Agent Driven Development, and here are some tips that have helped me get good at taming LLMs to generate high-quality code.
1. Clear & focused prompting
❌ "Add some animations to make the UI super sleek"
✅ "Add smooth fade-in & fade-out animations to the modal dialog using the motion library"
Regardless of what you ask, the LLM will try to be helpful. The less it has to infer, the better your result will be.
2. Keep it simple, stupid
❌ "Add a new page to manage user settings, also move the footer menu from the bottom of the page to the sidebar, right now endless scrolling is making it unreachable, and also ensure the mobile view works, right now there is weird overlap"
✅ "Add a new page to manage user settings, ensure only editable settings can be changed"
Trying to have the LLM do too many things at once is a recipe for bad code generation. One-shotting multiple tasks has a higher chance of introducing bad code.
3. Don't argue
❌ "No, that's not what I wanted, I need it to use the std library, not this random package, this is the 4th time you've failed me!"
✅ "Instead of using package xyz, can you recreate the functionality using the standard library?"
When the LLM fails to provide high-quality code, the problem is most likely the prompt. If the initial prompt is not good, follow-up prompts will just make a bigger mess. I will usually allow one follow-up to try to get back on track, and if it's still off base, I will undo all the changes and start over. It may seem counterintuitive, but it will save you a ton of time overall.
4. Embrace agentic coding
AI coding assistants have access to a ton of different tools, can do a lot of reasoning on their own, and don't require nearly as much hand-holding. You may feel like a babysitter instead of a programmer, but your role as a dev becomes much more fun when you can focus on the bigger picture and let the AI take the reins writing the code.
5. Verify
With this new ADD paradigm, a single prompt may result in many files being edited. Verify that the code generated is what you actually want. Many AI tools will now auto-run tests to ensure that the code they generated is good.
6. Send options, thx
I had a boss who would always ask for multiple options and often email saying "send options, thx". With agentic coding, it's easy to ask for multiple implementations of the same feature. Whether it's UI or data models, asking for a 2nd or 10th opinion can spark new ideas on how to tackle the task at hand and an opportunity to learn.
7. Have fun
I love coding; I've been doing it since I was 10. I've done OOP and functional programming, SQL and NoSQL, PHP, Go, Rust, and I've never had more fun or been more creative than coding with AI. Coding is evolving. Have fun and let's ship some crazy stuff!
-
Just published: "The Prompt Engineering Playbook for Programmers"
My latest free write-up: https://lnkd.in/g9Kxa7hG ✍️
After working with AI coding assistants daily, I've learned that the quality of your output depends entirely on the quality of your prompts. A vague "fix my code" gets you generic advice, while a well-crafted prompt can produce thoughtful, accurate solutions.
I've distilled the key patterns and frameworks that actually work into a playbook covering:
✅ Patterns that work - Role-playing, rich context, specific goals, and iterative refinement
✅ Debugging strategies - From "my code doesn't work" to surgical problem-solving
✅ Refactoring techniques - How to get AI to improve performance, readability, and maintainability
✅ Feature implementation - Building new functionality step-by-step with AI as your pair programmer
✅ Common anti-patterns - What NOT to do (and how to fix it when things go wrong)
The article includes side-by-side comparisons of poor vs. improved prompts with actual AI responses, plus commentary on why one succeeds where the other fails.
Key insight: Treat AI coding assistants like very literal, knowledgeable collaborators. The more context you provide, the better the output. It's not magic - it's about communication.
Whether you're debugging a tricky React hook, refactoring legacy code, or implementing a new feature, these techniques can turn AI from a frustrating tool into a true development partner.
#ai #softwareengineering #programming
-
I used an AI coding agent with $500 in credits to build a Yu-Gi-Oh! card game engine, with documentation, in 1 week (side-project hours).
To build this I used:
AI Coding Agent ➜ Claude Code CLI
AI Architect ➜ Google Gemini Pro 2.5
Everything was built by prompting Claude Code through my CLI. I used Google Gemini Pro 2.5 to throw the entire codebase into an LLM and discuss architectural design patterns for complex tasks.
Top 3 lessons learned to improve reliability and speed, and reduce cost, next time around:
1. In every new session, provide "Architectural principles to code by": a set of clean-code practices that the LLM should follow. Restart your session frequently to re-input these principles if the context gets too long (a sketch of what this can look like follows this post).
2. Always tell the LLM "Do not code" and have it come up with an approach, then explain why it works, before allowing it to code. When fixing bugs, tell the LLM "Do not code; investigate and report back with a rationale for what's broken and how to fix it."
3. Use another LLM (such as Gemini) to ideate on a concrete architectural design before having the coding agent tackle any refactors or implement complex features.
Example prompt contexts that I used can be found in the comments.
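For lesson 1, the session bootstrap can be as simple as prepending a fixed principles block to the first message of every fresh session. A minimal sketch, assuming a hypothetical principles list and task (not the author's actual prompt, which lives in the comments of the original post):

```python
# Hypothetical session bootstrap: re-inject coding principles at the start
# of every fresh session so they survive frequent context resets.
ARCHITECTURAL_PRINCIPLES = """Architectural principles to code by:
- Prefer small, single-responsibility functions and modules.
- No new dependencies without asking first.
- Write or update tests alongside any behavior change.
"""


def first_prompt(task: str) -> str:
    """Build the opening message of a new session (lesson 2 baked in)."""
    return (
        f"{ARCHITECTURAL_PRINCIPLES}\n"
        f"Task: {task}\n\n"
        "Do not code. Propose an approach and explain why it works, "
        "then wait for my go-ahead."
    )


print(first_prompt("Refactor the card-effect resolver into a rules engine"))
```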
-
Most developers think AI-generated code is automatically low quality. I used to think the same thing... until I built my entire course platform and exam portal (based on Laravel) by having AI write 99% of its code.
The code quality isn't the problem; it's how you collaborate with the AI that makes all the difference.
Here's what changed my perspective: I don't type code anymore. I speak my requirements into Claude Code using a Whisper transcription app, then I review every single line the AI generates, and push back when something doesn't align with my architectural vision.
The result is that the dev cycle is much faster, the code is more consistent, and it even accounts for edge cases that I, as a fallible human, would normally miss.
The key is treating AI as a skilled junior developer who needs clear direction and a thorough code review every single time, not as a magic solution.
The biggest misconception about coding with AI is that it means blindly accepting the output (aka "vibe coding"). That's NOT collaboration; that's an abdication of responsibility. Real AI-assisted development is about maintaining architectural control while eliminating the tedious keyboard typing that wastes your time.
What's your biggest concern about incorporating AI into your dev workflow?
I'm documenting my entire process of developing with AI. Learn how I do it at https://go.m.academy/mh3
-
This guide turns “chat-driven coding” into a real workflow. Here's how you can do it too, in 5 simple steps:
1. Clarify the outcome
What should the app actually do on Day 1? Define the goal.
2. Break it into steps
Think in mini-milestones:
• Step 1.1: Add login
• Step 1.2: Connect to DB
• Step 2: Basic dashboard
3. Prompt AI step-by-step
One task per prompt. One clean result at a time.
4. Test + commit early
Don't wait for a “perfect version.” Test, commit, move on.
5. Reset when stuck
New chats > endless error loops. Fresh context = fast fixes.
This 5-step framework turned AI into a real dev assistant. Save this; you might need it when your AI starts hallucinating features.