The AI Coding Playbook: Tactical Guide for Developers
This playbook is part of my AI coding series. Read Part 1: The Trap and Part 2: The Escape for the full story.
The Problem With How We’re Using AI
Remember the productivity trap from Part 1? Where I spent several weeks letting AI write code I couldn’t debug, fix, or even understand? The root issue wasn’t the AI model. It was my approach. I was treating my AI coding assistant like a magic code generator: throw in a request, get code back, hope it works. When it didn’t, I’d ask it to fix the bug. I’d spiral into debugging hell, unable to troubleshoot because I didn’t deeply understand the code. The escape wasn’t learning better prompts. It was learning to simulate working with a team.
I put a lot of work into refining and documenting this approach – informed by the workplace psychology we use at Workplace Labs to guide individuals and teams towards effective adoption of AI.
The Trio: Your Personalized AI Development Team
Instead of one generalist AI trying to do everything, I started role-playing with three specialists:
- Researcher – Finds latest best practices, documentation, and patterns, then teaches you what you need to know
- Mentor – Challenges your assumptions, helps you plan architecture, pushes back on bad ideas
- Builder – Takes research and plan, implements code you actually understand
Why this works: You already know how to collaborate with a research assistant, a mentor, and a builder. You don’t need to memorize prompting techniques—you just need to talk to them like you would real teammates.
Warning: AI assistants are not your teammates. They don’t understand what they are saying and they are not your friends. Still, I encourage anthropomorphism as a technique: role-playing like this helps you naturally write prompts that get the AI performing at a high level, and it makes good prompting easier.
The magic? By going through research and planning, you’re creating the perfect context AI needs to perform well AND building your ability to evaluate its output. When AI makes mistakes (and it will), you can catch them because you understand the architecture.
It’s hard to write good prompts, hard to know what context to give AI, and crucially hard to evaluate output quality. The Trio approach solves all three by leveraging how you already know how to collaborate.
My Results
Before vs. after learning how to code with AI:
- GitHub contributions: 415 in 2024 → 1,126 in 2025
The experience has not been without some pitfalls, but with this new approach:
- I am writing code I can actually debug and maintain
- Learning more, not less
- Having fun building
I wrote this guide partially to discipline myself. It’s too easy to fall back into managing multiple agents and hoping the next prompt fixes everything. These five phases keep me honest. If you’re fighting the same battle, maybe they’ll help you too. Now, let’s get into the tactics, prompts, and workflows that took me from AI productivity trap to genuine acceleration.
5-Phase Overview
Each phase maps to one of the three roles:
Phase 1: Research → Talk to your Researcher
Phase 2: Learn → Your Researcher teaches you
Phase 3: Plan → Collaborate with your Mentor
Phase 4: Code → Direct your Builder
Phase 5: Review → Builder preps for PR
The key insight: Most of your time will be spent in research, learning, and planning. Embrace this. That’s how you know you’re getting the most out of AI. When you feel like you’re spending too much time planning and not enough coding, remember: this is what prevents the productivity trap. This is what turns AI from a chaos generator into a force multiplier.
Phase 1: Research
When you open your AI chat, you’re talking to a generalist. Tell it what it’s an expert at, and it becomes a specialist. For complex coding tasks, you want to simulate working with a Trio of specialists, starting with a research assistant. Start each coding session imagining you are instructing a junior developer to compile some research to better prepare you for the task ahead.
The Standard Research Prompt
Please research the latest best practices related to [framework/concept] and the task:
Task: [description of the goal of the coding session – a feature, bug fix, refactor…]
Use Context7 and Perplexity to research.
For research I want to reference in the future, I will add:
Create a guide with examples that pull from the codebase so we can learn and reference this knowledge in future conversations.
Why This Works
- Forces grounding: AI must reference current documentation, not just predict from often outdated training data
- Creates reusable artifacts (optionally): Documentation you can reference in future chats
- Gives you learning materials: Tailored to your specific project
- Builds evaluation skills: You’re creating the context you need to judge AI’s output quality
Recommended Tools
For Framework-Specific Research:
- Context7 MCP – When you need official docs for a specific framework (React, Next.js, etc.), this pulls directly from GitHub repos and documentation. Use this when you’re asking “What’s the right way to do X in this framework?”
For General Best Practices:
- Perplexity MCP – When you need broader research comparing approaches, synthesizing multiple sources including blogs, forums, and community discussions
When AI can’t find the answer:
- Sometimes the research will not find the solution. In this case, you can have it draft a question to ask on a relevant Discord/Slack/Reddit community.
The research phase creates a foundation both you and the AI can reference for this coding session—or, if you had it create a guide, a document you can return to throughout development.
Phase 2: Learn
This is where the magic happens. Instead of jumping straight to code after completing the research, I discipline myself to actually review the research and learn from it. Five minutes of learning saves hours of debugging. I learned this the expensive way.
How It Works
1. Review – Actually read the research. If it’s too detailed or not deep enough, give your AI coding assistant feedback so it can tailor it for you. It’s worth the few seconds and the tokens to get the research into a format you actually want to review.
2. Ask questions – Be curious and ask it to teach you in the ways you learn from most quickly. For instance:
- Ask for more examples using your project as context
- Ask for visuals (mermaid diagrams)
- Ask to turn it into a podcast and then listen to it on a walk
- Ask “why” repeatedly—AI never gets tired of explaining
Example Learning Conversation
You: “How does Agno handle parallel agent coordination? Any examples from their documentation?”
AI: [provides explanation and relates it to your specific context]
You: “What are the tradeoffs between parallel vs. sequential agent execution for our use case?”
AI: [explains tradeoffs with project-specific examples]
What Makes This Powerful
- Just-in-time learning: You’re learning exactly what you need, when you need it, with examples tailored to your project.
- AI improves too: By documenting a mini course for you, AI reinforces its own understanding of the context.
- Knowledge sharing: You’re not trusting AI to hold all the knowledge (it will drop it when you least expect it). By actively keeping that knowledge yourself, you can catch mistakes before they lead the AI down dead-end paths that look promising for hours or days before you realize where things went wrong.
Key Insight
You’re not leaning on AI to make decisions here—you’re leaning on it to read documentation, pull out relevant bits, and teach you what you need to architect properly. AI is fast and knowledgeable, but it makes hard-to-catch mistakes that can completely derail productivity gains. Additionally, it’s not sustainable or fun when you’re not learning and growing. By prioritizing personal growth, you’re not only improving yourself, but you’re doing context engineering without spending hours staying up to date on all the latest prompting techniques.
Phase 3: Plan (Your AI Mentor)
With solid understanding in place, planning becomes collaborative rather than directive. In this phase, it helps to kick the AI out of being so agreeable and sycophantic by telling it to act as your mentor. Why mentor instead of project manager? Because a mentor pushes back on bad ideas instead of just agreeing with you, and that helps you think critically.
The Planning Prompt
Act as my mentor. Based on what we’ve learned, create a plan for implementing [feature]. Break it down into small, reviewable chunks.
This prompt starts a conversation about the architecture and implementation. Once I’m happy with it, I’ll ask it to be critical, which does a surprisingly good job of surfacing issues before we start building.
What is wrong with this plan?
Why This Works
- Planning before building has been shown in numerous studies and AI coding guides to improve code quality. Many coding tools have a planning mode built in because of this.
- Asking it to act as a mentor helps it avoid going along with bad ideas
- Asking it to break the work into small, reviewable chunks reduces your cognitive load so you can review the plan, ask questions, and point out issues
- Part of the planning process might be to let your AI coding assistant go ahead and build something. Feel free to do this, learn from it, and abandon the code. The learning is worth the few minutes and the tokens it takes to build, and it will likely save tokens in the long run.
What Good Planning Looks Like
- Small chunks: Each step is reviewable
- Architectural clarity: You understand why it’s structured this way
Avoiding Premature Optimization
AI coding assistants are happy to introduce complexity with abstractions and premature optimizations. My original failed solution became too complex for me to troubleshoot efficiently, partly because my AI coding assistant was convinced it was a good idea to introduce multi-threading. When I rewrote it, I opted to keep it simple: single-threaded and stateless. Why?
- This solution will only have a few hundred users weekly for the near future
- Easily handled with a single thread and async
- I know we can scale with load balancing (stateless Docker container)
Lesson: Don’t let AI aimlessly introduce complexity. With AI coding, failing fast is a valuable learning tool. I absolutely hate throwing away my hand-written code, but I’m learning to be more okay with having AI code something as a way to explore a direction, learn from it, and trash the code if it’s not the right one. With Cursor, you can even run more than one agent simultaneously on the same task, then select the best result and trash the others.
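To make the "keep it simple" decision concrete, here is a minimal sketch of a single-threaded, stateless async handler. This is not my actual service—the names and numbers are illustrative—but it shows why the simple design is enough: one event loop interleaves a modest request volume with no threads, locks, or shared state to debug, and because nothing lives in the process, any container behind a load balancer can serve any request.

```python
import asyncio


async def handle_request(payload: dict) -> dict:
    # Simulate a slow downstream call (LLM, database, external API).
    await asyncio.sleep(0.1)
    # Stateless: everything needed arrives with the request, so any
    # container instance behind a load balancer can serve it.
    return {"user": payload["user"], "status": "processed"}


async def main() -> None:
    # A single event loop handles these calls concurrently on one
    # thread—no locks or shared mutable state to reason about.
    requests = [{"user": f"user-{i}"} for i in range(5)]
    results = await asyncio.gather(*(handle_request(r) for r in requests))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```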
Feeling like you’re spending too much time planning?
Good. That’s the point. This is how you avoid the productivity trap where AI generates code faster than you can understand it.
Phase 4: Code/Build (Your AI Builder)
Now you and your AI coding assistant have the perfect context to build something great. At this point I will switch to a faster model, like Composer 1 in Cursor or Grok Code Fast.
Take this research and plan, plan your TODOs, and get to work. Please work iteratively by writing tests and running them after you complete each iteration. If something becomes unclear, ask Perplexity, then confirm with me—especially for architectural decisions.
I have put most of this into my Cursor rules or root agents.md, so my prompt usually looks like this:
Take this research and plan, plan your TODOs and get to work
Why this works:
- It feels good to watch your plan come together and watch your AI coding assistant get most things right the first time
- As the coding assistant makes mistakes, it’s easy for you to guide it because you understand the context and made the architectural decisions
- The common pitfall: AI output often has hard-to-find flaws. By going through the Trio process, you’ve built your ability to spot these issues
- Using a faster, less powerful model does three things. First, it keeps me engaged—we are building together. Second, it helps me keep tasks smaller and in line with the AI’s capabilities, which has the unexpected benefit of making the work easier for me to review. Third, the speed lets me go through multiple iterations in the time it might take Sonnet 4.5 to go through one.
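To show what "writing tests and running them after each iteration" looks like in practice, here is a hypothetical example of the size of chunk I ask the builder to ship before moving on: the function under construction plus the tests it has to pass. The names are illustrative, not from a real project; run it with pytest.

```python
import re


def slugify(text: str) -> str:
    """Lowercase the text, drop punctuation, and join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_strips_punctuation():
    assert slugify("AI: The Playbook!") == "ai-the-playbook"
```

Each iteration ends with the tests green before the builder picks up the next TODO, which keeps every chunk small enough for me to actually review.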
Phase 5: PR-Ready Review
Why This Matters
AI doesn’t care about technical debt—it will ship working-but-fragile code and call it done. Before calling the coding session done, have AI review and prep for a PR.
Great work on this! Now, reflect on the changes we made and look for things we can clean up, simplify, refactor, or finish before opening a PR. Remember that this app is in early development and we can refactor without backwards compatibility. It is important to follow the latest best practice and avoid workarounds. Research any latest best practices you are not sure about by using Context7 or Perplexity. If something is unclear, ask me.
Should I review the code?
I have two modes for reviewing code:
- Low-stakes projects: If this is a marketing website or something that doesn’t require a high level of security, I might only do a few spot checks.
- Production applications: If this is a production application with users and requires some level of security, I discipline myself to review every line of code that involves business logic. The only code I might skim over is CSS and, depending on the framework, HTML.
Debugging Tactics
Even with this methodical approach, I still hit bugs regularly and have developed the following debug prompt:
The Debug Prompt
Take a breath, break this down, think outside the box, and find the root issue. Avoid workarounds. Help me understand.
Other Debugging Tricks
1. Revert to Known Good State – If I’m in debugging hell, I’ll revert to the last working commit and approach the problem differently. Fight the feeling of holding onto code as if you hand-wrote it.
2. Question AI’s Assumptions
What assumptions are you making about how this works? Walk me through your reasoning step by step. What could we be missing?
3. Search forums
Ask Perplexity if others are running into similar issues.
When to Give Up on AI
Sometimes AI just can’t solve it. How do you recognize when you’ve hit this point so you don’t waste your time? Watch for:
- Same bug for 3+ iterations
- AI keeps reverting to the same wrong solution
- You’re not learning
When this happens:
- Step away from AI
- Go for a walk
- Research the problem yourself
- Then bring AI back in to implement a potential fix you’ve discovered
Security Practices
Warning: My AI coding assistant has:
- Removed authentication during troubleshooting and “forgotten” to add it back
- Introduced security holes while “fixing” something completely unrelated
- Suggested insecure patterns because they’re “simpler”
- Exposed sensitive data in logs or error messages (see the sketch just after this list)
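To make that last failure mode concrete, here is a hypothetical before-and-after of the kind of diff I look for when reviewing AI-written error handling. The payment function and token are made up; the point is what ends up in the logs.

```python
import logging

logger = logging.getLogger(__name__)


def charge_card_unsafe(token: str, amount_cents: int) -> None:
    try:
        ...  # call the payment provider
    except Exception:
        # What AI sometimes writes: the raw secret ends up in the logs.
        logger.exception("Payment failed: token=%s amount=%s", token, amount_cents)
        raise


def charge_card_safer(token: str, amount_cents: int) -> None:
    try:
        ...  # call the payment provider
    except Exception:
        # Log enough to debug without echoing the secret itself.
        logger.exception("Payment failed: token=...%s amount=%s", token[-4:], amount_cents)
        raise
```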
For production software that needs to be secure, due diligence means reviewing practically every single line of code AI writes. Attackers only need one security hole. More on security here: Vibe Coding’s Hidden Risk: Security Holes You Can’t See
Quick Reference Checklist
Starting a New Feature
- Work in chunks (one major feature per session/conversation)
- Have AI research latest best practices
- Review research, ask questions, go deeper
- Create tailored documentation
- Switch to mentor and plan architecture together
Coding
- Switch to builder, have it create a TODO list, and get to work
- Write tests and run them after each change
- Push back on premature optimization
Before Calling It Done
- Run the PR-ready prompt
- Check for security issues
- Confirm you understand the code
When Debugging
- Use the debug prompt
- Know when to step away from AI
Security Checklist
- Review all business logic changes
- Don’t trust AI’s security review
The Bottom Line
This playbook is about escaping the productivity trap I described in Parts 1 and 2. The core principle: simulate working with three specialists instead of treating AI like a magic code generator or trying to manage multiple AI agents. It’s hard to write good prompts, hard to know what context to give AI, and crucially hard to evaluate output quality. The Trio approach solves all three by leveraging how you already know how to collaborate.
Updates
I plan to keep this guide updated as my tactics, tools and practices improve. One area I’m experimenting with right now is how to incorporate Agile Scrum into my planning phase. So far micro sprints, user stories, and acceptance criteria are proving to be very helpful.
What tactics are working for you? Share them in the comments on the blog or tag me on LinkedIn. I’m always learning.
Want to discuss AI adoption for your team? At Workplace Labs Neil and I are combining technical expertise with workplace psychology to help teams adopt AI more effectively.
Part of the AI Coding Series: Part 1: The Trap | Part 2: The Escape