One-Sentence Reality Check
AI lets you code at lightspeed, but to ship anything meaningful, you now have to play roles that used to be handled by an entire team, from QA to research to product strategy.
The Shift: AI Changes What It Means to "Build"
For years, the bottleneck in software development was always code: writing it, debugging it, shipping it. But today, thanks to AI tools like GitHub Copilot, GPT agents, Cursor, and Claude, the bottleneck has shifted. Code is easy to generate. What's hard now is knowing what to build, how to specify it, and how to ensure it works in the real world.
This shift means that developers can no longer afford to be narrowly focused on writing syntax-correct code. Instead, you’re now part strategist, part tester, part researcher, and part product thinker. You're responsible for managing AI assistants, curating context, and making sure the resulting software doesn't just compile, but actually solves a problem.
The tools are powerful, but they aren't autonomous. They're accelerators, and what they accelerate depends entirely on how well you understand the system you're building.
The Reality: What Building LEBRA Taught Us About Real AI Dev Workflows
LEBRA (Let’s Be Real About) started off like magic. I used GPT-4, Claude, and Cursor to scaffold backend endpoints, define routes, structure the database, and even generate basic UI flows. Within a few days, I had something that looked and felt like a working app. Design tools like Motiff and Figma AI helped get wireframes off the ground in minutes.
For early-stage founders or solo builders, this kind of velocity is unprecedented. It's like having an army of interns and a handful of senior engineers available around the clock.
While the prototype came together in under a week, shipping an alpha version took more than a month: not because of time spent writing code, but because of time spent untangling AI-generated assumptions, patching gaps in context, and rebuilding fragile configurations.
Once the prototype was done, the reality set in:
- AI-generated configs were fragile, often missing crucial details such as environment variables or deployment paths. For example, a Cloudflare Pages deployment broke because the AI-generated build script didn't specify the production build output directory, causing the deployment pipeline to fail silently. (A cheap guardrail for this is sketched just after this list.)
- Copilot frequently hallucinated function names or returned outdated documentation.
- My dev environment (initially VS Code) struggled with indexing and managing AI context across files.
- Transpiling issues popped up in unexpected places, especially when deploying to Cloudflare Pages.
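That first config failure was preventable with a small guardrail: verify the build output before handing it to the deploy step. Here's a minimal sketch in TypeScript, assuming a `dist/` output directory and a `wrangler pages deploy` step (both assumptions; substitute your own paths and commands):

```ts
// predeploy-check.ts — fail loudly if the build output is missing or empty,
// instead of letting the deploy pipeline fail silently downstream.
// ASSUMPTION: the build writes to dist/; change OUTPUT_DIR to match your setup.
import { existsSync, readdirSync } from "node:fs";
import { resolve } from "node:path";

const OUTPUT_DIR = resolve("dist");

if (!existsSync(OUTPUT_DIR)) {
  console.error(`Build output missing: ${OUTPUT_DIR}. Did the build write somewhere else?`);
  process.exit(1);
}

if (readdirSync(OUTPUT_DIR).length === 0) {
  console.error(`Build output is empty: ${OUTPUT_DIR}. Check the build script's output path.`);
  process.exit(1);
}

console.log(`Build output looks sane: ${OUTPUT_DIR}`);
```

Chained as something like `tsx predeploy-check.ts && wrangler pages deploy dist`, a misconfigured output directory now fails the pipeline with a clear message instead of deploying nothing.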
Debugging these issues wasn't straightforward. I had to deeply understand the AI's logic, reverse engineer its assumptions, and rewrite large sections just to make the app production-ready.
It became clear that AI can suggest a structure, but it doesn't understand your architecture. That responsibility falls on you.
The Opportunity: A New Developer Stack, A New Skillset
The developers who thrive in this new era are those who can go beyond writing functions. They:
- Think like product managers.
- Organize like project leads.
- Research like documentation authors.
- Test like QA engineers.
You're not being replaced. You're being expanded. And if you embrace that, you'll move faster than teams three times your size.
- Design Prototyping
- Motiff, UXPilot, and Figma AI are great for accelerating early-stage ideation.
- AI Dev Platforms
- Lovable and Replit provide excellent examples of how AI is shaping full-stack workflows. Study how they onboard and how the tooling aligns with user flows.
- AI models are improving rapidly, and honestly, you can iterate right in your editor: give models such as Sonnet 4.5, GPT 5.2, and Opus 4.5 screenshots and examples, and they can generate design language and components for you.
- Editor Setup
- I highly recommend Cursor and Visual Studio Code.
- Cursor offers a smooth developer experience: intelligent context-awareness that suggests code based on the most relevant surrounding context, presence tracking that doesn't stall or prompt unnecessarily, and fast indexing that is noticeably better on large codebases. For instance, while VS Code occasionally froze when jumping between deeply nested files, Cursor maintained fluid navigation and responsive AI assistance.
- Visual Studio Code works very well, with clear pricing and usage, and offers a cooperative experience in which you act as the senior engineer. The Copilot Pro and Pro+ subscriptions pack in enough value to get a lot done over a month.
- The space is improving continually and competition is fierce, so use whatever works for you.
- CLI tools
- Claude Code is an extremely useful AI development tool. You will need to be on a paid Claude plan (Pro or Max) or use API billing to run it.
- OpenAI Codex is a comparable agentic CLI and worth evaluating alongside it.
- Hybrid Prompting
- Alternate between Ask and Agent modes. Some models are only available in one or the other.
- Multi-IDE Strategy
- Sometimes I use two editors in tandem to work on different layers of the codebase. It reduces friction and lets me parallelize tasks.
- Pause Before You Accept Suggestions
- The same prompt can produce contradictory results. Always validate and compare before integrating.
- Watch for Hallucinations
- Especially in lesser-used libraries or bleeding-edge frameworks. Always cross-check docs.
- Provide Real Context
- Use #fetch in VS Code or @url in Cursor to embed docs directly into your coding environment, ensuring your AI agent has accurate and timely context, e.g. `#fetch https://developers.cloudflare.com/pages/` before asking a deployment question. Don't assume the AI knows what you see.
- Challenge Confirmation Bias
- AI will agree with you. That’s not always a good thing.
- Enforce Rule-Based Responses
- Require the AI to explain why it made a change. It adds clarity and reduces blind spots. (A sample rules file follows this list.)
- Use AI Code Reviews
- I cannot overstate how important this step is. Always, always, always use branching strategies and run PR reviews to validate implementations.
- GitHub Copilot code review is surprisingly effective at spotting regressions and bad patterns. You can activate it directly from your GitHub pull requests or via the GitHub extension in your IDE; GitHub's official docs walk through setup.
- Gemini code review and Sentry Seer also offer free tiers that you can plug into your GitHub organization.
- Take Breaks … Seriously
- It’s easy to hit flow and stay there. But your brain needs space to think critically. Burnout will kill your momentum.
- Develop Meta-Skills
- Understand the bigger picture. Your agent is a tool, not a partner. You are the architect.
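To make "Enforce Rule-Based Responses" concrete: most tools let you pin standing rules in a repo-level instructions file. A minimal sketch, assuming GitHub Copilot's `.github/copilot-instructions.md` (Cursor reads rules from `.cursor/rules/` instead; the content matters more than the filename):

```markdown
<!-- .github/copilot-instructions.md — example standing rules, adapt to your stack -->
- Before showing any diff, explain in one or two sentences *why* the change is needed.
- Never invent function names, APIs, or flags. If unsure, say so and cite the doc you used.
- Keep diffs small and single-purpose; one concern per change.
- When a change touches configuration or deployment, list every assumption you made.
```

Rules like these won't stop every hallucination, but they force the agent to expose its reasoning, which makes bad suggestions much easier to catch in review.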
::AppCTA{title="Download the NITM AI Assisted Dev Toolkit" to="https://github.com/ninjasitm/ai-assisted-dev-toolkit" primary-action="Download" size="sm"} ::
A few notes on models:
- Claude 4.5 (Opus and Sonnet): best for planning and alignment, and for coding tasks such as implementing features, making small adjustments, and frontend work
- I find that Opus is best for planning, while Sonnet is good for implementing, given the costs
- GPT-5.1/5.2 Codex: best at contextual understanding and long-form debugging. It does what it needs to do, and only that.
- Gemini 3.0: A decent model, but still behind the others above.
Test across platforms. Same model, different tools = very different outcomes.
The New Skillset: The Developer Plus Model
- Research
- Shallow searching isn't enough. Use NotebookLM to aggregate documentation, organize research notes, and dig into complex technical topics with structured, searchable notes that boost understanding and retention. Learn deeply. Read source code.
- Use Plan mode in your favorite editor, or lean on strong prompts. See our AI-Assisted Dev Toolkit for a drop-in set of instructions and prompts that will help you bootstrap and set up your codebase for success.
- Organization
- Let your agent generate documentation automatically. Set alarms, timebox deep work, and preserve your cognitive space.
- Creativity
- Use the time AI saves to ideate. Learn design. Take a prompt engineering class. Sketch more. Prototype often.
- Communication
- You have to communicate with humans and agents now. That means clear instructions, shared context, and the ability to debug both people and machines.
Closing Thought: It’s Not Cheating. It’s the New Normal.
We’re living through a once-in-a-generation shift in how software gets built. AI is changing the landscape, and it’s happening faster than most people can track.
If you’re waiting to see where it all lands, you’ll miss the opportunity. If you engage now, experiment, adapt, and stretch your skills, you’ll have an edge in a world where AI is the baseline.
AI won’t replace you. But someone who knows how to use it might.
🔗 Coming soon: My full breakdown of how I built and deployed LEBRA on Cloudflare, with real-world AI workflows, architecture, and deployment lessons.
Further Reading
- GitHub Copilot Labs – for advanced usage beyond autocomplete
- Cursor.dev Docs – official guide to rules, context, and workflows
- OpenAI Cookbook – for model prompting best practices
- Prompt Engineering Guide – well-maintained directory of tips for structured prompts
- AI Developer Personas (Anthropic) – Claude’s own example of persona modeling, good for rules files
- What “working” means in the era of AI apps – https://a16z.com/revenue-benchmarks-ai-apps/
- The Illusion of Thinking: Understanding the Limitations of Reasoning LLMs [pdf] – https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
Attributions