Luna Interfaces

How to Use AI to Build Software Without Breaking It

Author: Luna Interfaces
May 6, 2025
10 min read

The promise was simple: AI writes the code, you ship the product. No more staring at blank files, no more debugging for hours, no more Stack Overflow rabbit holes. Just describe what you want and watch it appear. And for a while, it feels exactly like that. Then, a few weeks in, something shifts. The codebase starts feeling heavy. A small change breaks three unrelated things. Functions are doing too much. Values are hardcoded everywhere. You ask the AI to fix a bug and it introduces two more. What went wrong? The AI didn't fail you. You just didn't know what your job was supposed to be.

You're the Architect. AI is the Worker.

Think about how a building gets built. The architect doesn't lay bricks. But without the architect's blueprints, the workers would build something that collapses — or at best, something nobody wants to live in. The architect thinks in systems: load distribution, spatial flow, materials, long-term maintainability. The workers execute with precision inside that plan. This is exactly the relationship you need with AI. Your job as a developer didn't disappear when AI arrived — it moved up a level of abstraction. You're no longer the one writing every line. You're the one deciding what gets written, why, and how it fits into the whole. The moment you hand that responsibility to the AI, you're not a developer using a tool. You're a bystander watching your codebase get built by someone who's never seen your requirements.

AI Doesn't Think — It Pattern-Matches

Here's something most tutorials won't tell you: AI doesn't reason about your code. It doesn't understand your architecture, your business logic, or why that module needs to stay decoupled. What it does do, extraordinarily well, is recognize patterns from millions of codebases and generate the statistically most likely continuation of what you gave it. That's powerful. But it also means that if your prompt is vague, it fills the gaps with whatever pattern was most common during training. That's why AI-generated code often looks correct at first glance but breaks under edge cases, violates the conventions of your project, or quietly introduces coupling you didn't ask for. It's not making bad decisions — it's not making decisions at all. It's guessing, very confidently. Understanding this fundamentally changes how you interact with it.

The Analysis Step Didn't Die

Before AI, every non-trivial feature started with thinking. What are the inputs and outputs? What are the edge cases? Does this need to be extensible? Where does it fit in the existing structure? That process was mandatory because you were about to spend hours writing the code. Now that the writing takes minutes, it's tempting to skip straight to the prompt. Don't. The analysis is more important now, not less — because the AI will fill any gap you leave with its best guess, and that guess propagates instantly into real code. The difference is that before, you analyzed and then wrote. Now, you analyze and then delegate the writing. Your thinking is still the foundation. Remove it and you're building on sand, just faster.

AI as a Source of Inspiration, Not a Decision Maker

There are cases where you genuinely don't know the best solution. Maybe it's an unfamiliar domain, an architectural decision with real tradeoffs, or a problem you've never faced before. This is completely valid — and AI can be extraordinarily useful here. But the key is changing the task you're giving it. Instead of asking it to build, ask it to propose. Ask it to explain options and tradeoffs before a single line of code is written. This forces two things: you have to understand the solution before it exists in your codebase, and you get to decide which direction is actually right for your context. The AI doesn't know your team's conventions, your performance constraints, or your roadmap. You do.

Prompt: Ask for options, not code

prompt.txt
Don't write any code yet.

I need to solve [problem]. Explain 2-3 different approaches
with the tradeoffs of each one: complexity, maintainability,
performance implications, and which scenarios each fits best.

I'll decide which direction to take before we implement anything.

Build Your Criteria: Clean Code as Your Filter

To evaluate what the AI proposes, you need a lens. That lens is built from the principles of clean code — not as abstract theory, but as practical questions you ask every time you review generated code. Is this function doing one thing or five? Are there hardcoded values that should be configurable? Could this module be reused, or is it tightly coupled to this one context? Can a teammate read this in 30 seconds and understand what it does? Is it solving the actual problem, or a slightly different one the AI assumed? These aren't optional polish questions. They're the difference between a codebase that stays maintainable at scale and one that becomes a liability within months. Resources worth investing in: Clean Code by Robert C. Martin, the SOLID principles, and anything on separation of concerns. The better your internalized criteria, the better your collaboration with AI becomes.
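Here's what those review questions look like in practice. This is a hypothetical, minimal sketch (the function names, rates, and config shape are invented for illustration): first, the kind of function AI tools often generate, then the same logic after applying the filter above.

```typescript
// BEFORE: the kind of code that passes a first glance but fails the filter.
// Hardcoded rate, hardcoded surcharge, undocumented cap — reusable nowhere else.
function shippingBefore(weightKg: number, country: string): number {
  let cost = weightKg * 4.5;        // hardcoded rate
  if (country === "US") cost += 5;  // hardcoded surcharge
  if (cost > 100) cost = 100;       // hardcoded cap, no explanation
  return Math.round(cost * 100) / 100;
}

// AFTER: one responsibility, configurable values, readable in 30 seconds.
interface ShippingConfig {
  ratePerKg: number;
  surcharges: Record<string, number>; // per-country surcharge
  maxCost: number;                    // cap on the final cost
}

function shippingCost(
  weightKg: number,
  country: string,
  cfg: ShippingConfig
): number {
  const surcharge = cfg.surcharges[country] ?? 0;
  const raw = weightKg * cfg.ratePerKg + surcharge;
  const capped = Math.min(raw, cfg.maxCost);
  return Math.round(capped * 100) / 100; // round to cents
}

const cfg: ShippingConfig = {
  ratePerKg: 4.5,
  surcharges: { US: 5 },
  maxCost: 100,
};

console.log(shippingCost(2, "US", cfg));  // 14
console.log(shippingCost(30, "US", cfg)); // 100 (capped)
```

The behavior is identical; the difference is that the second version answers the review questions instead of hiding them.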

Always Review AI-Written Code — Disable Auto-Approvals

This is the most practical piece of advice in this article: turn off auto-approvals for file edits and writes in whatever AI coding tool you use. Claude Code, Cursor, Copilot — most of them have a mode where changes are applied automatically. Disable it. The AI is prone to specific failure patterns: it overcomplicates solutions, it hardcodes values that should be dynamic, it duplicates logic instead of reusing what already exists, it adds dependencies you didn't need, and it sometimes silently modifies things outside the scope of your request. None of these are catastrophic in isolation. But if you let them propagate unchecked across dozens of interactions, your codebase accumulates a kind of AI technical debt that's genuinely painful to untangle. Review every change. Keep your standards explicit. Correct violations immediately before they become the established pattern.

Plan Mode vs. Build Mode — Use the Right Agent

Most AI coding tools now have distinct modes for planning and building. Using the right one for the right situation is one of the highest-leverage habits you can develop. Use Build Mode when you know exactly what you want and can give precise instructions — the what, the where, and the how are all clear. Use Plan Mode when you're figuring out the approach: you have a goal but you're unsure about the implementation path, or you want to think through the architecture before committing to it. The mistake most developers make is defaulting to Build Mode for everything, then trying to steer the AI mid-construction. That's the equivalent of changing the blueprint after the walls are up. Plan first when there's any ambiguity. Build only once the plan is solid.

Build Mode: precise, scoped instruction

build-prompt.txt
In `src/services/auth.ts`, add a `refreshToken` function.
It should:
- Accept a userId: string parameter
- Call the existing `tokenRepository.getByUserId()` method
- Return a new signed JWT using the existing `signToken()` utility
- Throw an AuthError if the user is not found

Do not modify any other files.
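For context, here is a sketch of what a well-scoped result of that prompt might look like. `tokenRepository`, `signToken`, and `AuthError` are the hypothetical existing utilities the prompt references; they're stubbed here so the example is self-contained.

```typescript
// Stand-in for the project's existing error type.
class AuthError extends Error {}

interface UserRecord {
  userId: string;
}

// Stub: a real implementation would query a database.
const tokenRepository = {
  getByUserId(userId: string): UserRecord | undefined {
    const users: Record<string, UserRecord> = { u1: { userId: "u1" } };
    return users[userId];
  },
};

// Stub: a real implementation would produce a cryptographically signed JWT.
function signToken(payload: { sub: string }): string {
  return `jwt.${payload.sub}.signed`;
}

// The function the prompt actually asked for — and nothing else.
function refreshToken(userId: string): string {
  const record = tokenRepository.getByUserId(userId);
  if (!record) throw new AuthError(`No token record for user ${userId}`);
  return signToken({ sub: record.userId });
}

console.log(refreshToken("u1")); // "jwt.u1.signed"
```

Notice how every requirement in the prompt maps to exactly one line of behavior. When the instruction is this precise, reviewing the output takes seconds.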

Plan Mode: explore before committing

plan-prompt.txt
I need to add rate limiting to our API endpoints.
I'm using Express and we have Redis available.

Before writing any code, propose 2-3 approaches.
For each one explain:
- How it works at a high level
- What the implementation complexity looks like
- Tradeoffs vs the other options
- Whether it fits a stateless, horizontally scaled environment

We'll pick one before building anything.
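One of the approaches the AI might propose for that prompt is a fixed-window counter. A minimal sketch, under stated assumptions: in production the counter would live in Redis (`INCR` plus `EXPIRE`) so it works across horizontally scaled instances; an in-memory `Map` stands in here so the example runs without external services, and all names are invented.

```typescript
interface CounterWindow {
  count: number;   // requests seen in the current window
  resetAt: number; // timestamp (ms) when the window expires
}

class FixedWindowLimiter {
  private windows = new Map<string, CounterWindow>();
  private limit: number;
  private windowMs: number;

  constructor(limit: number, windowMs: number) {
    this.limit = limit;
    this.windowMs = windowMs;
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // New window: first request always passes.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    w.count += 1;
    return w.count <= this.limit;
  }
}

// 3 requests per minute per client.
const limiter = new FixedWindowLimiter(3, 60_000);
const results = [1, 2, 3, 4].map(() => limiter.allow("client-1", 0));
console.log(results); // [true, true, true, false]
```

The tradeoff the AI should surface: fixed windows are simple and cheap, but allow bursts at window boundaries — which is exactly the kind of thing you want spelled out in Plan Mode before committing.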

Force the AI to Search, Not Guess

When you bring a concrete problem to an AI — an error, a library incompatibility, a configuration that won't work — its default behavior is to generate the statistically likely fix based on its training data. That fix is a hypothesis. It might be right. It might be a pattern that applied to a slightly different version of the library, or a deprecated approach that hasn't existed for two years. Most AI tools have web search capabilities built in. Use them explicitly, and tell the AI to use them. When a problem is concrete enough that the real answer exists somewhere on the internet — a GitHub issue, a forum post, official docs — searching gives you a validated solution that worked for real people in real projects. That's categorically different from a confident-sounding guess.

Prompt: validated fix, not a hypothesis

search-prompt.txt
Search the web before answering this.

I'm getting this error: [paste exact error here]
Environment: [library name + version, runtime, relevant config]

I want a solution that has been validated by real people
who hit this same issue — not a guess. Check GitHub issues,
docs, or community threads and tell me what actually worked.

"The ratio of time spent reading versus writing is well over 10 to 1. We are constantly reading old code as part of the effort to write new code. Making it easy to read makes it easier to write."

Robert C. Martin, author of Clean Code

Your Job Got Better, Not Easier

Here's the honest summary: AI is genuinely better at writing code than you are — in terms of speed, syntax recall, and pattern coverage. That's not a threat, it's a tool. But software quality has never been about who writes the fastest. It's about who thinks the clearest. The analysis, the architecture, the judgment about what's reusable versus overengineered, the decision to search for a real answer instead of accepting a plausible guess — those are yours. The developers who understand this are shipping better software, faster, with less burnout. The ones who hand full control to the AI are accumulating technical debt at AI speed. The role didn't disappear. It got more important. You're the architect. Don't put down the blueprints.
