
Nate Craddock

Media Creator, Electronics Hobbyist, Developer, Leader, and Speaker

"Let's just add AI."

"We'll have the AI do that."

"I'll just get an AI to build it."

These phrases have become reflexive across the industry. The 10x productivity narrative is everywhere, and it's created a widespread assumption that AI has fundamentally changed the equation.

The people saying these things are wrong, but not in the way you might expect.

AI coding tools are genuinely powerful. I use them daily. They've changed how I work in ways I wouldn't have predicted eighteen months ago. But the productivity gains aren't coming from where most people think.

Here's the thing: AI makes architected work faster. Unarchitected work just fails faster.

The Aha Moment

My shift happened while debugging authentication.

We had login implementations across three platforms — web, iOS, and Android. Something was diverging, but I couldn't pinpoint where. The code lived in three different languages: TypeScript, Swift, and Kotlin.

Now, I can read Swift and Kotlin, but I'm no expert. The old way to solve this would have been getting three developers in a room (or more realistically, scheduling three separate calls across time zones with our offshore team) to walk through what each implementation was actually doing. That's a day of coordination, minimum. Maybe two.

Instead, I pulled all three implementations into context and asked Claude to compare them side by side. Within an hour or two, I had the divergence identified.
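For anyone who wants to replicate that step without fancier tooling, the mechanics can be as simple as concatenating the implementations, each prefixed with its path, into one file you hand the assistant. Everything below (repo layout, file names, contents) is a hypothetical stand-in:

```shell
# Stand-in files for the three login implementations.
# (Real paths and contents would come from the actual repos.)
mkdir -p web ios android
echo "// TypeScript login implementation" > web/login.ts
echo "// Swift login implementation"      > ios/Login.swift
echo "// Kotlin login implementation"     > android/Login.kt

# Bundle all three into one file, each section labeled with its path,
# so the model can compare the implementations side by side.
out=auth-comparison.txt
: > "$out"
for f in web/login.ts ios/Login.swift android/Login.kt; do
  { printf '===== %s =====\n' "$f"; cat "$f"; printf '\n'; } >> "$out"
done
```

Tools like Claude Code can read the files directly, but the principle is the same either way: get all three implementations into one shared context before asking for the comparison.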

That's when I realized this wasn't about writing code faster. It was about understanding systems faster — especially the parts I couldn't hold in my head alone.

The insight wasn't just about my own productivity. It was about what becomes possible when anyone with the right mindset gets this kind of leverage.

From Debugging to Blueprinting

Once I saw what was possible with that kind of system-wide context, I started using the same approach for feature planning.

My planning documents start messy. Pseudo-code, placeholder ideas, vague gestures at how something might work. I expect to dig into details as the architecture comes into focus: validate my assumptions, question myself, question the AI's interpretation, throw edge cases at the proposed solution to see if it holds.

After many rounds of this, what started as a rough concept becomes a sequenced project plan with clear work breakdown.

I think of it like planning poker, except instead of estimating effort, you're stress-testing architecture. Every perspective shapes the final solution — mine, the AI's, the edge cases I throw at it. The AI isn't designing the system. It's another voice in the room helping me find the holes before we write a line of production code.

The Methodology

Full-stack visibility is non-negotiable

My local development environment treats our entire product as a single workspace, even though it's technically separate repositories — one for each mobile app, one for the web frontend, one for the backend. They have their own lifecycles, but when I'm planning or building, Claude sees everything at once. (Maybe it's a "virtual monorepo"? I haven't found the right term yet, but the point is full visibility.)
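As a sketch, the setup is nothing exotic: sibling checkouts under one parent directory, with the AI session opened at the parent so every repo is in view. The repo names here are hypothetical:

```shell
# One parent directory; each repo keeps its own git lifecycle.
# (Repo names are hypothetical; in practice each would be a git clone.)
for repo in backend web-frontend ios-app android-app; do
  mkdir -p "product-workspace/$repo"
done

# Opening the AI session (or your editor) at product-workspace/ gives it
# visibility across every surface at once, without merging repositories.
ls product-workspace
```

Nothing about the repositories themselves changes; the "virtual monorepo" exists only in where you start the session.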

This matters because our system has users in native apps, admins working in a totally different context on the web, and a backend orchestrating all of it. When I'm planning a feature, I need to understand how changes ripple across the entire stack.

This became concrete last August when we added Canadian currency support and a Canada-only gift card supplier. The feature touched every surface: both mobile apps, the web admin, and the backend. Every implementation had to behave identically. Every edge case had to be handled the same way.

I started with a preplanning document — rough concepts, known constraints, open questions. Then I pulled that context into my architecture sessions, driving toward a full impact assessment across every part of the product. When it came time to execute, I could make synchronized changes across the whole stack, get iOS working first, then port those patterns to Android with confidence that I wasn't introducing drift.

Without full-system visibility, that kind of coordination takes endless meetings and inevitably someone misses something. With it, I caught inconsistencies before they became bugs.

Real data context, not guesswork

Architecture in a vacuum is fiction. I use MCP connections to see what our database schemas actually look like — sometimes just confirming that our backend ORM is set up correctly, other times planning how new fields and structures should fit into existing schemas.
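Concretely, this kind of schema visibility comes from registering a database MCP server with the assistant. A minimal sketch, assuming Claude Code's `.mcp.json` format with a Postgres MCP server; the package name and connection string are placeholders to verify against current docs:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost:5432/app_dev"
      ]
    }
  }
}
```

With that in place, schema questions ("does this ORM model match the actual table?") run against the real database instead of against memory.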

My engineer uses Figma and Playwright integrations during implementation, pulling design specs directly into context and validating frontend work against real browser behavior. Six months ago she wouldn't have known where to start with this. Now she's pushing the tooling forward faster than I am. (More on that later.)

But importantly, this starts in the planning phase. We're not guessing what the data looks like or what the design requires. We're working with the real thing from day one.

Living documentation

The planning document isn't a spec that gets handed off and forgotten. It evolves throughout execution.

When implementation reveals something we didn't anticipate — and it always does — we update the doc. This keeps the architecture aligned with reality, and more importantly, it lets us see when a change in one area impacts other parts of the plan. The document becomes a map we're navigating, not a contract we're defending.

From there, it flows into Confluence for team visibility and Jira for execution tracking. But the source of truth stays in that working document until the feature ships.

Why This Requires Seniority

Here's what I think would happen if a junior engineer tried to run my workflow without years of context: all of it would break.

They'd accept bad AI output because they don't know what good looks like yet. They'd miss edge cases because they haven't been burned by edge cases enough times to instinctively ask "what happens when this is null?" They wouldn't know what questions to ask — and asking good questions is a skill built through experience, not tutorials.

I've spent years accumulating knowledge across domains — sometimes as an expert, usually just deep enough to be dangerous. I'm thinking of the time I had to figure out how RHEL software collections could work in our production environment and get systemd configured in a way that wouldn't break on every update. That wasn't in my job description. It was just the problem in front of me, and someone had to solve it.

That kind of accumulated context is what makes AI useful. When Claude suggests an approach, I know enough to ask why. When something doesn't feel right, I push back. When the explanation doesn't hold up, I reject it. I treat AI the same way I'd treat a junior engineer explaining their solution to me — I'm not accepting the first answer, I'm pressure-testing it.

My advice to anyone using these tools: always ask why the change is being made and what it actually does. Ask about edge cases. If you don't understand something, ask it to explain before you accept it. The AI is a collaborator, not an oracle. Treat it like a mentor you're learning from — curious, engaged, but not blindly trusting.

One more thing: this space moves fast. Any tutorial you find on YouTube is probably outdated by the time you watch it. I see tips and tricks videos that become irrelevant a month or two later. The methodology matters more than the specific techniques.

Systems thinking is the multiplier

The skill I didn't realize was rare until I watched others struggle with AI tools: holding the whole system in your head at once.

Not everyone can imagine how a change in the iOS app affects the backend affects the admin dashboard affects the database schema. That kind of systems thinking — or maybe it's strategic thinking versus tactical? — is what makes architecture possible in the first place. AI amplifies that skill. It doesn't create it.

If you can't see the whole board, AI just helps you move pieces faster without understanding the game.

But here's the thing: if you know you struggle with systems thinking and you're willing to partner, AI can help fill that gap. Use it to ask "what else does this change affect?" or "walk me through how data flows from here to there." It won't replace the instinct, but it can scaffold the thinking until you build the muscle yourself. The key is knowing you need the help and being willing to ask for it.

Collaboration, not replacement

I set the strategic direction on our team that we lean heavily into AI tooling. The more we can leverage, the more effective we become. But it's not top-down dictation.

My engineer takes the ball and runs with it. She's the one who started leveraging MCP integrations before I did — connecting to Figma, running Playwright validations. I pioneered the architectural document workflow because my background in solution architecture made it obvious that a well-thought-out plan prevents rework. But we're working in coordination, learning from each other, building on what works.

And the AI is a full collaborator too — not just a tool we point at code. I use it to brainstorm approaches, ask how other teams solve similar problems, explore patterns I haven't used before. What's the best way to handle optimistic UI updates? How do other companies structure their feature flags? What are the tradeoffs between these two caching strategies? I don't accept everything it suggests, but I accept it as a jumping-off point. It's like having a well-read colleague who's always available to riff on ideas.

When I hired her, she was junior by any traditional measure. But I wasn't hiring for what she already knew — I was hiring for how she thinks.

That's the actual productivity gain. Not "AI replaces developers." It's experienced people using AI to amplify their judgment, while teaching others how to use these tools better. And increasingly, I'm watching those others level up faster than any traditional career path would suggest.

The Implication

There's a lot of talk about AI replacing junior developers. I don't think that's what's actually happening.

What I'm seeing is different: AI compresses the journey from junior to senior. It might be eliminating the mid-level plateau entirely.

My engineer came in as a definite junior. I hired her for problem-solving ability and self-motivation, not stack expertise. The explicit goal was for her to become my lieutenant — not just execute tickets, but eventually think architecturally alongside me.

With AI tools and this methodology, she's leveling up faster than any traditional timeline would predict. She's not just completing tasks. She's contributing to architecture discussions, pioneering integrations I hadn't explored yet, and starting to mentor others using the same approach.

That's not replacement. That's acceleration.

The question for leadership isn't "can we use AI to need fewer engineers?" It's "are we hiring people who can grow into this new way of working, and are we giving them the mentorship to get there?"

AI still needs an architect. But it might also be the fastest way to grow new ones.


Next: AI isn't replacing junior devs — it's eliminating mid-level. What I hire for now that the equation has changed.