
Nate Craddock

Builds engineering teams and weird creative projects, sometimes at the same time.

Code is a commodity now. Here's what actually matters.


The interview industrial complex has been broken for a long time. But it's broken in different ways depending on where you sit.

At the FAANGs, it over-optimized. Leetcode prep became its own industry. Candidates spent months memorizing algorithm patterns they'd never use on the job, all to pass whiteboard rituals designed to filter volume, not identify talent. "Implement a binary search tree." "Reverse a linked list." The whole system tested whether someone could perform under artificial pressure, not whether they could build real things.

At startups, it's underdeveloped. Solid engineering thinking was never cultivated in the first place. Companies hired whoever could ship fast and hoped for the best. No architecture. No mentorship. Just velocity and technical debt.

Both approaches missed the same thing: code was never the hard part. Knowing what to build and why was always the hard part.


The Other Side of the Table

I've built a team from zero — started as the engineering department at a startup, then hired every person who came after. But I've also spent years on the other side of the interview equation, sitting in rooms wondering why companies hire based on things that don't actually indicate the impact someone will have on their infrastructure.

This is partially self-serving. I'll admit it.

I've never liked those kinds of interviews. I've turned down opportunities when "live coding" was listed as an interview step. Not because I can't code, but because I don't perform well in artificial scenarios where someone watches me type. And I've learned that says almost nothing about how I'll actually contribute.

"Walk me through how you'd optimize this SQL query."

Maybe I don't think that way on command. But give me an hour with the actual system, and I'll give you an elegant solution. The interview tested the wrong thing.
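To make that concrete: the kind of optimization that's obvious once you have the actual system in front of you, but awkward to perform on command, is often something like collapsing an N+1 query pattern into a single join. A toy sketch, with a hypothetical schema that exists only for illustration:

```python
import sqlite3

# Hypothetical schema: authors and their posts, purely illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'On Engines'), (2, 1, 'Notes'), (3, 2, 'Compilers');
""")

def post_counts_slow(conn):
    # The version that looks fine on a whiteboard: one query per author,
    # so N authors means N+1 round trips to the database.
    counts = {}
    for author_id, name in conn.execute("SELECT id, name FROM authors"):
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM posts WHERE author_id = ?", (author_id,)
        ).fetchone()
        counts[name] = n
    return counts

def post_counts_fast(conn):
    # The version you find with an hour and the real system: one query,
    # letting the database do the join and the counting.
    rows = conn.execute("""
        SELECT a.name, COUNT(p.id)
        FROM authors a LEFT JOIN posts p ON p.author_id = a.id
        GROUP BY a.id
    """)
    return dict(rows)

assert post_counts_slow(conn) == post_counts_fast(conn)
```

Both functions return the same answer; the difference only shows up under load, which is exactly the kind of context a whiteboard doesn't give you.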

I developed a philosophy: hire for how people think, not how they solve a specific problem you hand them under pressure. Look for reasoning through ambiguity. Look for the questions they ask before they start solving. Look for whether they can hold the full context of a system in their heads while making a decision about a single piece of it.

Some people thought I wasn't being rigorous enough. That I was just rationalizing my own weaknesses.

Then AI showed up.


The Vindication

Everything I believed about hiring just got validated by a trillion-dollar industry shift.

"Implement a binary tree from memory."

That's what AI does now. AI can produce code. AI can reverse a linked list. AI can bang out a React component or a REST endpoint in seconds. The skill we spent decades filtering for — producing code under pressure — just became a commodity.

What AI can't do is understand why a particular architecture fits a business problem. It can't navigate ambiguity. It can't push back on a product requirement that doesn't make sense. It can't mentor a junior engineer through their first production outage. It can't sit in a room with a CEO and translate business urgency into a technical roadmap.

The thinkers are leveraging AI to multiply their output. They know what to ask for. They can architect a solution, prompt their way through implementation, and validate the output against the system they hold in their head.

The code-producers — the ones who could crank out implementations but needed architects to tell them what to build — are now competing with tools that do the same thing faster and cheaper. They're just producing more code. Faster, sure. But unarchitected code that still needs someone to review, redirect, and often rewrite.

I didn't adapt my hiring philosophy to the AI era. The AI era caught up to my hiring philosophy.


The Startup Math

This matters most where I live: startups and headcount-constrained environments.

Big companies can absorb inefficiency. They can hire specialists. They can employ layers of people whose job is to produce the code that architects designed. They have slack in the system.

Startups don't. 

When you're building a team of three to five engineers, every single person needs to operate across the whole stack and the whole problem space. You're not hiring "a React developer" or "a backend engineer." You're hiring someone who can look at a business problem, understand the constraints, architect a solution, and execute it.

Here's the new math: One thinker with AI tools now does what used to take a thinker plus two implementers. But one implementer with AI tools just produces more code that still needs an architect to review and redirect.

If you only get five hires, you need five people who can think.

AI didn't change that math. It made it brutal.


What I Hire For Now

The fundamentals haven't changed — they've just become non-negotiable:

Architectural thinking. Can they hold a system in their head? Do they understand how a decision in one layer affects three others? When I describe a problem, do they ask about constraints and tradeoffs before they start solving?

Judgment over output. I care less about how fast someone can produce code and more about whether they can evaluate code — including AI-generated code. Can they spot when a solution is technically correct but architecturally wrong?

AI fluency as baseline. Not "have you used ChatGPT" but "do you understand that prompting is architecture?" The best candidates I see now treat AI as a thinking partner, not a code generator. They front-load the reasoning.

Comfort with ambiguity. Startups don't hand you clean specs. Can they operate when the requirements are half-formed and the priorities are shifting?

Communication under pressure. Can they explain their reasoning? Can they push back on me — on the engineering leader — when they think I'm wrong? That's more valuable than any technical skill.


The Interview Shifts

I've stopped asking people to perform. Instead:

"Here's a real problem we're facing. Walk me through how you'd approach it." Then I watch how they think. Do they jump to code or do they ask questions? Do they consider the business context or just the technical puzzle?

"Tell me about a time you disagreed with an architectural decision. What did you do?" I want to see if they can advocate for their position without being a jerk. That skill matters more than ever when you're working with AI and need to override its suggestions.

"How do you use AI tools in your workflow?" This isn't a trick question. I want to understand their mental model. Are they using AI as autocomplete, or as a collaborator? Do they trust it blindly or verify its output?

The goal isn't to see if they can solve a puzzle I already know the answer to. It's to see how they'd actually work.


The Uncomfortable Implication

If code is becoming a commodity, what happens to the traditional junior-to-senior pipeline?

I'm watching it compress in real time. Juniors with strong fundamentals and AI fluency are leveling up faster than any traditional timeline. The mid-level plateau — that long stretch where developers can produce but haven't yet learned to architect — might be collapsing.

That's good for the juniors who can make the leap. It's brutal for the ones who can't.

The path forward isn't to produce more code. It's to think better. To architect better. To use AI as a lever while developing the judgment that AI doesn't have.

And for the people doing the hiring: stop testing for commodity skills. Start hiring for thinking.

The tools have caught up. Make sure your team has.


I built an engineering team from scratch at a startup and have spent years on both sides of the hiring equation. I write about engineering leadership, AI workflows, and why the old playbooks need updating.