What the middle looks like — and what it tells you about hiring engineers in 2026.
I just wrapped up hiring four engineers for a team I'm building from scratch. Forty-plus candidates screened. Multiple rounds. Take-home challenges that required AI-assisted development. And a discovery I wasn't expecting: AI proficiency has become the single most revealing signal in our entire interview process — more telling than years of experience, more predictive than pedigree, and almost impossible to fake.
We made AI fluency a non-negotiable from day one. Not preferred. Not a bonus. Required. That decision alone filtered our applicant pool in ways I didn't anticipate. But the real surprise wasn't who it filtered out — it was the spectrum it revealed among the people who remained.
All four engineers we hired shipped PRs in their first week. That outcome — and the process that produced it — is what this post is about.
The Five Strata
After weeks of interviews, take-home reviews, and presentations, a clear taxonomy emerged. Every candidate we evaluated fell into one of five categories.
Stratum 1: Non-Users
These candidates don't use AI tools at all. Some by choice, some out of unfamiliarity. We didn't interview them. This isn't a value judgment on their engineering ability — it's a practical one. Our workflow is built around AI collaboration. An engineer who doesn't use AI tools would be working at half the velocity of the rest of the team from day one, and they'd be learning a new paradigm while simultaneously learning our codebase, our product, and our culture. That's too many ramps at once.
Stratum 2: The Reluctant Adopters
"I use it for boilerplate and tests."
This was the most common answer in early screens, and it's a tell. These engineers have tried Copilot or ChatGPT. They've asked it to write a unit test or scaffold a component. But they haven't integrated AI into how they think about building software. It's a convenience tool sitting next to their IDE, not a collaborator shaping their architecture decisions.
The reluctance sometimes comes from legitimate concern — about code quality, about understanding what's being generated, about intellectual honesty. I respect that. But in practice, these engineers are leaving 60-70% of AI's value on the table, and they don't know it yet.
Stratum 3: The Green Users
These candidates use AI more aggressively — Copilot autocomplete, ChatGPT for debugging, maybe Claude for longer explanations. But they lack the architectural thinking to guide the tool effectively. The AI suggests something, they accept it. It goes off the rails, they don't know why. They can't distinguish between a good suggestion and a plausible-sounding bad one.
This is the most dangerous stratum for hiring because it looks productive. The code ships. The take-home gets submitted. But when you ask them to walk through a design decision, you get silence or a restatement of what the AI told them. The human isn't driving — they're just holding the steering wheel.
One candidate submitted a take-home that was clearly AI-generated in a single pass. No iteration, no architectural decisions, no evidence of human judgment. The output was functional. It was also completely undifferentiated from what any other candidate could produce by typing the same prompt. That's not engineering. That's transcription.
Stratum 4: The Collaborators
This is who we hired.
These engineers use AI as a genuine thought partner. They prompt it with architectural questions and evaluate the options it presents. They accept suggestions they agree with, modify the ones that are close but not right, and reject the ones that don't serve the design. They can articulate why they made each of those decisions.
One of our strongest candidates — a fresh college graduate — submitted a take-home with a dedicated slide documenting his AI workflow: what he accepted, what he modified, what he rejected, and the reasoning behind each. He used Claude to generate architectural options, chose the ones that fit the domain (a financial ledger that needed auditability and idempotency), and pushed back when the AI suggested unnecessary complexity. The output wasn't just good code — it was evidence of good thinking.
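The candidate's actual submission isn't reproduced here, but the two properties he designed for — idempotency and auditability — are easy to illustrate. A minimal sketch (all names hypothetical): entries are immutable and append-only, so the log itself is the audit trail, and an idempotency key makes retries safe.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Entry:
    """An immutable ledger entry. The idempotency_key lets callers
    retry safely: the same logical transaction is recorded once."""
    idempotency_key: str
    account: str
    amount_cents: int


@dataclass
class Ledger:
    """Append-only ledger: entries are never mutated or deleted,
    so replaying the log reconstructs any balance (auditability)."""
    _log: list = field(default_factory=list)
    _seen: set = field(default_factory=set)

    def post(self, entry: Entry) -> bool:
        """Append an entry unless its idempotency key was already seen.
        Returns True if appended, False if it was a duplicate replay."""
        if entry.idempotency_key in self._seen:
            return False
        self._seen.add(entry.idempotency_key)
        self._log.append(entry)
        return True

    def balance(self, account: str) -> int:
        """Derive a balance from the full entry history."""
        return sum(e.amount_cents for e in self._log if e.account == account)
```

The point of a sketch like this in a take-home review isn't the code itself — it's whether the candidate can explain why the entries are frozen and why `post` refuses a replay instead of silently double-counting.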
That's the signal. Not "do you use AI?" but "can you think clearly while using it?"
Stratum 5: The Vibe Coders
On the far end of the spectrum are candidates who've gone all-in on AI but lost the plot entirely. They can prompt their way to a working prototype, but they can't explain the architecture. They can't debug without AI assistance. They can't make a design tradeoff because they've never had to — the AI made all the decisions and they just kept saying yes.
Vibe coding produces impressive demos and fragile systems. In a startup where every engineer's decisions compound into the foundation everything else gets built on, that's unacceptable. We need people who understand what they're building, not just people who can describe what they want built.
The Gap That Matters
The critical gap in this taxonomy is between Stratum 3 and Stratum 4. That's where most candidates live, and it's the hardest transition to screen for. Both groups use AI regularly. Both produce working code. The difference is in the relationship between the human and the tool.
Stratum 3 engineers follow the AI. Stratum 4 engineers lead it.
You can't detect this difference from a resume. You can barely detect it from a traditional technical interview. The only reliable way we found to surface it was a take-home challenge that explicitly required AI usage and a follow-up presentation where candidates walked through their process — not just their output.
When a candidate shows you their AI workflow and you can see the moments where they exercised judgment — where they chose Option A over Option B because it better served the domain, where they caught an AI suggestion that was technically correct but architecturally wrong, where they rejected a recommendation and explained why — that's when you know you've found a Stratum 4 engineer.
The Enterprise Lag
Most large organizations are still working out their relationship with AI development tools. Legal teams are reviewing IP implications. Security teams are evaluating data exposure. HR departments haven't updated job descriptions to reflect a world where AI fluency is a meaningful axis of engineering capability.
That's not a criticism — those concerns are legitimate, and bigger orgs have more surface area to protect. But it does create an asymmetry. Smaller teams that build AI-native hiring processes now will have a meaningful head start on talent acquisition. Not because they have bigger budgets or better brands, but because they're asking better questions.
The Stratum 4 engineers exist. They're shipping right now. The question is whether your hiring process can recognize them.
The Hiring Framework
For engineering leaders thinking about this, here's what worked for us:
Make AI fluency a requirement, not a preference. Put it in the job posting. Mention it in the screening call. Make it clear this isn't optional. This alone will filter your applicant pool in useful ways.
Design take-home challenges that require AI collaboration. Don't just allow it — require it. And ask candidates to document their process. The documentation is more valuable than the code.
Evaluate the workflow, not just the output. A polished app built by a Stratum 5 vibe coder looks identical to a polished app built by a Stratum 4 collaborator — until you ask them to present their decision-making process.
Look for the reject signal. The strongest indicator of a Stratum 4 engineer isn't what they accepted from AI — it's what they rejected and why. Anyone can say yes to a good suggestion. It takes real understanding to say no to a plausible one.
AI Still Needs an Architect
The throughline of everything I've learned in this hiring process comes back to a simple principle: AI is an extraordinary tool, but tools don't make decisions. Architects do.
The engineers I hired aren't replaceable by AI. They're amplified by it. They use AI to move faster, explore more options, and ship with higher quality — but the judgment, the taste, the domain understanding, the ability to say "this is the right approach for our product and our users" — that's irreducibly human.
For now.
And "for now" is exactly why you want engineers who can think clearly with AI rather than just use it. As these tools get better, the gap between Stratum 3 and Stratum 4 won't shrink. It will widen. The engineers who learn to lead AI today will be the ones capable of leading whatever comes next.
The ones who just follow it will be following something else.
Nate Craddock is CTO at CasaPerks, where he built an engineering team from scratch with AI fluency as a core requirement. He writes about engineering leadership, AI-assisted development, and the intersection of human judgment and machine capability at natecraddock.com.