The Rise of the AI-Native Candidate
Mar 08, 2026
4 min read
TL;DR: A new hiring benchmark is quietly forming in ANZ tech. Companies aren’t just looking for people who use AI; they want people who’ve rebuilt how they work around it. Here’s what that actually means, the five stages of getting there, and how to make sure the right people can see it.
I. The thing I had to Google
A few weeks ago I was talking to a colleague, another recruiter, and they said something that stopped me mid-sentence. They were describing what their clients kept asking for. “We need AI-native candidates.” Not AI-aware. Not AI-proficient. AI-native.
I didn’t want to admit I wasn’t sure what that meant, so I waited until after the call and typed it straight into ChatGPT.
What came back was interesting. But what stayed with me was the conversation afterwards, when I called a few more people in the industry and heard the same phrase coming up unprompted. Hiring managers were using it in briefings. Recruiters were filtering for it without always knowing how to define it. Something had shifted and most candidates, I suspected, had no idea it was happening.
If I didn’t know what it meant, and I’ve spent a decade on the inside of this process, the odds that it’s on your radar are pretty low. That’s why I’m writing this.
II. What AI-native actually means
It’s not about having ChatGPT bookmarked. It’s not about listing AI tools in your skills section. Every candidate is doing that now, which means it registers as noise rather than signal.
An AI-native professional is someone who has rebuilt their workflows from the ground up with the assumption that AI is always in the room. Their outputs are shaped by it. Their speed is shaped by it. The way they approach a problem is shaped by it. You can’t separate the work from the tool, because the tool has become part of how they think.
The best analogy I’ve come across is language fluency. There’s a difference between someone who studied French and someone who dreams in it. The first person translates. The second person just thinks. AI-native is the second person. The work doesn’t feel like it was assisted. It feels like it came from somewhere different entirely.
III. Why this is happening right now
Zapier made AI fluency a hiring requirement across every role, not just engineering. Canva now requires candidates to use AI during coding interviews. Not as an option. As a requirement. Shopify and Duolingo have baked it into their product and hiring processes at every level.
These aren’t outliers. They’re the leading edge of something moving quickly through the ANZ tech market, and if history is any guide, what companies like Atlassian and Canva normalise today becomes the baseline expectation everywhere else within 12 to 18 months.
Here’s the uncomfortable part. Two candidates walk into the same process with identical experience, identical tenure, identical skills on paper. One has rebuilt how they work around AI. The other hasn’t. They will not be evaluated the same way. The hiring manager may not even be able to articulate why one feels sharper than the other. But the gap is real, and it’s already showing up in shortlists.
IV. The five stages (most people are stuck at two)
This isn’t a checklist. Think of it more like a progression, the kind where each stage only becomes visible once you’ve actually lived the one before it.
Stage 1. Foundations. AI is always within reach. You’ve replaced Google for most queries. Claude or ChatGPT is pinned in your browser. You use voice input. You’ve downloaded the mobile apps. The habit exists, even if it’s still shallow.
Stage 2. AI as your coach. You stop using it just to do things and start using it to think better. You feed it your meeting transcripts and ask what you missed. You ask it to interview you about your own role and surface where you’re wasting time. It becomes a thinking partner, not just a task executor. Most people who think they’re “good with AI” are somewhere between Stage 1 and here.
Stage 3. AI as your worker. You start applying what I call the 10-80-10 rule. You do the first 10%, the direction, the context, the judgment call about what matters. AI handles the middle 80%, the execution. You own the final 10%, the quality check, the vibe check, the thing that requires taste. Your output volume goes up. Your standards don’t drop, because your taste is sharper than the AI’s defaults and you know it.
Stage 4. AI as a system. You’ve built a personal prompt library. You test different models for different tasks. You stop reinventing the wheel every time you sit down to work. Your AI usage is repeatable and intentional, not reactive and random.
Stage 5. AI as infrastructure. You automate. Workflows run in the background. Meeting recordings get transcribed, analysed, and summarised without you lifting a finger. You’ve moved from using AI to operating it. This is where the real leverage lives, and it’s where the gap between you and most of the market becomes very difficult to close.
The people who reach Stage 4 or 5 are the ones hiring managers are starting to talk about in briefings. Not in a theoretical way. In a “do you know anyone like this” way.
V. “But my company uses Copilot and that’s it”
Here’s the objection I hear constantly. My company has restricted tools. We’re on Copilot only. IT won’t approve anything else. I don’t have the opportunity to go deep on AI at work.
Here’s what that objection misses. Nobody is assessing what tools your employer gave you access to. They’re assessing what you did with whatever you had.
Build your AI fluency outside of work hours. Use your personal time to reach Stage 3 and 4 on your own projects, your own writing, your own research. Rebuild a workflow you own completely. Document what changed. That’s the story you bring into an interview, and it’s more compelling than anything your company’s IT policy allowed you to do.
The candidates who stand out aren’t the ones with the most corporate access. They’re the ones who were curious enough to figure it out on their own time.
VI. Being seen is a separate problem
Getting there is one thing. Making sure the right people can see where you are is another, and it requires a different kind of effort.
The mistake most candidates make is adding “AI tools” to their CV and calling it done. That’s Stage 1 behaviour described in Stage 1 language. It tells a recruiter or hiring manager nothing they can act on.
Show your workflow, not just your output. In interviews, don’t just describe what you built. Describe how you built it. “I used Claude to synthesise 40 pages of user research into five themes, then validated those with the team” is a story. And here’s why it lands: interviewers are no longer testing what you know. They’re testing how you think with AI. When you explain your process, you’re giving them actual proof that you’ve done it enough times to work through the hallucinations, catch the slop, and know when to override the output. That’s what separates someone with real workflow experience from someone who’s read about it.
Update your LinkedIn to reflect AI collaboration in outcome terms, not tool terms. Not “proficient in ChatGPT.” Something like: “Redesigned our sprint retrospective process using AI to surface patterns across six months of team feedback, reducing prep time by 60%.” The reason this works is that it signals something specific to a recruiter scanning your profile. It’s no longer about the AI tool. It’s about what you did with it, how you moved the needle, how you improved a process. Hiring managers are now weaving AI assessment into seemingly standard interview questions (The Interview Guys have an article that goes in depth on this), and your LinkedIn is often where the impression forms before you even get on a call. Outcomes and specifics are what they’re looking for.
Build a prompt vault and mention it. Tell interviewers you maintain a personal library of prompts for recurring tasks. It signals that your AI usage is intentional and systematic. A simple starting point: pick three prompts you use every week and save them in a Notion doc. That’s the vault. Start there.
Narrate your judgment, not just your usage. The candidates who land best in interviews are the ones who can describe a time they overrode the AI’s output and explain why. That’s the signal that separates someone with taste from someone who just hits generate.
VII. Where do you actually sit?
Here’s the honest version of this question. Not the one you’d answer in an interview. The one you’d answer at 11pm when nobody’s watching.
Have you rebuilt any workflow in the last three months because AI made it possible? Could you walk into an interview tomorrow and give three specific examples of AI collaboration, with outcomes attached? Do you have a prompt library, or are you starting from scratch every time you sit down?
If the answers are shaky, you’re not behind yet. But the window is closing faster than most people realise, and the gap between those who are experimenting now and those who are waiting to see how it shakes out is not linear. It’s compounding every month.
The AI-native candidate isn’t a future archetype. They’re already in the market. They’re already in the rooms where shortlists get made. The question is whether you’re one of them, or competing against them.
Cheers,
Eli
P.S. If you want a practical place to start, I put together a framework that walks you through exactly how to position yourself in the current market. You can check it out at careersycoaching.com/the-careersy-connection-framework
Stay Sharp Between Applications
Join 1,000+ ambitious tech pros and get one practical, recruiter-backed career tip every Sunday to help you land interviews, negotiate offers, and grow in your role.
No fluff. No spam. Just real advice from inside the hiring room.