There's a moment every engineering team hits where AI stops being a cool experiment and starts being load-bearing infrastructure. You don't plan for it. You just notice one day that your velocity assumptions have shifted, your code review posture has changed, and the humans on your team are making different decisions than they were eighteen months ago.

We've hit that moment at Rokt. We're not done hitting it. But we've been through enough of the journey that I can describe what the stages actually look like from the inside. That's different from what they look like in blog posts and conference talks.

Stage 1: Autocomplete

This is where almost everyone starts, and where a surprising number of organizations quietly stop.

At this stage, AI is a productivity hack for individuals. Tab completion that understands context. Boilerplate generation. The occasional function body that comes out almost exactly right. Engineers who invest time in learning to prompt effectively get faster. Engineers who don't, don't.

The organizational structure doesn't change. Review processes don't change. Architecture doesn't change. You're getting individual gains from a tool, the same way a faster laptop gets you individual gains.

The mistake teams make here is calling this transformation. It's not - it's efficiency. Those aren't the same thing.

Stage 2: Co-Pilot

The shift to co-pilot is less about tooling and more about the conversation.

At this stage, AI participates in design discussions. You're not just asking it to fill in a function body. You're asking it to review your approach before you build it, identify edge cases in your spec, push back on your architecture decisions. The engineers who get good at this move noticeably faster than the ones who don't.

The visible signal is a growing gap between team members who treat AI as a thought partner and those who treat it as a code generator. Both camps are using the same tools. The difference is what they're asking for.

Prompt craft becomes a real skill divide here. "Write me a function that does X" is a code generator prompt. "Here's my current approach, here's the constraint I'm working under, here's what I'm uncertain about: what am I missing?" is a co-pilot prompt. The outputs are in different leagues.

The quality of your specs also starts to matter more. Garbage in, garbage out applies in both directions. The engineers who write the clearest context get the most useful output. The ones who want AI to figure out the requirements from vague inputs get vague code back.

Stage 3: Co-Author

This is the stage where the identity shift happens, and it's the most uncomfortable one.

At co-author stage, AI agents participate in the full development lifecycle: suggesting, implementing, running test suites, iterating on failures, generating PRs. The human role starts shifting from construction to review, from writing to directing.

Most engineering organizations stall here, not because the tooling doesn't work, but because the culture hasn't caught up. Engineers who've spent their careers valuing craft (the elegant function, the clean abstraction, the well-turned loop) suddenly find that the thing being evaluated isn't their code. It's their judgment about code something else wrote. They start looking a lot more like leads and a lot less like their mental model of an individual contributor.

The organizations that get through this stage fastest are the ones that give explicit cultural permission for the role shift. At Rokt, that meant being direct with our teams: your job isn't to write code anymore. It's to express intent clearly, verify output rigorously, and own the outcomes of what ships. The code is downstream of that. Code becomes just one tool among many for generating outcomes for customers.

The architectural prerequisites turned out to matter enormously here too. Services that deploy independently, own their data, and expose contracts rather than internals are architecturally safe for agentic development. Tightly coupled systems aren't. You can't safely let agents operate in a codebase where a change in one place can cascade unpredictably into ten others. We'd already built toward encapsulation, and that investment paid off in ways we didn't fully anticipate when we made it.
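To make "contracts rather than internals" concrete, here's a minimal sketch in TypeScript. The names (OrdersContract, OrderSummary) are hypothetical and aren't our actual services; the point is the shape of the boundary, not the specifics.

```typescript
// Hypothetical contract exposed by an orders service. Consumers, whether
// human- or agent-written, depend only on this interface; the tables, caches,
// and internals behind it can change without cascading into callers.
export interface OrderSummary {
  orderId: string;
  totalCents: number;
  placedAt: string; // ISO 8601 timestamp
}

export interface OrdersContract {
  getOrder(orderId: string): Promise<OrderSummary>;
  listOrdersForCustomer(customerId: string): Promise<OrderSummary[]>;
}

// Safe territory for an agent: this consumer only knows the contract.
export async function buildReceipt(
  orders: OrdersContract,
  orderId: string
): Promise<string> {
  const order = await orders.getOrder(orderId);
  return `Order ${order.orderId}: $${(order.totalCents / 100).toFixed(2)}`;
}

// Unsafe territory: a consumer that reaches past the boundary into another
// service's schema. An agent renaming a column or splitting a table breaks
// this caller in ways nothing at the boundary will catch.
//
//   const row = await db.query(
//     "SELECT total_cents FROM orders WHERE id = ?", [orderId]
//   );
```

An agent can rewrite everything behind OrdersContract without the change leaking into consumers; the commented-out pattern is exactly the unpredictable cascade described above.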

Stage 4: Supervisor

This is where the developer role converges with the engineering manager role, and it changes what senior engineering talent actually looks like.

At supervisor stage, you're not reviewing lines of code. You're governing a fleet of agents: setting intent, reviewing direction, enforcing standards, watching for drift. The skills that matter are the ones that have always mattered for good engineering management. Clear communication of what good looks like. Judgment about what to accept and what to send back. Awareness of systemic risk.

The engineers who excel here aren't necessarily the ones who wrote the best code. They're the ones who communicate with the most precision. Who can look at 400 lines of generated output and quickly identify the three things that are wrong. Who understand the system holistically well enough to catch a technically correct solution that's architecturally wrong. The ones who can pit two agents against each other and quickly judge the winning output.

Claire Southey, our Chief AI Officer, recently put it well:

"The defining skill of future software teams won't be how much code they write, but how clearly they express intent, move from ambiguity to clarity, and govern the impact of what they build."

That's a job description for a supervisor. It describes the senior ICs and leads on our fastest teams right now. The rest of this transition is making it apply to everyone, from the rawest college graduate all the way up to our executives.

What Makes the Transitions Stick

A few things have mattered more than we expected.

A paved road. Approved tooling with good defaults removes friction and removes the "what tool should I even use" decision from every team. We didn't want fifteen different AI coding setups across engineering. We made the good path the easy path.

Shared learnings. The engineers who pushed into each stage first learned things the rest of the organization needed. We built mechanisms to surface those learnings quickly, through async channels rather than quarterly offsites, so people could see what was working in real time.

Measurement. We applied the same rigor to internal AI adoption that we apply to product decisions: instrumentation, control groups, clear success criteria. "It feels faster" is not a metric. We measured AI-assisted output (cycle time, review cycles, defect rates) so we could see whether we were actually improving or just feeling like we were.

The transitions between stages aren't automatic. They require deliberate decisions about how work gets structured, what skills you're hiring for, and what you're asking engineers to become. Organizations that treat the transition as passive (buy the tools, wait for the culture to follow) get stuck between stages and wonder why the ROI isn't materializing.

The tools are fine, and they keep getting better. The bottleneck is almost always the organizational decisions around them. At this stage, technology is moving faster than culture. It’s my job as an engineering leader to ensure the culture can keep up. 

At Rokt, getting those decisions right has compounded. The same engineering culture that made agentic development viable is what lets us move as fast as we do on the product side: shipping models that get smarter with every transaction, building personalization infrastructure that operates in milliseconds, iterating on the things that matter to our customers without the drag of organizational inertia. The stages above aren't abstract. They're what we built on.

Melissa Benua is VP of Developer Experiences at Rokt.
