2026-03-08 | AI · Opinion · Industry · Strategy

AI Won't Replace You, But It Will Redefine What 'Enough People' Means - On Citrini Research's 2028 GIC


AI Won't Replace You, But It Will Redefine What "Enough People" Means

My thoughts on Citrini Research's 2028 GIC analysis, the power loom, and why the future isn't fewer people; it's fewer people per project, with a lot more projects.


A few months ago, Citrini Research published a piece on the 2028 Global Investment Conference that landed a line I haven't been able to shake:

"A competent developer working with Claude Code or Codex could now replicate the core functionality of a mid-market SaaS product in weeks."

Not perfectly. Not with every edge case handled. But well enough that the CIO reviewing a half-million-dollar annual renewal started asking the obvious question: what if we just built this ourselves?

I read that and thought: yeah, that tracks. I literally just built a suite of developer tools using AI assistance in a fraction of the time it would have taken me solo. The barrier to shipping functional software hasn't just been lowered; it's been taken out back and quietly decommissioned.

But here's where I think the conversation goes sideways. People read a line like that and jump straight to "so we need fewer people." And I think that's exactly half-right, which makes it the most dangerous kind of wrong.

The Power Loom Didn't Kill the Weaver

Let me take you to 18th-century India for a moment.

For centuries, India was the undisputed cotton textile powerhouse of the world. Skilled artisans, hand-spinners and handloom weavers, produced some of the finest muslin and calico fabrics on the planet. It was painstaking, manual, deeply human work. A single weaver, one loom, one piece of cloth at a time.

Then Edmund Cartwright patented the power loom in 1785. The machine could replicate the core motions of weaving (shedding, picking, battening) through mechanical power. One trained operator could now monitor ten to thirty looms simultaneously. The output-per-human ratio didn't just improve; it was obliterated.

So did the weavers all disappear?

No. India's textile industry today employs over 35 million people. It is the second-largest employment-generating sector in the country, right after agriculture. India is the world's largest producer of cotton and the second-largest exporter of textiles globally.

The work didn't vanish. It transformed. Fewer hands per loom, yes, but dramatically more looms. The mechanization of weaving didn't reduce demand for textiles. It exploded it. Fabrics that were once luxury goods became everyday commodities. New markets opened up. New products became possible. The pie got so much bigger that even a smaller slice per person still meant more work overall.

Sound familiar?

Fewer People Per Project, But a Lot More Projects

This is the part of the AI conversation I think most people are getting wrong.

Yes, HITL, human-in-the-loop, will always be important. AI will fail. Systems will break. Edge cases will surface at 2 AM on a Friday, as they are contractually obligated to do. You will always need a human who can look at the situation and say, "Ah, right, that's the thing we didn't think about."

But the number of humans you need to handle a given unit of work? That's dropping. Fast. The Citrini piece makes this point about KTLO, Keep The Lights On, and I have to admit, they're right. Maintaining a running system with AI assistance could easily become a one-person job where it used to take ten. That's not speculation; I've seen it in practice.

Here's the twist, though. Companies don't just want to keep the lights on. They want to innovate. And to innovate, you have to experiment. And experimentation has historically been expensive: expensive in time, expensive in headcount, expensive in opportunity cost.

AI changes that equation. Dramatically.

When you can spin up a prototype in days instead of months, the cost of a failed experiment drops to nearly zero. And when the cost of failure drops, rational organizations don't experiment less. They experiment more. A lot more. They test ideas they would have killed in the planning phase. They build internal tools they would have bought off the shelf. They explore market segments they would have ignored.

The demand for work doesn't shrink. It expands. It just gets distributed differently.

Amazon Already Knows This

If you want a case study in relentless experimentation, look at Amazon.

This is a company that has been running the "experiment constantly, fail cheaply, double down on winners" playbook since before it was fashionable. Two-pizza teams. Single-threaded ownership. A bias for action that borders on institutional restlessness. They didn't build one of the world's largest technology companies by carefully planning every move; they built it by trying a thousand things and keeping the ones that worked.

AI doesn't change Amazon's strategy. It accelerates it. Faster prototypes. Quicker validation cycles. Lower cost of failure. If you already believe that experimentation is the engine of innovation, and Amazon clearly does, then AI is premium fuel for that engine.

The companies that will struggle are the ones that look at AI and think, "Great, we can do the same work with fewer people." The companies that will thrive are the ones that think, "Great, we can do more work with the same people, or explore entirely new work we couldn't afford to touch before."

That's not a subtle distinction. That's a strategic fork in the road.

But Experts Still Matter, Especially at Scale

Here's where I push back on the more breathless AI takes.

Yes, a developer with AI tools can replicate a mid-market SaaS product in weeks. But "replicate the core functionality" and "run it reliably at scale in production" are two very different sentences. One of them fits in a demo. The other one keeps you up at night.

Large systems are not just code. They're webs of cross-service dependencies, security boundaries, compliance requirements, performance constraints, and organizational context that no AI model currently holds in its head. When your billing service talks to your auth service talks to your notification pipeline talks to your data warehouse, and something breaks at the intersection, you need a human who understands the whole picture. Not just the code. The architecture. The history. The trade-offs that were made three years ago and why.

These experts, the ones with wide, deep context across systems, are not going anywhere. If anything, they become more valuable in an AI-augmented world, because the volume of systems being built goes up, and someone still needs to make sure they all meet quality standards.

Here's the power loom analogy one more time, because I can't resist. Even after mechanization, the trained weaver in a textile mill was expected to walk along the cloth side of the looms, gently touching the fabric as it came from the reed, feeling for broken picks. The machine did the weaving. The human did the quality assurance. The expertise didn't become obsolete; it became more concentrated and more critical.

The Maintenance Paradox

The Citrini piece raises a fair point about the "build vs. buy" calculus shifting. When AI makes building cheap, the rational move is to reconsider that $500K SaaS renewal. Maybe you can just build it yourself.

But, and this is a big but, building it is only half the story. Maintaining it is the other half. And maintenance is where dreams of in-house replacement quietly go to die, AI or no AI.

That said, I'll concede something I initially resisted: AI does make maintenance dramatically cheaper. KTLO tasks that required a team can increasingly be handled by one person with good tooling. The monitoring, patching, updating, and minor firefighting that used to justify entire headcounts? AI compresses that.
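To make that shift concrete, here's a toy back-of-the-envelope comparison. The only figure taken from the Citrini piece is the $500K annual renewal; every other number is an invented assumption, and the whole thing ignores discounting, risk, and opportunity cost.

```python
# Toy build-vs-buy sketch over a fixed horizon.
# All numbers are hypothetical except the $500K SaaS renewal.

def total_cost(upfront: int, annual: int, years: int) -> int:
    """Cumulative cost of ownership: one-time build cost plus yearly run cost."""
    return upfront + annual * years

YEARS = 5

# Option 1: keep paying the vendor.
buy = total_cost(upfront=0, annual=500_000, years=YEARS)

# Option 2 (pre-AI assumption): a team builds it, a KTLO team maintains it.
build_pre_ai = total_cost(upfront=900_000, annual=600_000, years=YEARS)

# Option 3 (AI-assisted assumption): a small team builds it fast,
# roughly one person with good tooling keeps the lights on.
build_with_ai = total_cost(upfront=150_000, annual=120_000, years=YEARS)

print(f"buy:           ${buy:,}")            # → $2,500,000
print(f"build, pre-AI: ${build_pre_ai:,}")   # → $3,900,000
print(f"build, w/ AI:  ${build_with_ai:,}")  # → $750,000
```

Under the old cost structure, buying wins comfortably. Compress both the upfront build and the KTLO run rate, as the paragraphs above suggest AI does, and the calculus flips without the renewal price changing at all.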

So where does that leave us?

More software will be built, because AI makes building cheap. That software will be maintained by leaner teams, because AI makes maintenance cheaper. But those teams will still be overseen by experts with deep context, because AI doesn't understand your system, your users, or your business the way a seasoned human does.

The Bottom Line

The future of human capital in tech isn't a story of replacement. It's a story of leverage.

The power loom didn't eliminate the need for human skill in textile production. It changed the ratio of humans to output. It made each person dramatically more productive. And because that productivity unlocked new markets and new possibilities, the industry grew to employ more people than ever, just in different roles, with different skills, at a different scale.

AI is doing the same thing to software. Fewer people per project. More projects per company. More companies building things they never could before. And at the center of it all, the humans who understand the systems deeply enough to keep them running well.

HITL isn't going away. The "H" just gets a much bigger lever.

These are entirely my own thoughts, sparked by Citrini Research's 2028 GIC analysis. I'd recommend reading the original piece; it's sharp, well-researched, and worth your time even where I disagree with it.