
AI-First Is a Structure, Not a Feature


Replacing 100 SDRs with agents is not AI-first. It is a restructuring branded as innovation. Knowing the difference is what this piece is about.

In January 2026, Monday.com announced it had replaced its entire 100-person SDR team with AI agents. Response times dropped from 24 hours to 3 minutes. Conversion rates went up. The story spread everywhere. Proof that AI was reshaping the enterprise.

Except it probably isn’t what it looks like.

Marc Andreessen made the point bluntly on 20VC a few weeks later: companies that hired recklessly during the zero-rate COVID years are now overstaffed by 25 to 75 percent. AI is the “silver bullet excuse” to cut the bloat. If you can fire 100 SDRs and performance improves, the question isn’t how smart the agents are. It’s why you had 100 SDRs in the first place.

That’s not AI-first. It’s a mature SaaS business running a restructuring and branding it as innovation. A different thing entirely.


The wrapper trap

Sixty percent of YC’s W26 batch are AI companies, up from 40% two years ago. Fourteen of them hit $1 million in annual revenue by Demo Day, the fastest cohort ever.

The uncomfortable question: how many survive the next model release?

When Anthropic shipped Claude Cowork with its legal and security plugins, LegalZoom dropped nearly 20% and Cloudflare fell 8% in a single session. Not because Claude targeted them. Because it made capabilities they’d been selling as premium features available to anyone with an AI assistant. Days later, OpenAI’s GPT-5.3-Codex landed with similar capabilities, compounding the pressure.

This is what happens when your product is a layer on top of a model. The model improves. Your layer becomes a commodity. The company that built the model ships a better version of your product as a feature inside their platform.

Klarna understood this early. They replaced their enterprise CRM with an AI system they built in-house. The interesting detail: Klarna didn’t add AI to their existing workflow. They rebuilt the workflow around what AI could do natively. That’s a structural decision, not a feature decision.


Selling the work vs. selling the tool

Harvey helps lawyers draft documents faster. Crosby drafts NDAs for companies directly, no lawyer needed for routine agreements. Both use the same underlying models. Harvey sells the tool. Crosby sells the work.

When the model improves, Harvey’s users get slightly faster drafting. Crosby’s service gets cheaper and more reliable. One is racing against the model. The other is riding it.

Sequoia put numbers on this: for every dollar companies spend on software, they spend six on services. The autopilot founders, the ones delivering finished work rather than better tools, are going after a market six times larger.

But not every “autopilot” is real. Quanta raised $20 million to replace QuickBooks with AI-driven accounting. Their approach is honest: AI handles the mechanical work, humans bridge the last mile for reliability. That’s genuinely AI-first because the system can’t function without the model at its core. Compare that to a startup that bolts GPT onto a spreadsheet and calls itself “AI-powered accounting.” Same label. Completely different architecture. The second one dies when the spreadsheet platform adds the same integration.

The test is simple. Remove the AI. Does the product still exist? If yes, it’s a feature. If no, it’s a structure.


What the small team looks like

Josh Mohrer runs Wave AI. Solo founder. $7 million in annual revenue. Roughly $3 million in profit. He does all engineering and support himself using AI agents.

It’s tempting to treat this as a feel-good story about the democratisation of building. It’s more useful to ask: what is structurally different about Mohrer’s company versus a traditional SaaS startup at the same revenue?

The answer is that every function, from support to engineering to operations, runs through AI as the default path, not the exception. He doesn’t use AI to augment a team. There is no team. The AI is the operational structure. Adding a human to this system would be a regression, not an improvement, for the tasks the agents handle.

Dan Shipper’s Every runs five AI products at $1.3 million in revenue growing 45% per quarter. Their internal culture has an interesting norm: if you’re not using AI first to code, write, or design, people ask you why. That’s a cultural structure, not a feature adoption.

Giga started as a handful of engineers out of YC. They built an internal agent called Atlas that helped them close DoorDash and other large enterprise customers, beating companies many times their size. Again: the AI isn’t enhancing a traditional sales process. It replaced the need for one.

These companies aren’t succeeding because they use AI. They’re succeeding because they’re structured around it. The difference sounds semantic. It’s not. It determines whether your team gets more capable as models improve, or whether it just gets slightly faster at the same tasks.


Where the gap really is

Software engineering accounts for 50% of all AI agent activity today. Healthcare, law, finance, logistics: each is still under 5%.

The models are capable. The deployment isn’t there. These industries run on undocumented workflows, institutional memory, and human judgment that has never been formalised. MIT research found that 95% of enterprise AI pilots fail to reach production, not because the technology doesn’t work, but because nobody mapped the gap between a controlled demo and the chaos of a real operation.

The 5% that succeed share a trait: they embed engineers with the customer to surface the unwritten rules that no dataset captures. They treat the distance between demo and deployment as the product challenge, not an afterthought.

This is where the press-release version of AI and the reality diverge most sharply. The headlines say “AI replaces 100 SDRs.” The reality is closer to “AI works brilliantly in a controlled environment and breaks in production because nobody accounted for the seventeen exceptions the human team handled without thinking.”

Better models won’t close that gap. Better judgment will: knowing where models work, where they don’t, and how to build the scaffolding that makes them reliable when real money and real regulations are involved.

That judgment, not the AI itself, is what makes a company AI-first.
