Asher Cohen

What's Your Backup Plan When AI Gets Too Expensive?

Everyone is debating whether AI will replace engineers. Fewer people are talking about what happens when AI becomes too expensive, too restricted, or simply disappears from your workflow.

A few days ago, I came across a post asking software engineers a provocative question: "What's your backup plan if Artificial Intelligence writes better code than you in 2 years?"

It's the kind of question designed to trigger fear—fear that coding will become commoditized, fear that engineers will become obsolete, and fear that the value we've spent years building will disappear overnight.

But my immediate reaction was the opposite.

I commented: "I'm more worried about the opposite: what's your backup plan when LLM infrastructure becomes so expensive that companies will take it away from you?"

That comment struck a nerve. The economics behind the tools we rely on every day get far less scrutiny than the replacement debate.

We May Be Living in the "Cheap AI" Era

Right now, many developers are experiencing AI during its most accessible phase:

  • Low-cost or "free" access to incredibly capable models
  • Bundled AI features inside tools we already use
  • Startups burning VC money to acquire users
  • Cloud providers subsidizing adoption to gain market share
  • Enterprise licenses hiding the real cost from individual teams

All of this creates the illusion that AI is cheap.

But history says otherwise.

We've seen this pattern before: first comes adoption, then dependency, then monetization. The economics eventually catch up.

GitHub is already doing it: Copilot has been moving from a simple flat subscription toward tiered plans with metered usage.

AI Is Infrastructure, Not Magic

A lot of the conversation around AI treats it like magic. It isn't. It's infrastructure.

Behind every AI-generated line of code sits a massive stack of real-world costs: GPU clusters, electricity, cooling, networking, storage, inference costs, API gateways, compliance layers, and regional hosting requirements. Every prompt, completion, and automation has a real operational price behind it.

This reminds me of the early cloud era. At first, cloud felt almost free—generous free tiers, startup credits, aggressive enterprise discounts, and rapid onboarding with very little governance. Then came reality: FinOps meetings, cost optimization initiatives, quotas, budget approvals, and architectural reviews.

LLMs may follow the same trajectory. Today's "just use Copilot" may become tomorrow's "justify your monthly inference budget."

AI Dependency Is Becoming an Operational Risk

Teams are already reshaping workflows around AI. We use it for writing code, reviewing pull requests, generating tests, documenting systems, answering support tickets, searching internal knowledge bases, and onboarding developers faster.

This creates massive leverage. But it also creates dependency.

What happens if:

  • Pricing increases 5x?
  • Your company hits usage caps?
  • Compliance blocks external model usage?
  • Latency makes workflows painful?
  • A vendor changes terms overnight?
  • The tool becomes unreliable during peak demand?

Suddenly, productivity assumptions break.
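To make the pricing risk concrete, here's a back-of-envelope sketch. Every number in it is hypothetical (assumed blended token price, assumed usage per developer, assumed team size); the point is the shape of the math, not the specific figures.

```python
# Back-of-envelope monthly inference budget. All numbers are hypothetical
# assumptions for illustration, not real vendor pricing.
PRICE_PER_1K_TOKENS = 0.01        # assumed blended $/1K tokens (input + output)
TOKENS_PER_DEV_PER_DAY = 500_000  # assumed heavy AI-assisted usage
TEAM_SIZE = 50
WORKDAYS_PER_MONTH = 21

def monthly_cost(price_per_1k: float) -> float:
    """Total monthly spend for the whole team at a given token price."""
    tokens = TOKENS_PER_DEV_PER_DAY * TEAM_SIZE * WORKDAYS_PER_MONTH
    return tokens / 1000 * price_per_1k

today = monthly_cost(PRICE_PER_1K_TOKENS)
after_5x = monthly_cost(PRICE_PER_1K_TOKENS * 5)
print(f"today: ${today:,.0f}/month, after a 5x increase: ${after_5x:,.0f}/month")
# → today: $5,250/month, after a 5x increase: $26,250/month
```

At these assumed numbers, a 5x price change turns a rounding error into a line item the CFO will ask about. That's the moment usage caps and budget approvals appear.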

The more "AI-native" your workflow becomes, the more fragile it may be.

This isn't an argument against AI. It's an argument against building brittle systems—and brittle habits.

The Best Engineers Were Never Paid to Type Fast

This is where the original post got one thing right. In my experience, the hardest problems in software engineering were rarely about writing code.

They were about:

  • Defining the right problem when requirements are incomplete or conflicting
  • Aligning stakeholders across product, design, and business
  • Navigating trade-offs between speed, quality, and cost
  • Working within legacy constraints
  • Making decisions with incomplete information
  • Coordinating across teams and dependencies

AI can help with implementation. It can accelerate execution.

But it doesn't own accountability.

It doesn't negotiate trade-offs in a boardroom. It doesn't carry context across political or organizational boundaries. It doesn't take responsibility when the wrong decision ships to production.

The highest-value engineers will continue to create value long after "writing code" becomes cheaper.

Build Anti-Fragile Engineering Workflows

The future is neither "ignore AI" nor "depend entirely on AI." The better path is to use AI aggressively while staying effective without it.

That means:

  • Keeping your core engineering fundamentals sharp
  • Maintaining manual debugging and architecture skills
  • Understanding your stack deeply enough to work without autocomplete
  • Designing workflows that degrade gracefully
  • Exploring self-hosted or open-source alternatives where sensible
  • Measuring ROI instead of assuming AI is always worth the cost
  • Avoiding vendor lock-in where possible

The strongest engineers in the next decade may not be the ones who use AI the most. They may be the ones who can use it effectively without becoming dependent on it.

Ask the Better Question

Maybe we're asking the wrong question.

Instead of asking, "What happens when AI writes better code than me?" we should also ask, "What happens when AI becomes too expensive, too restricted, or disappears from my workflow?"

Because the future doesn't belong to engineers who fear AI, and it doesn't belong to engineers who blindly depend on it.

It belongs to engineers who can adapt—engineers who understand systems, engineers who understand trade-offs, and engineers who can create value with AI or without it.

Don't just ask whether AI can replace your skills.

Ask whether your skills still work when AI disappears.

#ai #engineering #software #architecture #career