Preface

Software development is industrializing. The craft era — where skilled individuals wrote code by hand — is being replaced by a factory model where humans design production systems and autonomous agents operate them. This shift commoditizes features, restructures teams around infrastructure rather than individual productivity, and compresses the human role to judgment: what to build, why, and whether the output is good enough.

That paragraph took me about a year to write.

Not because the words were hard. Because the argument underneath them kept growing. I started in early 2025 writing what I thought would be a single LinkedIn article about how AI was changing product strategy. That article became two. Then four. Then a 14-source industry report. Then a unified theory document I kept returning to at odd hours, testing each claim against a new piece of evidence, finding that the predictions kept holding.

The ideas in this book developed the way most honest arguments develop — not as a grand vision delivered whole, but as a series of encounters with specific problems that turned out to be connected. I noticed that features were getting easier to copy and wrote about what that meant for defensibility. I tried to scale my personal AI setup to a team and discovered the structural reasons it broke. I read the engineering disclosures from Stripe, Spotify, OpenAI, and Ramp and realized they'd all converged on the same architecture independently — which meant the pattern was stable, not accidental. I watched organizations adopt AI tools and saw the same failure modes repeat across companies that had never talked to each other.

Each piece went deeper than the last. Each one pulled on threads from the previous ones. At some point I stopped thinking of them as separate articles and started thinking of them as chapters in a cumulative argument about what happens when building software costs nothing.

That is what this book tries to lay out.

Who this is for

This is written for product leaders, engineering leaders, and founders who are past the "should we use AI?" conversation and into the harder one: what does AI's impact on software production mean for our strategy, our team structure, and our business model?

If you manage a product roadmap, the first two chapters will probably be uncomfortable. They argue that the thing most roadmaps optimize for — feature delivery — is becoming the least defensible category of work. Not irrelevant, but no longer strategic in the way it used to be.

If you lead an engineering organization, the middle chapters deal with the infrastructure and organizational shifts required to move from individual AI productivity (which doesn't scale) to team-level AI systems (which do, but demand different investments than most leaders expect).

If you're a founder or executive, the later chapters address the business model pressure that arrives when your competitors can rebuild your feature set in a quarter — and what the durable alternatives look like.

What this is not

This is not a how-to guide for AI coding tools. I won't walk you through prompt engineering or tell you which model to pick. The tools are changing too fast for that kind of advice to age well, and there are better resources for it anyway.

This is not a prediction about AGI timelines. I don't know when or whether we get artificial general intelligence, and I'm skeptical of anyone who claims to. The arguments here are grounded in what's already happening — production systems running today at organizations willing to publish their numbers. The thesis doesn't require any capability breakthroughs beyond what already exists.

This is not cheerleading or dooming. I'm not here to tell you AI will save software or destroy it. The industrialization of software is a structural shift, like the ones that hit manufacturing, media, and logistics before it. Structural shifts create winners and losers. They reward people who understand the mechanism and punish people who pretend it isn't happening. I'm trying to describe the mechanism clearly enough that you can decide what to do about it.

How to read this

The book is organized in three acts. Act I diagnoses what AI is doing to software — the commoditization of features, the hidden complexity of scaling AI-assisted development, the shift in the human role, and the emergence of the factory model. Act II stress-tests the diagnosis against eight canonical strategy frameworks and finds that every one of them reaches the same conclusion. Act III turns prescriptive — what to do about it, and where the business model goes next.

The chapters build on each other. The defensibility argument in Chapter 1 sets up the team infrastructure argument in Chapter 2, which sets up the role shift in Chapter 3 and the factory model in Chapter 4. You can read them independently — they started as standalone articles, and each one still works on its own — but the cumulative case is stronger than any individual piece.

I've kept the evidence specific. When I cite Stripe merging a thousand agent-written pull requests per week, or OpenAI's three-engineer team producing a million lines of code in five months, those numbers come from public engineering disclosures that you can verify. When I describe architectural patterns, I'm drawing from seven organizations that built these systems independently and converged on the same design. The convergence matters more than any single data point.

One last thing. The thesis itself is an example of its own argument. These chapters are content — features, in the language of Chapter 1. What's harder to replicate is the analytical framework underneath them, the operational experience that shaped them, and the judgment about which evidence matters and why. I've tried to make that judgment visible throughout. You'll have to decide whether it holds up.