Eight frameworks, one conclusion

Chapter 5

18 min read

The previous chapters described what's happening: features are commoditizing, the factory model is replacing traditional development, the human role is compressing to judgment. This chapter asks whether any of that holds up under scrutiny.

Hamilton Helmer spent a career cataloging what makes companies durable. He identified seven sources of power. I ran the software industrialization thesis against all seven, expecting to find that most of them crumble when AI collapses the cost of building software. Five of them survive. Two weaken. And the two that weaken are exactly where most product teams spend their roadmap.

That gap between where defensibility lives and where companies invest is the core of what I've been writing about for the past several months. But saying "features are commoditizing" is an assertion. Running it through Helmer, Porter, Wardley, Christensen, Thompson, Perez, McGrath, and Grove is a stress test. Eight canonical strategy frameworks, built over decades by people who'd never heard of background agents or CLAUDE.md files. If the thesis only survived one of them, you could chalk it up to cherry-picking. It survived all eight.

Here's what that means.

The convergence

Each framework approaches the same question from a different angle. Thompson's Aggregation Theory explains how value migrates when supply gets commoditized. Wardley Mapping tracks how components evolve from novel to commodity along a predictable axis. Christensen's Disruption Theory describes what happens when a new production method enters an industry. Perez's Technological Revolutions places the current moment in a 250-year pattern of how economies absorb radical technological change. Porter's Five Forces maps the structural pressures reshaping industry profitability. Helmer's 7 Powers catalogs what makes a moat a moat. McGrath's Transient Advantage argues that durable moats are the exception, not the rule. Grove's Strategic Inflection Points identifies the signals that distinguish an incremental shift from a structural one.

Different thinkers, different decades, different intellectual traditions. They converge on the same conclusion: when the cost of building software approaches the cost of describing it, feature-based competition becomes structurally unwinnable. Not "harder" or "less effective." Structurally unwinnable. The way competing on typesetting speed became structurally unwinnable after desktop publishing.

What Wardley sees

The most direct connection is Wardley Mapping, because the thesis is a Wardley argument whether or not you draw the map.

Wardley's central claim is that all components in a value chain evolve through four stages: Genesis (novel, hand-built), Custom Built (growing, expensive), Product (stable, multi-provider), and Commodity (ubiquitous, utility). Supply and demand competition pushes everything rightward. You can slow the movement. You can't stop it.

Software features have been moving rightward for years. What AI did was stomp on the accelerator. When Stripe merges a thousand agent-written PRs a week, features aren't in the Product stage anymore. They're approaching Commodity. The cost structure collapsed, the replication timeline compressed from quarters to hours, and the differentiation window is closing behind it.

Wardley also identifies a pattern called "efficiency enables innovation": when a component commoditizes, it creates a platform for new higher-order activity. Commodity computing enabled SaaS. Commodity SaaS is now enabling something above it. The thesis calls that something "factory infrastructure" -- the orchestration layers, verification systems, and context engineering that direct agents toward the right problems. In Wardley's terms, factory infrastructure sits at the Genesis end of the evolution axis while feature production slides toward Commodity. The value moved up the stack.

If you drew the map -- features, code generation, orchestration, verification, context engineering, judgment, each placed on the evolution axis -- you'd see all five predictions as one coherent rightward movement. That map might be the single most useful visual artifact for the thesis, because it replaces assertion with spatial logic. Where a verbal argument can feel slippery ("everything commoditizes eventually, so what?"), a Wardley map pins each component to a specific position and forces you to confront the relationships between them. Factory infrastructure depends on commoditized code generation the same way SaaS depended on commoditized cloud compute. The dependency chain makes the claim falsifiable: if code generation stalls at the Product stage instead of reaching Commodity, the factory model doesn't fully arrive. But everything we can observe -- pricing trends, capability curves, competitive behavior among model providers -- points rightward.
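For readers who want to sketch that map themselves, here is a minimal toy version in Python. The stage placements are my own illustrative guesses at what the chapter describes, not canonical positions from Wardley or from earlier chapters:

```python
# Toy Wardley-style placement of the thesis's components on the
# evolution axis: 1 = Genesis, 2 = Custom Built, 3 = Product, 4 = Commodity.
# Positions are illustrative guesses, not measurements.
EVOLUTION_STAGES = {1: "Genesis", 2: "Custom Built", 3: "Product", 4: "Commodity"}

components = {
    "judgment":            1,
    "context engineering": 1,
    "verification":        2,
    "orchestration":       2,
    "code generation":     3,   # sliding toward Commodity
    "features":            4,
}

# Print the map left-to-right: higher-order activities (Genesis) sit on
# top of the more-commoditized components they depend on.
for name, stage in sorted(components.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} {EVOLUTION_STAGES[stage]}")
```

A real Wardley map adds a second axis (visibility to the user) and dependency links, but even this one-axis listing makes the rightward drift concrete: everything below "features" is what the chapter calls factory infrastructure.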

What Helmer reveals

Helmer's 7 Powers is a taxonomy of defensibility. Each power has a benefit (how it creates superior economics) and a barrier (what prevents competitors from neutralizing it). Walk all seven against the thesis and the picture is specific:

Scale Economies collapse. When a three-person team with factory infrastructure matches the output of fifty engineers, volume-based cost advantages in development evaporate. The barrier side breaks too -- the factory blueprint is open-source.

Network Economies survive intact. You can't point a factory at a social graph. Spotify's 600 million listening histories, Meta's two billion daily active users -- these compound in ways code replication can't touch.

Counter-Positioning becomes the critical transitional power. AI-native companies building with factory infrastructure while incumbents optimize per-seat SaaS models -- that's textbook Helmer. The incumbent can't adopt the factory model without cannibalizing seat-based revenue. But the window is narrow. Once the factory blueprint commoditizes, counter-positioning evaporates because everyone can adopt the model.

Switching Costs bifurcate. Deep enterprise integration survives. Ripping out Salesforce costs more in organizational disruption than any AI-native replacement could save. But surface-level product switching costs vanish when an agent can rebuild an app's functionality in a weekend.

Branding weakens at the feature layer. You can't build brand loyalty to a feature that six competitors clone by Friday. Branding survives at the entity level -- trust in Apple's privacy posture, Stripe's developer experience philosophy -- but it detaches from features and reattaches to values.

Cornered Resource stays potent. Proprietary data, regulatory positions, physical infrastructure. Google's search ranking corpus. TSMC's fabs. The defensibility matrix maps cleanly here.

Process Power gets interesting. The factory makes process explicit -- CLAUDE.md files, verification gates, orchestration specs. In theory, this should weaken Process Power by making it copyable. In practice, the organizations running sophisticated factories first are accumulating operational knowledge that the open-source spec doesn't transfer. The blueprint is commodity. The judgment about how to run it is not.

Five of seven powers survive or strengthen when production costs collapse. Most companies invest their roadmaps in the two that weaken. Helmer gives that strategic error a taxonomy. The thesis gives it a mechanism.
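The seven verdicts above collapse into a small checklist. This sketch simply restates the chapter's own readings in data form; the labels are this analysis's verdicts, not anything from Helmer's book:

```python
# Helmer's 7 Powers, each tagged with this chapter's verdict on what
# happens when software production costs collapse.
helmer_verdicts = {
    "Scale Economies":     "weakens",    # volume advantages in development evaporate
    "Network Economies":   "survives",   # can't point a factory at a social graph
    "Counter-Positioning": "survives",   # potent but transitional; the window is narrow
    "Switching Costs":     "survives",   # deep integration holds, surface costs vanish
    "Branding":            "weakens",    # detaches from features, reattaches to values
    "Cornered Resource":   "survives",   # data, regulatory positions, physical assets
    "Process Power":       "survives",   # the judgment about running the factory
}

surviving = [p for p, v in helmer_verdicts.items() if v == "survives"]
weakening = [p for p, v in helmer_verdicts.items() if v == "weakens"]
print(f"{len(surviving)} survive, {len(weakening)} weaken")
```

Running the audit on your own company means asking, for each key on the left, whether you actually hold that power -- not whether your roadmap assumes you do.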

Where Christensen breaks

Not everything maps cleanly. Disruption Theory is the framework with the most productive tension, and I think being honest about where a framework doesn't fit matters more than forcing a clean narrative.

Christensen's core insight is that incumbents fail because a cheaper, simpler product enters at the low end of the market and improves along an S-curve until it's good enough for the mainstream. The incumbent ignores the threat because the disruptor's initial market is unattractive.

Background agents don't follow this path. They didn't enter at the low end with an inferior product for underserved customers. Copilot shipped inside VS Code. Claude Code runs in the same terminal senior engineers use. The entry point was the mainstream market, not a segment below it. There's no period of rational incumbent blindness -- everyone can see what's happening.

What's actually happening is incumbent paralysis, not incumbent blindness. Organizations see the shift and can't reorganize fast enough. Their hiring, incentives, culture, and career ladders assume features are hard to build. The value network is optimized for a world that's disappearing. That part of Christensen maps perfectly. The market-entry mechanics don't.

The zero-marginal-cost dynamic also breaks the framework's economic logic. Classic disruption assumes the entrant has a different cost structure that lets it profit in markets the incumbent finds unattractive. Background agents don't have a "different cost structure." They approach zero marginal cost for production. When building a feature costs about as much as describing it, the supply-and-demand segmentation that disruption theory models stops applying. There's no price gradient left to exploit -- no low-end market where the disruptor can grow undisturbed, because the cost floor for everyone dropped to roughly the same level simultaneously.

Where Christensen still cuts deep is the value network concept. Companies measuring feature velocity, sprint throughput, and DORA metrics while the competitive axis shifts to non-code defensibility. That's measuring success by the criteria of the world being displaced. Every organization celebrating its shipping speed while ignoring its moat position is living the innovator's dilemma in real time. They just got there through a different door than Christensen's model predicted.

The turning point

Carlota Perez's framework provides historical grounding that the other frameworks can't. She identified five great technological revolutions since the 1770s, and each one followed the same two-act structure. Installation: financial capital floods in, speculative infrastructure gets overbuilt, a bubble forms and pops. Deployment: production capital takes over, the infrastructure becomes cheap and ubiquitous, and the economy reorganizes around the new paradigm.

Between the two acts sits a turning point. Financial crash, institutional recomposition, and the beginning of broad-based adoption.

The thesis prediction that the factory blueprint commoditizes maps directly to Perez's turning point. When Symphony and Paperclip make the orchestration layer an npx command, that's the moment infrastructure built during the frenzy becomes cheap enough for production capital to absorb. The parallel to overbuilt railway track is structural: speculative investment in AI (the models, the tooling, the orchestration experiments) is creating the substrate on which a reorganized software economy will run.

Perez also raises a question the thesis currently underplays: is AI-driven software industrialization a new revolution, or the deployment phase of the ICT revolution that began in 1971? If it's the latter, the implication shifts. The dot-com crash was the frenzy peak. The current AI buildout is the beginning of deployment, where fifty-five years of installation infrastructure finally reorganizes the real economy. And if that's right, the venture-funded AI startup model is an installation-era artifact. The winners in deployment eras aren't the startups -- they're the incumbents who integrate the now-cheap infrastructure into existing operations.

Either way, Perez adds something the other frameworks miss: the institutional dimension. The thesis describes the factory. But factories need labor relations, quality standards, liability frameworks, and governance structures. If agents write the code, who's liable for defects? How do employment contracts change? What happens to open-source norms when a factory can consume and replicate any public codebase in hours? Perez's framework insists that deployment requires institutional recomposition, and that this takes a decade even after the technology is ready.

The value chain inverts

Porter's Five Forces and Value Chain analysis produced the most operationally useful insight.

In the traditional software value chain, operations -- writing code -- is the dominant primary activity. It absorbs the most resources and creates the most differentiation. The factory model demotes it to commodity logistics. What were support activities become primary. Context engineering moves to the center of value creation. Verification design becomes the activity that separates defensible output from copyable output. Human resource management transforms from "hire engineers who write code" to "hire people who exercise judgment about what agents should build and whether the output is good enough."

This inversion is a diagnostic. Look at where a company invests. If they're optimizing developer velocity and code output, they're optimizing an activity that's already sliding toward commodity. If they're investing in context engineering, verification infrastructure, and judgment capacity, they're building the new primary activities. The first group is running a better factory for the wrong product. The second group is designing the factory itself.

Porter's Five Forces are all moving at once. Barriers to entry are collapsing (a motivated engineer with a weekend). Buyer power is increasing (your customer's alternative is now "build it ourselves with the same tools"). The threat of substitutes has a new category -- the customer builds their own replacement. Supplier power is concentrating upstream in the model providers (few alternatives, high switching costs, no credible backward integration). Rivalry is reshaping around who can direct the factory at the right problems fastest, not who can build faster.

The supplier power concentration is probably the most underappreciated force in the set. When three or four model providers control the substrate on which every software factory runs, and switching between them requires re-engineering your context layer, those providers hold structural power that the rest of the industry hasn't fully priced in. Porter would recognize the shape immediately: a concentrated supplier base selling to a fragmented buyer market with high switching costs and no credible threat of backward integration. The model providers are the new Intel, and most of the software industry is in the position of PC OEMs circa 1995 -- dependent on a component they can't build themselves.

The acceleration and the transient edge

Two frameworks deserve a closer look before closing: Grove's Strategic Inflection Points and McGrath's Transient Advantage. They address the same question from opposite sides -- Grove asks how to detect the shift, McGrath asks what happens after you've detected it.

Grove's 10X force test is the simplest diagnostic in the set. When a single factor in your competitive environment changes by an order of magnitude, you're at a strategic inflection point. The cost of producing a working software feature has dropped by more than 10X for organizations that have adopted factory infrastructure. By Grove's own standard, this qualifies. But Grove also assumes that navigating the inflection point creates a new period of stability -- that you cross the valley, find solid ground on the other side, and build from there.
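Grove's test is simple enough to write down. A sketch with invented numbers -- the 10X cost drop is the chapter's claim, but the specific dollar figures below are made up for illustration:

```python
def is_strategic_inflection(old_value: float, new_value: float,
                            factor: float = 10.0) -> bool:
    """Grove-style 10X test: has a single competitive factor changed
    by an order of magnitude, in either direction?"""
    if old_value <= 0 or new_value <= 0:
        raise ValueError("values must be positive")
    ratio = max(old_value, new_value) / min(old_value, new_value)
    return ratio >= factor

# Hypothetical cost of producing one working feature, before and after
# factory infrastructure. These figures are illustrative only.
print(is_strategic_inflection(50_000, 3_000))   # roughly 16.7x change
print(is_strategic_inflection(50_000, 40_000))  # roughly 1.25x change
```

The test is deliberately crude: it doesn't tell you what to do, only that the ground has moved far enough that incremental responses stop working.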

The thesis suggests something less comfortable. The factory blueprint commoditizes fast enough that there may not be a stable post-inflection position. First movers get a temporary edge, then the infrastructure becomes available to everyone, and the advantage dissipates. This is where McGrath picks up what Grove puts down. Her framework was built for exactly this pattern: there is no stable endpoint, only a portfolio of advantages managed through continuous reconfiguration. The companies that survive aren't the ones that find the right position and defend it. They're the ones that treat every position as temporary and invest in the organizational capacity to move.

McGrath's framework also explains something the other seven don't: why smart companies fail to respond even when they can see the shift coming. It's not ignorance. It's that their exploitation machinery -- the teams, processes, metrics, and incentive structures optimized for the current advantage -- actively resists the transition. The organization is optimized for where value was, not where value is going. Grove tells you to detect the inflection point. McGrath tells you that detection is the easy part.

What's missing from the thesis

Running eight frameworks against the predictions didn't just validate them. It surfaced three gaps -- places where the thesis, as stated in the previous chapters, leaves important ground uncovered.

Process Power needs to be named. The defensibility matrix identifies four durable categories: networks, data flywheels, relationships, and physical infrastructure. But the framework analyses revealed a fifth: the accumulated organizational knowledge of how to run the factory well. Stripe's operational sophistication isn't captured in the open-source spec. This is Helmer's Process Power applied to the production system itself, and it deserves explicit recognition even though it's more fragile than the other four.

Why more fragile? Because Process Power in traditional industries -- Toyota's production system, TSMC's yield engineering -- is embedded in decades of tacit knowledge, supplier relationships, and organizational muscle memory. Factory-era Process Power is younger and more legible. The specs are written down. The orchestration patterns are open-source. What isn't transferable is the judgment layer: knowing which verification gates matter for your domain, knowing when to override the factory's defaults, knowing how to structure context so agents produce output that fits your product's specific quality bar. That judgment accumulates through operation, not through reading the blueprint. But it accumulates faster than traditional Process Power did, and it's probably easier to replicate. Calling it a moat is generous. Calling it a transitional advantage is more honest -- significant during the period when most organizations haven't built factories at all, diminishing as factory operation becomes a standard organizational competency.

Healthy disengagement is the missing prescription. The thesis tells companies what to do: invest in non-code moats. It doesn't address why they can't. Rita McGrath's Transient Advantage framework names the organizational problem -- teams, incentives, roadmaps, and identities are organized around feature delivery. Letting go of that is harder than adopting the factory model.

The diagnosis is "features commoditize." The prescription is "invest elsewhere." The bridge between them -- the organizational discipline of recognizing when a position is eroding and redeploying resources before it becomes a liability -- is what McGrath calls healthy disengagement, and the thesis needs it.

This isn't just a strategy problem. It's an identity problem. Product managers who've built careers on shipping features, engineers whose status comes from code contributions, design teams whose value is measured in screens shipped -- all of them face a world where those activities generate less defensible value every quarter. The organizational challenge isn't knowing what to do. It's letting go of what you were.

McGrath argues that companies capable of healthy disengagement share a common trait: they treat advantages as having lifecycles, not as permanent positions. They budget for exit from current activities the way they budget for entry into new ones. Most software companies, built during an era when features were hard and shipping speed was a competitive advantage, have no muscle for this. They'll need to develop it, and the ones that develop it first will reallocate resources to non-code moats while their competitors are still optimizing sprint velocity.

The speed of evolution may break traditional strategy timing. Wardley's evolution axis unfolds over decades. McGrath's advantage lifecycles are measured in years. Grove's strategic inflection points give companies months to respond. The thesis describes feature commoditization happening in days. If evolution speed is itself accelerating, the window for strategic response may be too narrow for the deliberate processes these frameworks prescribe.

This is probably the most uncomfortable implication of the analysis. Every framework here assumes that organizations have some window -- however narrow -- to detect a shift, deliberate about it, and respond. Grove gives you months. McGrath gives you quarters. Wardley gives you years to watch a component slide rightward before it reaches commodity. But when a competitor can replicate your latest feature release over a weekend using the same tools available to everyone, the detection-to-response window collapses toward zero. The frameworks still describe the dynamics correctly -- they just may not leave enough time for the responses they prescribe. Strategy itself might need to operate at a different clock speed, closer to operational tempo than annual planning cycles. None of the eight frameworks fully accounts for this, and I'm not sure anyone has a good answer yet.

So what

The convergence across eight frameworks eliminates one category of objection entirely. "Features are commoditizing" is not an assertion anymore. It's a structural claim supported by demand-side economics (Thompson), evolutionary dynamics (Wardley), moat taxonomy (Helmer), production method analysis (Christensen), historical pattern matching (Perez), industry structure analysis (Porter), advantage lifecycle theory (McGrath), and inflection point detection (Grove).

The strategic question sharpens: what do you have that can't be rebuilt in a quarter?

If you're a product leader, the audit is specific. Map your assets to Helmer's seven powers -- which ones do you actually have? Run your value chain through Porter's inversion -- are your investments in the old primary activities or the new ones? Place your key components on Wardley's evolution axis -- are you optimizing something that's already moved to commodity?

If you're an individual contributor, the question is where your judgment creates the most value. The factory doesn't eliminate human work. It compresses it to the parts machines can't do yet: problem selection, constraint definition, drift correction, and taste. The engineers thriving in this model think about the production system, not the code.

If you're a founder, the counter-positioning window is real but closing. The moment the factory blueprint commoditizes -- and it's commoditizing now -- the advantage of having built one evaporates. What remains is the non-code moat. Networks, data, relationships, operational knowledge, atoms. Everything that isn't a prompt.

Eight frameworks built over six decades by people who studied railroads, steel mills, semiconductor fabs, and internet platforms. All of them looking at what AI is doing to software and reaching the same conclusion. Features are the least defensible investment a software company can make. The interesting question is what you're going to do about it.