Owning the Context Layer

Where product value lives in the agent era

A few weeks ago I started using Granola differently. Granola is a Mac app that captures and summarizes meeting transcripts, using your microphone and calendar to handle notes automatically. It was free for a long time, so I used it for meetings and never felt much urgency to rethink that. Then they moved to paid plans, and I had to decide.

While I was thinking through it, I found myself talking into Granola from my kitchen while doing the dishes. No meeting. Just me working through an idea out loud, wanting something to hold it. That moment told me more than any feature comparison would have.

It also crystallized something I'd been circling around for a while about where product value lives in the agent era.

What agents can't access on their own

LLMs are trained on the internet. They're good at the things the internet already contains. But a lot of what drives outcomes at work is never written down. Decisions made in meetings, feedback from design reviews, the reasoning behind the call you made on Tuesday — none of that is online. It never was. It lives in moments that either get captured somewhere or disappear.

This is the gap agents can't close. You can point a capable reasoning model at your email, your calendar, your codebase. But until someone captures the meeting where your PM changed direction, or the customer call where the real objection surfaced, the agent is guessing at context that matters.

Products that own a context slice

Some products have built themselves around this problem. Granola owns meeting context. When it works well (and for me it does), capturing that context is close to frictionless. You show up, it records, you get a clean summary you can edit and link out from. I now use it in a slightly weirder way too, since apparently I give it voice memos while doing chores. Either way the mechanic holds, with low-friction capture and searchable output.

Readwise is a similar bet in a different category. It's a reading app that saves highlights, notes, and clippings from articles, books, and newsletters, keeping them indexed and surfaced over time. I've been a Readwise user for years. When LLMs started becoming more capable, I remember feeling quietly vindicated about having kept all that reading context in one place.

What triggered this article wasn't the reading features. Readwise recently announced a CLI built for agents and shipped an MCP server in beta.¹ MCP, or Model Context Protocol, is an open standard that lets AI tools pull context from external sources. That's a deliberate move toward portability. It makes Readwise's context easy to plug into whatever agent setup you're running. Smart read of where the value goes.
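Under the hood, MCP is a thin layer: clients and servers exchange JSON-RPC 2.0 messages, and an agent invokes a server's capabilities through methods like `tools/call`. A rough sketch of the message an agent might send to a Readwise-style context source (the tool name `search_highlights` and its arguments are hypothetical, for illustration, not Readwise's actual interface):

```python
import json

def mcp_request(method: str, params: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 message of the kind an MCP client sends."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Hypothetical: ask a context server to search saved highlights.
msg = mcp_request("tools/call", {
    "name": "search_highlights",
    "arguments": {"query": "context layer"},
})
print(msg)
```

The point of the standard is that this same handshake works against any compliant server, which is exactly what makes captured context portable across agent setups.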

Novel context

There's a specific type of context that makes this more interesting than just "have a searchable knowledge base." It's context that didn't exist before someone created it.

Consider meeting transcripts, design review feedback, a voice note from a walk where you worked something out, or the call where a customer told you what the product meant to them. None of these can be derived from what the model already knows. They're reactions to existing things, decisions made in specific moments, things that only exist because a person generated them at a particular point in time.

Agents can reason over your Notion docs, your email, your GitHub history. What they can't do is know what happened in the room. Products like Granola and Readwise capture the slice of your reality that wouldn't otherwise exist in any form an agent could access.

Portability

Owning context only creates durable value if the context stays accessible.

I've been building a few of my own tools in the same space. Margin, a reading and annotation app I've been working on, does some of what Readwise does. I also have a small clipping tool that functions like Readwise's save button. I built these because I wanted more control over how my reading context integrates with my personal agent setup.

But I'm not replacing Readwise. Probably won't. The history is too valuable, the product still works well for me, and a full rebuild isn't worth it. The new MCP announcement made me more likely to stay.

That dynamic holds as long as the friction stays reasonable. If Readwise tripled its price or locked down export in a way that made the context hard to get out, the calculus would shift. I'm already halfway to having built a replacement. The activation energy to finish it is not that high.

This is the tension products in this category have to manage. The people most likely to pay for a context-capturing product are often the people most capable of building an alternative if the deal sours. Portability is what makes the value durable rather than captive.

If you're building something here

Two questions worth sitting with:

How frictionless is the capture experience? Readwise's browser extension is fast. Granola just runs. Products that work in this space slot into a behavior that already exists, rather than asking users to form a new habit. The harder you make it to generate context, the less of it you'll have to offer.

How portable is the context you hold? MCP is becoming a real distribution channel for this. So is direct API access, native integrations, clean export. If users can route their context to their agent setup without fighting the product, they have less reason to build a replacement. That's a retention mechanism worth thinking about, separate from the product's core value prop.

The sweet spot varies. You still need the product to be worth using on its own terms. But keep friction low on both ends — capture and access — and the context keeps flowing, and the product keeps earning its place in the stack.

Part two

This is one part of a bigger dynamic I want to write about. The other side is what the AI companies themselves are doing and why there's gravity pulling them toward owning more of the context layer, rather than just building interfaces on top of it.

That's its own piece. But if owning context creates durable value in the agent era, the companies with the most to lose from a fractured context layer are the ones providing the models. The incentives pulling toward more vertical integration are strong.

More on that soon.

Footnotes

  1. Anthropic published the MCP specification in late 2024. Adoption spread quickly — most major AI coding tools added support within months. It's becoming the default handshake between agent environments and external context sources.