Context as Product

A builder's guide to owning a context layer in the agent era


I wrote last week about products that own specific context slices (Granola for meetings, Readwise for reading) and why that position creates durable value in the agent era. That piece was mostly observation: here's a pattern that exists, here's why it works, here's the portability tension.

This piece is the other half. If you're building something and wondering whether you're sitting on a context layer opportunity, or whether you're missing one, here's how to think through it.

What makes a context type worth owning

Not all context is equally valuable to capture. The types that matter most share a few properties.

They're novel. The context didn't exist before someone created it.1 A meeting transcript didn't exist before the meeting happened. A reading highlight didn't exist before someone stopped and underlined something. A design critique didn't exist before a reviewer left a comment. This is different from derivable context: facts about the world, historical records, publicly available information. Novel context is what your agent can't get anywhere else.

They're generated through existing behavior. The best context-capturing products don't ask users to do extra work. They slot into something the user was already doing. Granola runs in the background while you're in a meeting you were attending anyway. Readwise's browser extension captures highlights you were making anyway. The friction is close to zero because the collection happens as a byproduct of the real activity. Compare this to tools that require a separate logging workflow; they capture less, and what they capture is less accurate.

They compound over time. A single meeting transcript is useful. Two years of meeting transcripts, cross-referenced with project history and team decisions, is significantly more useful. The value of the context store grows faster than the number of entries. This is what makes the position defensible: a competitor can build a better capture tool, but they can't give you your history.

Rebuild threshold

The people most likely to pay for a context-capturing product are often the people most capable of building their own alternative. This creates a specific failure mode worth understanding.

When a product makes its context hard to get out (through restrictive export, high prices, closed APIs), it doesn't destroy the demand for that context. It changes the calculus on who supplies it. A developer who has been using Readwise for three years doesn't lose interest in their reading history when Readwise becomes expensive. They start thinking about what it would take to replicate the capture workflow themselves.

I've been through this. I built a small browser extension that functions like the Readwise save button. I have Margin, which I've been working on as a reading and annotation tool. If Readwise had moved in a direction that made the context inaccessible, I'd have the parts of a replacement within reach. The remaining work would be integration, not greenfield.

This is the rebuild threshold: the point at which the cost of self-supplying the context drops below the cost of staying. Products on the wrong side of it are one pricing change away from losing their most valuable users.

The threshold varies by context type. Meeting transcripts are harder to self-supply; running your own transcription infrastructure requires sustained engineering effort. Reading highlights are easier; a browser extension and a SQLite file gets you most of the way there. Code intelligence is harder still — building what GitHub Copilot or Cursor provides would require significant infrastructure investment.2 Design feedback is somewhere in the middle.

If you're building in this space, map where your context type falls. The harder it is to self-supply, the more latitude you have on portability. The easier it is, the more important it becomes that you make portability frictionless before users start calculating the cost of building their own.

Portability isn't a feature, it's retention

Most product thinking treats export and API access as late-stage features: something you add when customers ask for it, or when enterprise deals require it. In the context layer, portability is a retention mechanism that belongs on the roadmap from day one.

The reasoning: an agent that can access your context delivers meaningfully better results than one that can't. Users who have set up that integration have a reason to stay that goes beyond the product's own interface. They're not just using Readwise; they're using Readwise-as-a-context-source for everything they do with AI. That relationship is stickier than "I like the reading UI."

Readwise shipping an MCP server in beta is the right move.3 It makes their context accessible to any agent that speaks MCP, which is rapidly becoming the default integration layer for AI tooling. Granola's integrations into calendar and note-taking tools serve a similar function; the context flows out into the places where decisions get made.

The practical question for builders: what does the path from "user generates context in your product" to "agent can access that context" look like? If it requires engineering work on the user's side, you're adding friction to the most valuable part of the relationship. If it's one toggle, you've built a retention mechanism.
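As a thought experiment, the whole path from stored context to agent access can be as small as one endpoint. This is a toy stand-in for a real MCP server, not the protocol itself; the route, port, and in-memory store are all invented for illustration. What it shows is the shape: the agent sends a query, the product returns relevant context.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# In a real product this would query the user's context store;
# a hypothetical in-memory list stands in for it here.
CONTEXT = [
    {"source": "meeting", "text": "Decided to ship the beta without SSO."},
    {"source": "highlight", "text": "Agents are only as good as their context."},
]

def context_for_query(query: str) -> list[dict]:
    """The entire 'integration': match stored context against an agent's query."""
    terms = query.lower().split()
    return [c for c in CONTEXT if any(t in c["text"].lower() for t in terms)]

class ContextHandler(BaseHTTPRequestHandler):
    """GET /context?q=... returns matching entries as JSON."""
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query).get("q", [""])[0]
        body = json.dumps(context_for_query(q)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8765), ContextHandler).serve_forever()
```

A real implementation would speak MCP rather than ad-hoc HTTP, but the user-facing experience it should add up to is the same: one toggle, and the context is reachable.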

Where else this pattern applies

The Granola and Readwise examples are easy to see because they've built products explicitly around context. The pattern is wider.

Product decisions. Every product team generates context that agents can't derive: the rationale behind a decision, the tradeoffs that were rejected, the customer feedback that shaped a direction. This context lives in meeting notes, Loom recordings, Linear comments, Slack threads. It's scattered, unstructured, and mostly invisible to any agent trying to help.

Linear has made real progress on agent accessibility. Their API is clean, their data model is navigable, and their MCP integration makes project context queryable. An agent can trace issue history, connect decisions to outcomes, and understand what a team prioritized and why. That's a meaningful surface area of product reasoning becoming machine-readable.

Figma is the gap. The design review feedback that shapes a product direction often dies in Figma's comment threads without ever reaching a structured form.4 A comment saying "this button hierarchy feels off" contains real product context, but it carries no frame reference, no design version, and no connection to the decision it influenced. If you're building a design tool, structured capture of review context with queryable metadata is underexplored territory.
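To sketch what structured capture with queryable metadata could look like for design feedback (every field name here is a hypothetical, not Figma's comment API):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewComment:
    """A design comment carrying the metadata that comment threads usually drop."""
    text: str
    frame: str                  # which frame the comment points at
    design_version: str         # which version of the file it applied to
    author: str
    created: date
    decision_ids: list[str] = field(default_factory=list)  # decisions it influenced

def comments_about(comments: list[ReviewComment], frame: str) -> list[ReviewComment]:
    """The query an agent would run: the feedback history for one frame."""
    return [c for c in comments if c.frame == frame]
```

With this much structure, "this button hierarchy feels off" stops being an orphaned remark and becomes something an agent can connect to a frame, a version, and an eventual decision.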

There's also a tool-ownership wrinkle here. Company-sanctioned meeting tools (Zoom Notetaker, Teams transcription) and individual prosumer tools (Granola, personal voice notes) often capture similar context but land in different places. An employee's agent can access what they captured personally. The organization's records sit elsewhere, often inaccessible to either side. Products that bridge this split (making individually-captured context portable to team workflows, and vice versa) have a position that neither camp owns yet.

Sales and customer context. CRM tools capture interaction history, but the useful context (what the customer said they were trying to solve, what objection surfaced in the call, what the competitive concern was) often lives in free-text notes that aren't structured for agent access.

Gong and Chorus record calls and generate transcripts, which is an improvement. But transcripts are dense. What an agent needs is a distillation: the real objection, the stated use case, the moment where the conversation shifted. That synthesis rarely makes it into the CRM in a form an agent can reason over.

If you're building a sales tool, the context your users most need agents to access isn't pipeline status. It's the substance of customer conversations, structured enough to be useful when an account executive asks their agent to prep for a renewal call six months later.

Learning and development. Course completion and quiz scores are tracked by every LMS. The more useful context is what someone understood, where their mental model broke down, what they connected to something they already knew. This is the context that would make an AI tutor useful in ways that generic encouragement can't replicate.

Current tools track outputs (finished, passed, failed) rather than the signals that explain those outputs. A learner who passed a quiz by guessing has different context than one who worked through it deliberately. A learner who connected a concept to a real project they're running is more likely to apply it. Almost none of this gets captured.

If you're building a learning tool, comprehension signals are the underexplored context layer. Not completion metrics, but evidence of understanding. That's also what would let an AI tutor adapt in ways that feel personal rather than personalized-by-demographic.

In each case, the opportunity is the same: identify the context being generated by existing behavior, lower the friction to capture it, and make it accessible to the agent layer.

Two questions for builders

If you're evaluating whether your product is on the right side of this shift, the questions from last week's piece still apply. Here's how to make them more operational:

On capture friction. Walk the path from "user does the thing" to "context is captured." How many steps are there? Is the capture automatic, or does it require an intentional action? How often does the context fail to get captured because the UX asks for effort at the wrong moment? Every step in this path is a leak. Treat capture friction as a product defect.

On access friction. If a user wants to route your context into their agent setup today, what does that require? If the answer is "file an API access request and wait for a sales call," you have a problem. If the answer is "click this toggle," you have an asset. The goal is a state where the integration is so easy that users don't think twice about setting it up, and once it's set up, the cost of moving away from your product goes up.

Readwise's MCP server, once configured, just runs. Granola's integrations require nothing from the user after initial setup. That's the model: the context flows out without ongoing effort, and users stop thinking of portability as a feature they're grateful for and start thinking of it as table stakes they'd notice if it disappeared.

Keep friction low on both ends and the product earns its place in whatever stack the user builds around it. Raise it, and the most capable users start doing the math on whether it's worth rebuilding.

What comes next

There's a third piece of this I haven't gotten into: what happens when the model providers decide they want to own the context layer directly. The incentive pull toward vertical integration is real. The companies with the most to gain from context are the ones providing the models, and the economics of owning a proprietary context moat are obvious.

That's a different article. But the window for independent context products to establish their position may be shorter than it looks.

Footnotes

  1. "Novel context" describes information that didn't exist before a specific human action created it. Meeting transcripts, reading highlights, design critiques, voice notes — these are reactions to existing things, generated at a specific point in time. Agents can reason over published information, but they can't know what happened in the room.

  2. The rebuild cost calculation shifts fast. Two years ago, building your own code autocomplete required a full ML team. Today, wrapping a frontier model with the right context window gets you most of the way there. The threshold moves toward "easier to rebuild" as the underlying models improve and tooling matures.

  3. MCP (Model Context Protocol) is an open standard published by Anthropic in late 2024. It defines a common interface for AI tools to request context from external sources. Adoption spread quickly; most major coding assistants and agent frameworks added support within months of release. It's becoming the default handshake between agent environments and external context stores.

  4. Figma's comment API exists but is limited; comments don't include design context (which frame, which version, what changed), making them hard to query meaningfully. The design review feedback that shapes a product direction rarely surfaces in any structured form that an agent could access.