From Textbooks to Classrooms
Reframing “more content” into classroom outcomes
At a glance
- Role: Product Strategy Consultant (via Amdocs / Stellar Elements)
- Problem: A library request hid a bigger opportunity
- Solution: Teacher research + AI trust boundary model
- Impact: Strategy reframe + platform direction

TL;DR
The initial request was a familiar one: build a digital textbook library. But libraries don’t teach. Classrooms do.
In discovery, I found a larger opportunity: help teachers run class more effectively through an experiential platform, not a content repository. The critical constraint wasn’t technology. It was trust: teachers have hard boundaries on what they can delegate to AI.
I led research with teachers and administrators to map those trust boundaries and reframed the strategy into an AI-powered classroom platform direction, with explicit guidance on what should remain teacher-controlled versus what can be AI-assisted.
Impact:
- Strategic reframe from “library” to “classroom platform”.
- Trust boundary model for AI-human collaboration.¹
- Platform direction grounded in real teacher workflows.
Industry Primer
Education is a high-trust environment with real accountability:
- teachers are responsible for outcomes
- equity and integrity matter
- tools that undermine authority or add cognitive load get rejected
AI is powerful in education, but only when it respects the teacher’s role.
Context
The engagement started with a narrow request. That’s normal: teams ask for what they can name.
The job in discovery was to clarify what success actually looks like:
- where teachers lose time
- where students lose engagement
- where technology can support, not distract
Problem
“More content” wouldn’t change outcomes
A digital library can reduce friction in finding materials, but it doesn’t address the real classroom work: planning, pacing, differentiation, and feedback loops.
AI needed explicit boundaries
Teachers had strong instincts about what they would never delegate:
- sensitive feedback moments
- integrity-critical evaluation
- classroom authority decisions
Without boundaries, AI becomes a risk, not an assistant.
Solution
Research teacher workflows, not feature wishes
I conducted interviews across roles (teachers, principals, curriculum directors) to map:
- recurring classroom moments that drive workload and stress
- where teachers want automation vs assistance vs full control
- what “good support” looks like under real constraints
Make trust boundaries explicit
We defined two categories:
- Teacher-controlled: integrity and authority moments
- AI-assisted: planning variations, summarization, adapting materials
Reframe into a platform strategy
The strategy shifted from storing content to enabling experiences: how teachers plan, teach, adapt, and reflect.
Results
The core outcome was strategic clarity:
- a platform direction leadership could align around
- a shared model for AI-human collaboration
- a clearer definition of what not to automate
What I'd Do Differently
I would prototype one narrow “classroom moment” earlier, end-to-end. In education, demonstration beats argument: a single workflow that saves time without eroding trust can align stakeholders faster than any deck.
Collaborators
I partnered with education stakeholders and internal delivery teams to translate teacher reality into a coherent platform strategy and AI collaboration model.
Footnotes
1. A trust boundary is the line between what a user is willing to delegate to a system and what they must control. Mapping these boundaries early prevents building “powerful” features that users refuse to adopt.