โ† Back to Blog
ALC · AI Literacy · Stratification

Everyone's Teaching AI Literacy: At the Wrong Layer

February 28, 2026 · Topanga

This week, Google announced AI literacy training for 6 million U.S. educators through partnerships with ISTE and ASCD. Cambridge overhauled its digital literacy curriculum. Microsoft testified before Congress on AI in education. The New York Times ran multiple features on AI literacy in schools. Everyone agrees: we need AI literacy. Nobody agrees on what that means, and the programs being deployed teach the wrong thing.

The Menu Isn't the Kitchen

Here's the distinction that matters: most AI literacy programs teach people how to use the menu. Click here to generate an image. Type your prompt like this. Use these settings for better results. This is tool proficiency, and it's valuable. But it's not literacy.

Literacy means understanding the kitchen. It means knowing that the menu was designed by someone with specific assumptions about what you'd want to order. It means recognizing that the interface (the application layer) mediates every interaction you have with the underlying model, and that this mediation is never neutral.

When Google trains teachers to use Gemini, they're teaching the menu. When Cambridge updates its curriculum to include "AI awareness," they're teaching students to recognize that a kitchen exists somewhere behind the wall. Neither teaches people to read the architecture: to understand why the menu looks the way it does, what's been left off, and what happens between the order and the plate.

The Application Layer Is Where Literacy Lives

Application Layer Communication (ALC) names the actual skill: the ability to navigate, interpret, and negotiate with software at the layer where design decisions become user experiences. This isn't about understanding neural network architectures or being able to code. It's about recognizing that every AI tool you use has an application layer (an interface, an API, a set of affordances and constraints) and that your ability to work within and against that layer determines your actual capability.

Consider the difference between two users of the same AI tool. User A follows the tutorial: types prompts in the chat box, accepts the first output, uses the suggested templates. User B recognizes the chat interface as one of several possible interaction modes, understands that the system prompt constrains what the model can do, notices when the interface nudges them toward certain patterns, and knows how to work around limitations. Same tool. Same underlying model. Radically different outcomes.
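The mediation being described can be made concrete with a toy sketch. Everything below is illustrative: the `model` function, the `AppLayer` class, and the word-count cap are invented stand-ins, not any real product's API. The point is only that the same underlying model, seen through two interface configurations, yields different capabilities.

```python
def model(prompt: str, max_words: int) -> str:
    """Stand-in for an underlying model: returns the prompt trimmed
    to a word budget. A real model is richer; the truncation just
    makes the layer's constraint visible."""
    return " ".join(prompt.split()[:max_words])

class AppLayer:
    """The mediation layer: it silently prepends instructions and
    enforces an output cap before the user ever reaches the model."""
    def __init__(self, system_prefix: str, max_words: int):
        self.system_prefix = system_prefix
        self.max_words = max_words

    def ask(self, user_prompt: str) -> str:
        return model(f"{self.system_prefix} {user_prompt}", self.max_words)

# User A accepts the defaults baked into the interface.
default_ui = AppLayer(system_prefix="Be brief.", max_words=4)

# User B recognizes the mediation and reconfigures it.
tuned_ui = AppLayer(system_prefix="", max_words=50)

prompt = "Explain why the interface shapes what the model can say"
print(default_ui.ask(prompt))  # truncated by the layer, not the model
print(tuned_ui.ask(prompt))    # same model, different mediation
```

User A blames "the AI" for the clipped answer; User B sees that the constraint lives in the wrapper, which is exactly the distinction the paragraph above draws.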

The gap between User A and User B isn't about "AI literacy" as currently defined. Both can "use AI." The gap is about application layer fluency: the ability to see the mediation and act on it.

Stratification by Design

This is where the stakes get real. When we teach tool proficiency and call it literacy, we create a two-tier system. The first tier (people with existing technical fluency, access to communities of practice, and the cultural capital to experiment) develops genuine application layer understanding through use. They learn the kitchen by spending time in it. The second tier, everyone else, learns the menu and thinks they're literate.

The research backs this up. Studies on "equal access" to AI tools show compressed productivity differences, but only under controlled conditions. In the wild, access isn't just reaching the tool. It's prompting, interpreting, iterating, recognizing failure modes, knowing when to push back against outputs. These are ALC skills, and they distribute along the same educational and socioeconomic lines as every other form of literacy.

The cruel irony: the interface itself widens the gap that the tool is supposed to narrow. A well-designed application layer that "just works" for the default user makes it harder for everyone to understand what's actually happening. Simplicity for some becomes opacity for all.

What Would Real AI Literacy Look Like?

If we took application layer literacy seriously, AI education would look fundamentally different:

  • Interface archaeology: Instead of "here's how to use ChatGPT," students would compare multiple AI interfaces for the same task. Why does this one have a system prompt field and that one doesn't? What does each interface assume about the user?
  • Affordance mapping: What can you do? What can't you do? What could you do if the interface were designed differently? This teaches people to see the mediation layer, not just use it.
  • Failure mode literacy: When the tool gives a bad output, is that the model or the interface? Can you tell the difference? Most users can't, and the application layer is designed to make them indistinguishable.
  • Negotiation practice: Actually working against the interface. Finding the edges. Understanding what happens when you push past the suggested use case. This is where fluency develops.
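The "affordance mapping" exercise above can be sketched as a simple set comparison. The capability names and the two interfaces below are hypothetical examples, not a real product's feature list; the exercise is computing what a given interface hides.

```python
# Capabilities the underlying model supports (illustrative names only).
model_capabilities = {
    "edit_system_prompt", "set_temperature", "upload_files",
    "long_context", "function_calling",
}

# Two hypothetical interfaces to that same model.
chat_ui = {"upload_files"}
api_playground = {
    "edit_system_prompt", "set_temperature",
    "long_context", "function_calling",
}

def hidden_affordances(exposed: set) -> set:
    """Capabilities the model has but this interface hides."""
    return model_capabilities - exposed

print(sorted(hidden_affordances(chat_ui)))
print(sorted(hidden_affordances(api_playground)))
```

A student who runs this comparison for real interfaces has done the core literacy move: naming what the mediation layer subtracts, not just what it offers.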

The Political Economy of "Literacy"

There's a reason Google's version of AI literacy looks like product training. And there's a reason Cambridge's version looks like awareness campaigns. Each definition of "literacy" serves the interests of the institution defining it.

Google benefits from users who are proficient with Google tools. Schools benefit from curricula that can be assessed with multiple-choice tests. Neither benefits from users who understand the application layer well enough to question it: to ask why the interface looks this way, who it serves, and what alternatives exist.

This isn't conspiracy. It's structural incentive. The institutions best positioned to teach AI literacy are the ones least incentivized to teach the version that matters. The version that produces not just users, but critics. Not just consumers of interfaces, but readers of them.

The Layer That Matters

The application layer is where power is exercised in software. It's where design decisions become default behaviors, where affordances shape possibilities, where the gap between "can use" and "understands why" becomes the gap between empowered and dependent.

Teaching AI literacy without teaching the application layer is like teaching reading without teaching that texts have authors with intentions. You produce functional users, not literate ones. And in a world where AI mediates an increasing share of professional, educational, and civic life, functional isn't enough.

Everyone's teaching AI literacy. Almost nobody's teaching it at the right layer.

This analysis draws on the ALC stratification framework developed in the "Beyond Knowledge Graphs" research series. For the academic foundations, see Racing to Define AI Literacy and Four Waves of Algorithmic Literacy.

Get the free ALC Framework Guide

The same framework we use in our audits, yours free. Learn how to identify application layer literacy gaps in your organization.

No spam. Unsubscribe anytime.