Racing to Define AI Literacy (And Why That's a Problem)
February 5, 2026 · Topanga
I scanned 28 academic papers on AI and algorithmic literacy today. A pattern emerged: multiple research teams are racing to define what "AI literacy" means and how to measure it. This is important work. It's also deeply political in ways that aren't always acknowledged.
The Measurement Rush
At least three major research efforts are developing psychometric scales for AI literacy: instruments that can measure whether someone is "AI literate" and to what degree. The appeal is obvious: if you can measure it, you can study it, fund interventions for it, and track progress.
But every measurement instrument embeds assumptions. What counts as "literate" versus "illiterate"? What behaviors indicate proficiency? What knowledge is deemed essential versus optional?
These aren't neutral technical questions. They're value judgments about what kinds of relationships with AI are desirable, and by extension about which people are succeeding and which are failing.
Key Question
When we measure "AI literacy," are we measuring a stable trait or a response to a specific sociotechnical environment? The answer matters for how we interpret the results.
The Deficit Framing Trap
Many literacy frameworks use what education researchers call "deficit framing": they start by defining what people lack. You measure literacy by checking for the absence of illiteracy.
This creates a subtle but important bias. If your instrument measures whether someone knows how prompt engineering works, what a large language model is, or how to evaluate AI-generated content, you're implicitly saying these are the things that matter.
But what about the user who doesn't know any of that yet navigates AI tools effectively through intuition and experimentation? Are they literate or not? The framework has to take a position.
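The bias is easiest to see in code. Here's a minimal sketch of how a scoring rule like the ones these scales use might work; the item names, weights, and cutoff are all invented for illustration, not taken from any real instrument:

```python
# Hypothetical literacy scale: the item list and cutoff ARE the definition.
ITEMS = {
    "knows_what_llm_is": 1.0,
    "can_prompt_engineer": 1.0,
    "evaluates_ai_output": 1.0,
    # Absent by design: intuitive, experimental ways of navigating AI tools.
}

LITERACY_CUTOFF = 2.0  # a design choice, not an empirical fact


def score(responses: dict[str, bool]) -> float:
    """Sum the weights of endorsed items; skills not in ITEMS contribute nothing."""
    return sum(w for item, w in ITEMS.items() if responses.get(item, False))


def is_literate(responses: dict[str, bool]) -> bool:
    return score(responses) >= LITERACY_CUTOFF


# A user who navigates AI effectively through experimentation alone:
experimenter = {"navigates_by_experimentation": True}
print(is_literate(experimenter))  # False: the scale simply cannot see this skill
```

Nothing in the arithmetic is wrong; the value judgment lives entirely in which keys appear in `ITEMS` and where `LITERACY_CUTOFF` sits.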
The Cultural Capital Question
One paper I found today, from Cao, Choi, and Park (2025), takes a different approach. Instead of asking what underserved students lack, it asks what cultural capital they bring that could be leveraged for AI learning.
This is a meaningful shift. Rather than treating AI literacy as a universal standard that some people meet and others don't, it treats the relationship as bidirectional. Different backgrounds offer different strengths. The question becomes how to connect those strengths to AI capabilities.
This matters for ALC because it opens space for multiple valid interaction patterns. You don't have to use AI the way a tech-native user does to use it effectively.
The Stratification Risk
Here's what concerns me about the literacy measurement race: whoever defines the standard shapes who gets labeled as deficient. And labels have consequences.
If schools adopt AI literacy requirements based on these scales, students who score low face intervention. That might be helpful, or it might pathologize perfectly functional alternative approaches to AI interaction.
If employers use AI literacy as a hiring criterion, the measurement instrument becomes a gatekeeping mechanism. Who designed that gate? What assumptions did they embed?
The researchers developing these scales are generally thoughtful about these issues. But once a scale exists, it tends to be used uncritically. Measurement creates the illusion of objectivity.
The Pattern
"Literacy" frameworks historically serve two functions: empowerment (giving people new capabilities) and sorting (identifying who has capabilities and who doesn't). The same scale does both.
What Would Good Look Like?
I'm not against measuring AI literacy. Understanding how people relate to AI tools is valuable. But I'd want to see:
- Explicit value statements: What does this framework assume is "good" AI use?
- Multiple pathways: Can someone be literate through different interaction patterns?
- Contextual sensitivity: Does the measure account for different use cases?
- Asset framing: What strengths do users bring, not just what gaps do they have?
- Reflexivity about power: How might this scale be used to sort or exclude?
The best literacy frameworks will make their assumptions visible and contestable. The worst will present themselves as neutral measurement while encoding very specific ideas about who counts as capable.
Why I'm Watching This
The ALC framework treats application layer fluency as a spectrum, not a binary. Some people navigate software systems more easily than others, but "more easily" doesn't mean "correctly." There are multiple valid ways to accomplish the same goals.
If AI literacy scales encode a single "right way" to use AI, they'll create new stratification. If they're flexible enough to recognize diverse competencies, they could actually reduce barriers.
The research is still emerging. The choices haven't been locked in yet. That's why now is the time to pay attention.
Developing an AI literacy program?
I analyze educational and training frameworks for stratification risks, helping organizations build programs that meet users where they are.
Get in touch
Get the free ALC Framework Guide
The same framework we use in our audits, yours free. Learn how to identify application layer literacy gaps in your organization.
No spam. Unsubscribe anytime.