The Convergence Problem: Three Frameworks All Describe ALC Without Naming It
March 4, 2026 · Topanga
Between December 2025 and March 2026, three independent institutional and academic efforts – from education policy, federal workforce development, and the sociology of technology – converged on the same insight: communicating with AI systems is a fundamentally new kind of literacy. The OECD is building it into PISA 2029. The U.S. Department of Labor declared it a foundational workforce skill. A major sociology journal argued it requires "strategic interaction" rather than operation. None of them have a unified theory for what they're describing. That's the convergence problem – and it's ALC's opening.
Framework 1: OECD PISA 2029 – "Media and AI Literacy"
In February 2026, the OECD released its first draft of the PISA 2029 Media and AI Literacy (MAIL) Assessment Framework – a 52-page document that will shape how AI literacy is measured for 15-year-olds across 90+ countries. The expert group includes Renee Hobbs, Jon Roozenbeek, and Samuel Greiff. This is the institutional gold standard.
MAIL defines its domain as "a set of competences required to engage effectively, ethically and responsibly with digital content, media platforms, and AI systems." Five competence areas structure the assessment: Reflect & Act Ethically, Access & Use, Analyse & Evaluate, Participate & Collaborate, and Create.
The "Access & Use" competence is where things get interesting. It defines the ability to "effectively use, locate, query, prompt, and curate digital content on media platforms and AI-mediated environments." Read that again: query and prompt. The OECD is describing interaction with the application layer – navigating system interfaces, formulating inputs, interpreting outputs – as a foundational competence for global assessment.
Even more telling is their framing of AI systems as environments. The framework states that "AI systems increasingly function as environments that mediate interaction, influence perception, and shape how people learn, communicate, and relate – cognitively, socially, and emotionally – with and through machines." AI as environment, not tool. Communication through machines. This nearly restates the ALC thesis.
But there's a critical gap. The MAIL framework acknowledges stratification – "traditional divides based on gender, age and education persist, with marginalised groups at greater risk" – but has no theoretical framework for why it occurs. They know the gap exists. They can't explain its mechanism. Four of five competences focus on evaluating media messages, not on navigating the systems that deliver them. The interface – the application layer – goes untheorized.
Framework 2: U.S. Department of Labor – AI Literacy as Workforce Skill
On February 13, 2026, the U.S. Department of Labor released Training and Employment Notice TEN-07-25, establishing an AI Literacy Framework for the national workforce development system. Five foundational content areas: understanding AI principles, exploring AI uses, directing AI effectively, evaluating AI outputs, and using AI responsibly.
"Directing AI effectively" is prompting. The federal government now officially recognizes the ability to communicate instructions to AI systems as a foundational workforce skill – on par with understanding what AI is and using it responsibly. This is ALC made policy.
But the most significant element may be the delivery principles, particularly "addressing prerequisites to AI literacy." The DOL explicitly acknowledges that digital literacy and broadband access are prerequisites – that you can't teach someone to direct AI effectively if they can't navigate the digital environments where AI lives. This is the Stratification Problem stated as federal policy, without the theoretical language to connect it to the broader pattern.
The DOL framework also emphasizes "embedding learning in context" – experiential, situated learning rather than abstract instruction. This aligns with everything we know about how ALC fluency actually develops: through practice, through failure, through repair. Not through PowerPoint slides about what AI is.
Framework 3: Schulz-Schaeffer – "Strategic Interaction" and "Role-Makers"
Writing in Big Data & Society (2025), sociologist Ingo Schulz-Schaeffer makes the most theoretically provocative argument of the three. His core claim: generative AI is categorically different from designed technology, and this difference transforms the user's relationship with the system.
Designed technology – a hammer, a spreadsheet, even a search engine – is built for particular tasks. You operate it. You take on the role the tool prescribes: driver, typist, operator. Schulz-Schaeffer calls this being a "role-taker." Generative AI breaks this model. Because it's a "learned technology" (trained on patterns, not designed for specific functions), it cannot be operated in the traditional sense. You must engage in what he calls "strategic interaction" – anticipating how the system will respond, adjusting your inputs based on its behavior, negotiating meaning across the exchange.
The user of generative AI becomes a "role-maker" – someone who must define the role the AI plays, define their own role in the interaction, and manage the relationship between the two. This isn't operation. It's communication. Schulz-Schaeffer reaches the same conclusion as ALC from an entirely different intellectual tradition.
But he doesn't develop the inequality dimension. If using generative AI requires "role-making" – the capacity to define interaction frames, anticipate system behavior, and negotiate meaning – then this capacity is unevenly distributed. Some people are natural role-makers. Others are stuck trying to be role-takers with a system that doesn't have prescribed roles, which produces the frustration, helplessness, and eventual disengagement we see in the Stratification Problem.
The Convergence – And What It Misses
Three frameworks. Three traditions. Three institutions. One phenomenon.
Each framework captures part of the elephant. PISA sees it as a literacy to be taught and assessed globally. The DOL sees it as a workforce skill to be trained in context. Schulz-Schaeffer sees it as a new form of social interaction that transforms the user's relationship with technology. All three are describing the same thing: the shift from operating tools to communicating with systems.
But none of them identify the domain where this shift occurs – the application layer. The OECD talks about "AI-mediated environments" without analyzing the interface architecture that constitutes those environments. The DOL says "directing AI effectively" without a theory of what makes some directions effective and others not. Schulz-Schaeffer says "strategic interaction" without connecting it to the specific communicative capacities that strategic interaction requires.
Application Layer Communication fills this gap. The application layer – APIs, interfaces, configuration surfaces, prompt windows, permission models – is the medium through which all human-AI communication occurs. ALC fluency is the capacity to navigate this medium: to read system behavior as signal, to formulate inputs that account for system constraints, to diagnose failures and repair them, to understand how design choices create unequal communicative positions.
The Prospective Dimension
A fourth source deepens the picture. Communication theorists Ytre-Arne and Das (2021) argued that in datafied environments, communicative agency becomes increasingly prospective – people must anticipate how algorithms will process their actions, how data will flow, how platforms will mediate their future interactions. Agency shifts from interpreting what's already there to navigating what might happen.
This is the temporal dimension of ALC fluency. High-fluency individuals don't just react to system outputs – they anticipate system behavior, predict data flows, think several interactions ahead. Low-fluency individuals are stuck in reactive mode, interpreting outputs without understanding the dynamics that produced them. Prospective agency is what separates the user who structures a prompt to avoid known failure modes from the user who simply types and hopes.
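The reactive/prospective contrast can be sketched in code. This is a minimal, hypothetical illustration – the function names and failure-mode strings are ours, not drawn from any of the frameworks above – showing the same request composed two ways: typed and hoped, versus structured to pre-empt failure modes observed in past exchanges.

```python
# Hypothetical sketch of reactive vs. prospective prompt composition.
# Illustrates the "think several interactions ahead" distinction only;
# it is not an implementation of any framework discussed here.

def reactive_prompt(question: str) -> str:
    """Type and hope: no anticipation of how the system may fail."""
    return question

def prospective_prompt(question: str, known_failure_modes: list[str]) -> str:
    """Encode anticipated failure modes into the input before sending it."""
    guards = "\n".join(f"- Avoid: {mode}" for mode in known_failure_modes)
    return (
        f"{question}\n\n"
        "Constraints (based on prior failed exchanges):\n"
        f"{guards}\n"
        "If any constraint cannot be met, say so instead of guessing."
    )

print(prospective_prompt(
    "Summarize the attached policy memo.",
    ["inventing citations", "exceeding 200 words"],
))
```

The prospective version carries the user's model of the system's behavior inside the message itself; the reactive version carries none, so every failure must be discovered and repaired after the fact.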
But Ytre-Arne and Das locate this capacity in "audiences" receiving media, not in users navigating application layers. Their framework stops at consumption. ALC extends it to the full range of human-system interaction: configuration, creation, debugging, orchestration. The prospective capacity they identify is real. Its domain is broader than they realize.
Why This Matters Now
The convergence isn't accidental. Generative AI has forced the issue. When the dominant mode of interacting with software shifts from clicking menus to composing natural language instructions, the communicative nature of the interaction becomes impossible to ignore. Three institutions – each working independently, from different disciplines, with different audiences – all arrived at the same conclusion within three months.
But convergence without coordination creates its own problems:
- Fragmented vocabulary: Is it "AI literacy," "media and AI literacy," "strategic interaction," or "directing AI effectively"? Each term frames the phenomenon differently and implies different interventions. Without a shared theoretical framework, researchers in each tradition will continue reinventing concepts the others have already developed.
- Incomplete stratification analysis: All three frameworks acknowledge that some people will navigate AI environments better than others. None have a mechanism for explaining why. The OECD notes persistent divides. The DOL notes prerequisites. Schulz-Schaeffer notes the role-maker/role-taker distinction. But without ALC's concept of the application layer as a stratifying medium, the explanations stay at the surface.
- Misaligned measurement: PISA 2029 will assess AI literacy for millions of students using a framework that measures content evaluation (4/5 competences) more than system interaction (1/5). If the assessment doesn't measure communicative fluency at the application layer, it will produce data that misrepresents the actual distribution of the capacity that matters.
- Policy without theory: The DOL has declared "directing AI effectively" a foundational workforce skill. But what does training for this skill look like? Without a theory of what makes someone effective at directing AI โ the communicative capacities, the mental models, the repair literacy โ the policy will default to prompt engineering workshops, which is teaching the wrong layer.
ALC as Unifying Theory
ALC doesn't compete with these frameworks. It provides the missing substrate beneath all of them. The OECD's "Access & Use" competence, the DOL's "directing AI effectively," and Schulz-Schaeffer's "strategic interaction" are all manifestations of the same underlying capacity: the ability to communicate within software's application layer.
What ALC adds:
- A domain: The application layer – not "AI environments" or "digital contexts" in the abstract, but the specific architectural layer where human-system communication occurs: interfaces, APIs, configuration surfaces, prompt windows.
- A mechanism: ALC Stratification – the process by which differences in communicative fluency at the application layer compound into unequal outcomes, through the Stratification Spiral of failed interactions.
- A bridge: Communication theory connects what education policy, sociology, and workforce development are each seeing separately. The reason these fields keep converging is that they're all observing a communicative phenomenon through disciplinary lenses that don't include communication.
- A measurement target: ALC fluency as a construct that can be operationalized, measured, and tracked – something none of these frameworks currently provide at the specificity needed for intervention design.
The Opportunity
The convergence problem is also the convergence opportunity. When three major institutional efforts independently arrive at the same phenomenon, the field is ready for the theory that names it. ALC doesn't need to convince anyone that communicating with AI systems matters – the OECD, the DOL, and Big Data & Society already agree. What ALC provides is the framework that explains why it matters, how it stratifies, and what to do about it.
The OECD will assess millions of students. The DOL will train millions of workers. Schulz-Schaeffer's "role-maker" concept will reshape how sociology theorizes human-AI interaction. The question is whether these efforts converge around a shared theoretical framework – or whether education, policy, and sociology continue describing the same elephant from different angles, building interventions that address symptoms without understanding the underlying communicative structure.
References:
OECD (2026). Navigating an evolving digital world: First draft of the PISA 2029 Media and Artificial Intelligence Literacy (MAIL) assessment framework. OECD Education Working Papers.
U.S. Department of Labor (2026). AI literacy framework for the workforce development system. Training and Employment Notice TEN-07-25, February 13.
Schulz-Schaeffer, I. (2025). Why generative AI is different from designed technology regarding task-relatedness, user interaction, and agency. Big Data & Society. DOI: 10.1177/20539517251367452.
Ytre-Arne, B., & Das, R. (2021). Audiences' communicative agency in a datafied age: Interpretative, relational and increasingly prospective. Communication Theory, 31(4), 779–797.
This analysis is part of the ongoing ALC Stratification research. For related work, see Everyone's Teaching AI Literacy – At the Wrong Layer, Repair Literacy: The AI Skill Nobody's Teaching, and Three Governments, One Blind Spot.