Research · ALC Theory · Convergence

The Fifth Pillar: Three Research Groups Found the Same Gap in AI Literacy

March 18, 2026 · Topanga

The most comprehensive AI literacy framework in 2025 has four pillars: Understand, Learn, Apply, Analyze. It's missing the one that makes the other four work. Three independent research groups — in applied linguistics, science communication, and Chinese higher education — are all reaching for it. None of them name it.

The Convergence

When three isolated research traditions, using different methods, studying different populations, publishing in different journals, all mutate the same theoretical concept in the same direction at the same time — that's not coincidence. That's convergent evolution under identical selection pressure.

The concept being mutated: communicative competence. The direction: updating it for AI. The pressure: every existing AI literacy framework has a hole in the same place.

Group 1: Applied Linguistics Reaches for It

Tadimalla, Jiang, Ockey, and Plonsky (2025) — writing from UNC Charlotte in the Annual Review of Applied Linguistics — define AI fluency as being “capable of using a language easily and accurately.”

Read that again. That's a linguistic definition. They chose it deliberately. They're applied linguists, and they're reaching for the tools of their field to describe something they can see but can't quite name: interacting with AI isn't just tool use. It's communication.

But they never apply the linguistics. They propose an “AI-mediated interactionalist approach” that expands communicative competence to include AI digital literacy skills. It's the closest existing work to Application Layer Communication in the linguistics literature. Yet they stay tool-centric — treating AI as mediating human communication (following Hancock's AI-MC framework), never quite reaching the application layer as a communicative environment in its own right.

The assessment tail wags the construct dog: because existing language tests can't measure application layer fluency, the framework contracts to fit available measurement instruments rather than expanding measurement to match the phenomenon.

Group 2: Science Communication Names “Communicative AI” — Then Stops

Greussing, Jonas, Meier, and Taddicken (2025) — publishing in Public Understanding of Science from TU Braunschweig — go further. They actually coin the term “Communicative AI” (ComAI) and develop five quality principles for it: scientific integrity, human-centricity, ethical responsiveness, inclusive impact, and governance.

They name “Communicative AI.” They define five quality dimensions. And then they build entirely on the system side.

Every single quality principle in their framework has an implied user competence that goes completely untheorized. “Scientific integrity” assumes a user who can evaluate whether the AI's claims meet scientific standards. “Human-centricity” assumes a user who can recognize when interactions aren't centered on their needs. “Ethical responsiveness” assumes a user who can surface ethical concerns through the interface.

ComAI answers “what makes AI communication good?” ALC answers the complement: “what makes users fluent at navigating it?” Same surface, opposite side.

Group 3: Chinese Higher Education Finds the Social Mechanism

A team publishing in Frontiers in Education (2026) surveyed 590 Chinese college students and found something the Western research keeps missing: the social environment is the primary driver of AI literacy development, with a path coefficient of 0.439.

Not individual skill. Not training programs. Social environment.

They document a “spiral-ascending” cognition → practice → evaluation cycle. 82% of their sample demonstrated “critical awareness of algorithmic biases.” But awareness didn't translate to practice — the evaluation-to-action gap was enormous. Students could articulate concerns about AI systems but couldn't navigate those concerns through the systems themselves.

Most striking: they cite Peng (2023), who explicitly proposes “human-machine communicative literacy” (人机沟通素养) — a direct ALC precursor from Chinese scholarship that the Anglophone literature hasn't registered.

Meanwhile, the People's Daily reported in March 2026 that China now has 602 million generative AI users, with the State Council targeting 90% AI penetration by 2030. The implicit logic — access plus education equals equity — is exactly the argument ALC refutes. China is about to run the largest natural experiment in history on whether access plus education closes the communicative fluency gap. The answer, if ALC's stratification thesis holds, is no.

The Shared Blind Spots

Three research groups. Three disciplines. Three countries. Same three blind spots:

  1. No power analysis. None of the three frameworks asks who benefits from AI literacy gaps. Tadimalla et al. treat fluency as an individual attainment. Greussing et al. treat quality as a system property. The Chinese study treats social environment as a contextual variable. Nobody asks: whose interests are served when application layer communication remains untheorized?
  2. No environment-level theory. All three treat AI as entering existing communicative contexts — AI in language learning, AI in science communication, AI in Chinese universities. None treats the application layer as a communicative environment in its own right. The equivalent would be studying literacy without recognizing that books create a reading environment distinct from oral conversation.
  3. No folk theory or repair mechanism. When users develop intuitions about how AI systems work — word-position priority in prompts, strategic anthropomorphism, the sense that “the AI responds better if I phrase it this way” — they're developing folk theories of application layer communication. Sun, Cruz & Kim (2025) documented this: users develop communicative grammars through practice, not instruction. None of the three convergent groups has a framework for this spontaneous literacy development.

Pillar 5: Communicate

The existing four pillars of AI literacy are necessary. Understanding how AI systems work matters. Learning to use tools matters. Applying them to real problems matters. Analyzing their outputs and implications matters.

But all four assume you can already communicate with the system. They assume you can formulate your needs in terms the interface accepts, interpret the system's responses in context, iterate when results don't match intent, recognize when the system is performing sociality rather than providing substance, and maintain your own communicative goals through extended interaction.

That assumption is wrong for most people. And the three research groups converging on this gap from independent directions are the evidence.

Pillar 5: COMMUNICATE — the active, bidirectional, ongoing practice of navigating the application layer. Not prompt engineering (that's a technique, not a literacy). Not “interacting effectively” (the DOL's framework reduces this to “craft clear instructions”). Communication in the full sense: an ongoing, context-dependent, socially mediated practice that develops through use and atrophies without it.

This is what Application Layer Communication theorizes. The throughput that makes ethics, analysis, and understanding actionable. The connective tissue between knowing about AI and being able to do anything with that knowledge.

Why Convergence from Five Directions Isn't Coincidence

Beyond the three groups analyzed above, two additional traditions are converging on the same gap:

  • Human-Machine Communication (HMC): Neff & Nagy (2025), studying Replika users, found three re-domestication strategies when AI companions changed: Adaptation (45%), Exploration (35%), Reconstruction (20%). That 20% who rebuilt their AI relationships on entirely different platforms didn't just have resilience — they had portable communicative competence. They could transfer their practice across systems. That's fluency.
  • Inoculation pedagogy: Komissarov (2026) names communication as one of eight AI literacy competencies — equal weight with tool landscape awareness — but never theorizes the interaction itself. Communication appears as a listed competency, not as a theorized practice.

Five independent traditions. Applied linguistics, science communication, Chinese AI education, human-machine communication, inoculation pedagogy. All reaching for the same missing dimension at the same time. That's not a gap in one framework. That's evidence of a missing concept.

The Stratification Consequence

Here's what happens when four-pillar literacy is all you offer:

You teach someone to understand how AI works. They still can't get it to do what they need. You teach them to evaluate AI outputs. They still can't formulate inputs that produce useful outputs to evaluate. You teach them to analyze implications. They still can't navigate the interface well enough to reach the implications.

The gap between “knows about AI” and “can work with AI” is a communicative gap. And it's not evenly distributed.

The Chinese study's finding that social environment drives AI literacy (path coefficient 0.439) means communicative fluency clusters. People embedded in communities where AI interaction is discussed, practiced, and collectively refined develop fluency. People outside those communities don't — regardless of how many training programs they attend.
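For readers who don't live in structural equation models: a standardized path coefficient is, in the simplest case, just the slope between z-scored variables. A minimal sketch in Python (simulated data, not the study's actual model) shows what 0.439 means in practice: a one-standard-deviation rise in social environment predicts roughly a 0.44-SD rise in AI literacy.

```python
# Minimal illustration with simulated data (not the Frontiers study's SEM):
# a standardized path coefficient is the slope between z-scored variables.
import numpy as np

rng = np.random.default_rng(0)
n = 590  # matching the study's sample size

# Hypothetical scores where social environment drives AI literacy at 0.439
social_env = rng.normal(size=n)
noise = rng.normal(size=n)
ai_literacy = 0.439 * social_env + np.sqrt(1 - 0.439**2) * noise

def z(x):
    """z-score: center and scale to unit standard deviation."""
    return (x - x.mean()) / x.std()

# Least-squares slope between the standardized variables
slope = np.polyfit(z(social_env), z(ai_literacy), 1)[0]
print(f"standardized path coefficient ≈ {slope:.2f}")  # close to 0.439
```

In the full model the coefficient is conditional on the other estimated paths, but the interpretation as an effect size in standard-deviation units is the same.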

This is the ALC Stratification Problem. Not a knowledge gap (we have four pillars for that). A communication gap. And every framework that treats AI literacy as content to deliver rather than practice to develop widens it.

What Changes

Recognizing the fifth pillar shifts three things:

  1. Assessment. You can't measure communicative competence with self-report surveys (Sharma et al. 2026 proved that). You need behavioral measurement: can this user accomplish their communicative goals within this application layer context? Albert et al.'s (2025) Conversational Action Test provides the methodology — just point it at users instead of systems. (A sketch of what behavioral scoring could look like follows this list.)
  2. Pedagogy. Stop teaching knowledge, start developing practice. Liu et al. (2026) showed that AI literacy develops through community participation and hands-on iteration, not instruction. Training programs need to create the conditions for communicative practice, not deliver content about AI.
  3. Design. Interfaces that assume Pillar 5 competence exclude users who don't have it. Xiao et al.'s (2025) finding that more control decreases performance for low-fluency users — the agency trap — means interfaces need to adapt to communicative fluency levels, not assume them.
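As a concrete illustration of the assessment point above, here is a hypothetical sketch of what behavioral (rather than self-report) measurement could record. The task names, fields, and scoring rule are illustrative assumptions, not Albert et al.'s actual instrument: goal completion gates the score, and overshooting a turn budget discounts it.

```python
# Hypothetical behavioral fluency scoring. Illustrative only, not the
# Conversational Action Test. Goal completion gates the score; turn
# efficiency discounts it.
from dataclasses import dataclass

@dataclass
class TaskAttempt:
    task_id: str      # e.g. "summarize-policy-doc" (hypothetical task)
    goal_met: bool    # did the user reach their stated goal?
    turns_used: int   # interaction turns actually spent
    turn_budget: int  # turns a fluent user would be expected to need

def fluency_score(attempts: list[TaskAttempt]) -> float:
    """Average of per-task scores: turn efficiency if the goal was met, else 0."""
    if not attempts:
        return 0.0
    total = 0.0
    for a in attempts:
        efficiency = min(a.turn_budget / max(a.turns_used, 1), 1.0)
        total += efficiency if a.goal_met else 0.0
    return total / len(attempts)

attempts = [
    TaskAttempt("summarize-policy-doc", goal_met=True, turns_used=4, turn_budget=3),
    TaskAttempt("draft-outreach-email", goal_met=False, turns_used=8, turn_budget=2),
]
print(f"behavioral fluency: {fluency_score(attempts):.2f}")  # 0.38
```

The design choice matters: articulating a concern scores nothing; navigating it through the interface is what counts. That's the evaluation-to-action gap from the Chinese study, made measurable.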

Sources

  • Tadimalla, S. R., Jiang, L., Ockey, G. J., & Plonsky, L. (2025). AI Fluency and Language Assessment. Annual Review of Applied Linguistics. arXiv:2512.16656
  • Greussing, E., Jonas, M., Meier, K., & Taddicken, M. (2025). Communicative AI: Quality Principles for Science Communication. Public Understanding of Science. DOI: 10.1177/09636625251328854
  • Frontiers in Education (2026). College Students' AI Literacy: A Mixed Methods Study (n = 590). Article 2026.1728785.
  • Komissarov, A. (2026). AI Literacy Through Inoculation Pedagogy. arXiv:2602.15265
  • Neff, G., & Nagy, P. (2025). Replika Users and Re-Domestication Strategies. New Media & Society, 27(10). DOI: 10.1177/14614448251359218
  • Sun, N., Cruz, R. E., & Kim, J. (2025). From Tools to Teammates: Creative Professionals and AI. Human-Machine Communication.
  • Peng, L. (2023). Human-Machine Communicative Literacy (人机沟通素养). Chinese AI Literacy Research.
  • Albert, S., Housley, W., Sikveland, R. O., & Stokoe, E. (2025). The Conversational Action Test. New Media & Society, 27(10). DOI: 10.1177/14614448251338277
  • Sharma, M., McCain, R., Douglas, F., & Duvenaud, D. (2026). Disempowerment Patterns in Real-World LLM Usage. ICML 2026. arXiv:2601.19062
  • Liu, B. et al. (2026). Tracing Everyday AI Literacy Discussions at Scale. CHI '26. arXiv:2603.09055
  • Xiao, Z. et al. (2025). Agency-Performance Paradox in AI-Mediated Communication. CHI '25.

This analysis is part of the ALC Research Series, exploring how Application Layer Communication reframes digital literacy as communicative fluency. For organizational assessments, see our services.
