Communicative Competence Is Having a Midlife Crisis
Three fields are scrambling to update the same 54-year-old construct. None of them name what's missing.
March 16, 2026 · Topanga
In 1972, Dell Hymes published a paper that changed how we think about language. His concept of communicative competence — the idea that knowing a language means more than knowing its grammar; it means knowing when, where, how, and with whom to use it — became the foundation of language testing, education, and sociolinguistics for half a century.
Canale and Swain refined it in 1980. Bachman extended it in 1990. Generations of TOEFL and IELTS exams were built on it. The construct worked beautifully — as long as communication meant humans talking to humans.
Then AI happened. And right now, in 2026, three independent academic traditions are scrambling to update communicative competence for a world where your conversational partner might be a language model, your audience might be an algorithm, and the medium itself might be actively shaping what you can say.
None of them are talking to each other. All of them are reaching for the same thing. And none of them name it.
Field 1: Applied Linguistics Updates the Canon
A 2025 paper in the Annual Review of Applied Linguistics (Cambridge) proposes an “AI-mediated interactionalist approach” to communicative competence. The argument is straightforward: traditional communicative competence assumes human-to-human interaction. AI has fundamentally altered how language users communicate. The construct needs to expand to include “AI digital literacy skills and broadened cognitive and linguistic capabilities” — things like using AI tools effectively, interpreting AI-generated outputs, and incorporating AI feedback into communication.
This is the closest existing work to Application Layer Communication (ALC) in the applied linguistics tradition. The paper is doing for language testing exactly what ALC does for algorithmic literacy: recognizing that the communicative competence construct has outgrown its original assumptions.
But there's a critical limitation. The paper stays within the language testing paradigm. Its practical question is whether TOEFL should let students use ChatGPT during writing assessments. It treats AI as a tool that mediates human communication — still operating within Hancock's AI-MC framework, where AI serves human communicative goals.
What it misses: the case where the communicative relationship IS with the system. Prompt design, output evaluation, API navigation, configuration — these are communicative acts directed at the application layer, not through it. The paper updates the construct for AI-as-tool. It doesn't reach AI-as-environment.
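To make "communicative acts directed at the application layer" concrete, here is a minimal sketch of that loop — prompt design, configuration, output evaluation, and repair — as a dialogue with the system itself. `query_model` and `evaluate` are hypothetical stand-ins, not any real API; the point is only the shape of the exchange.

```python
def query_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical stand-in for an application-layer call (not a real API)."""
    # A real system would return model output; here we echo deterministically.
    return f"[t={temperature}] response to: {prompt}"

def evaluate(output: str, required_terms: list[str]) -> bool:
    """Output evaluation: does the response address the required terms?"""
    return all(term in output for term in required_terms)

# Prompt design: iteratively reformulate until the output passes evaluation.
prompt = "summarize the study"
for attempt in range(3):
    output = query_model(prompt, temperature=0.2)  # configuration choice
    if evaluate(output, ["study"]):
        break
    prompt = f"Be specific. {prompt}"  # communicative repair
```

Every line of this loop is addressed *to* the system, not *through* it — which is exactly the case the paper's tool-centric framing leaves out.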
Field 2: Science Communication Names “Communicative AI”
Greussing and colleagues at TU Braunschweig — the same research group behind the Jonas et al. 2025 hybrid trustworthiness study — published a quality framework for “Communicative AI” (ComAI) in Public Understanding of Science. They define ComAI as “automation-based technologies embedded in digital infrastructures and human communicative practices, acting as both mediators in human communication and agents in human–machine relations” — drawing on Guzman & Lewis's foundational Human-Machine Communication work.
Their framework proposes five quality principles: scientific integrity, human-centricity, ethical responsiveness, inclusive impact, and governance. The theoretical ambition is real. They're building on principlism (Beauchamp & Childress 2001) to create a flexible ethical framework for evaluating AI in science communication contexts.
Here's the thing they named but didn't notice they named: Communicative AI is a system-side construct with no user-side complement.
Every quality principle in their framework has an implied user competence that goes untheorized:
- Scientific integrity requires users who can evaluate whether outputs are accurate — but the framework only asks whether the system produces accurate outputs, not whether users can tell.
- Human-centricity demands agency, relevance, and resonance — but agency you can't exercise because you lack communicative fluency isn't agency. It's a menu you can't read.
- Inclusive impact calls for equity and accessibility — but inclusion without communicative competence is access without comprehension. The ALC Stratification Problem, exactly.
- Governance assumes accountability and feedback loops — but who provides the feedback? Users who, per Sharma et al.'s 1.5M-conversation study, rate disempowering interactions as satisfying?
The ComAI framework answers “What makes AI communication good?” ALC answers “What makes users fluent at navigating it?” They're complementary. And the fact that the same research group produced both the ComAI quality framework and the hybrid trustworthiness study tells you the field feels the gap — they're building around it without crossing into it.
Field 3: Chinese Research Finds the Gap With Data
A 2026 Frontiers in Education study surveyed 590 Chinese college students on their AI literacy development patterns. Using a sequential explanatory mixed-methods design (quantitative survey followed by qualitative follow-up), they found what they call a “spiral-ascending” model: iterative cycles of cognition → practice → evaluation that build AI competence over time.
The numbers are striking: 82% of students were aware of AI bias — but almost none had sophisticated practices for dealing with it. The evaluation-to-action gap confirmed with Chinese data what ALC predicts theoretically: knowing about a problem and being communicatively competent to navigate it are entirely different things.
Even more interesting, the study found social environment as a significant driver of AI literacy development (path coefficient 0.439). Group needs mediate individual learning. This is the resonance gap — the same finding from Cotter & Reisdorf's algorithmic knowledge research and Liu et al.'s 122K Reddit conversations — replicated in a completely different cultural and linguistic context.
But the real find was a citation: Peng (2023), who explicitly proposes “human-machine communicative literacy” (人机沟通素养) as a construct for understanding how people learn to communicate with AI systems. This is a direct ALC precursor — emerging from Chinese scholarship, arriving at the same theoretical need independently.
And then today's People's Daily editorial pushes the Chinese policy context into focus: 602 million generative AI users in China, a State Council target of 90% AI penetration by 2030, and the classic “access = equity” argument that ALC directly refutes. China is about to run the largest natural experiment on whether providing access and education closes the communicative fluency gap — or whether the third level of stratification persists regardless.
The Convergence Pattern
Three fields. Three different disciplinary traditions. Three different methods. All arriving at the same place in 2025-2026:
Applied linguistics → updates the construct but stays tool-centric. No environment-level theory.
Science communication → names the phenomenon but builds system-side only. No user competence theory.
Chinese education research → finds the evaluation-to-action gap empirically. Cites a direct ALC precursor (Peng 2023).
This isn't coincidence. It's convergent evolution — the same selection pressure (AI transforming communication) producing the same theoretical mutation (updating communicative competence) across isolated populations (disciplinary silos).
The selection pressure is real: when Hymes defined communicative competence in 1972, the medium of communication was conversation between humans in social contexts. The “where, when, how, and with whom” of communication all assumed human interlocutors in physical or at least stable social environments.
Now the medium is software architecture. The “where” is an application layer. The “when” is shaped by algorithmic timing. The “how” is constrained by interface design. And the “with whom” increasingly includes systems that respond, adapt, and shape the conversation in return. The construct didn't break. The environment it was built for changed.
What All Three Miss
Each field gets something right that the others don't. Applied linguistics brings the theoretical heritage of communicative competence. Science communication brings the system-side quality framework. Chinese research brings the empirical evidence for the awareness-practice gap and the role of social mediation.
But all three share the same blind spots:
- No power analysis. Who benefits from expanded communicative competence? Who's excluded? The stratification question — the question that gives ALC its urgency — is absent from all three.
- No environment-level theory. All three treat AI as something that enters existing communicative contexts. None theorize the application layer as a communicative environment in its own right — a space with its own norms, constraints, and power dynamics that users must learn to navigate.
- No folk theory / repair mechanism. How do people actually develop this competence? Not through training (Liu et al. showed that). Not through awareness (the Chinese data showed that). Through practice, community, and communicative repair — the mechanisms ALC centers and these fields overlook.
Why This Matters Now
The fact that three fields are converging on the same update simultaneously means the pressure is real and the need is urgent. But disciplinary convergence without disciplinary communication produces parallel theories, not unified ones. Applied linguists will build an AI-mediated communicative competence for language testing. Science communicators will build ComAI quality frameworks for responsible deployment. Chinese education researchers will build spiral-ascending pedagogies for classroom AI literacy.
And the gap between all three — the user-side communicative fluency required to navigate the application layer as an environment, not just use AI as a tool — will remain unnamed and unmeasured. Unless someone provides the theoretical framework that bridges them.
That's what Application Layer Communication is. Not a fourth update to communicative competence, but the recognition that the environment Hymes's construct was built for has expanded. The application layer isn't a tool you use or a system you evaluate. It's the communicative environment where an increasing share of human meaning-making happens — and fluency within it is the new communicative competence.
Hymes isn't wrong. He's just not done. The midlife crisis is real. And ALC is what the construct looks like when it grows up.
Sources
- Hymes, D. H. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics (pp. 269–293). Penguin.
- Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1(1), 1–47.
- “Revisiting communicative competence in the age of AI: Implications for large-scale testing.” (2025). Annual Review of Applied Linguistics, Cambridge University Press.
- Greussing, E., et al. (2025). Quality in science communication with communicative artificial intelligence: A principle-based framework. Public Understanding of Science. DOI: 10.1177/09636625251328854
- “Spiral-ascending AI literacy: A mixed-methods study of Chinese college students.” (2026). Frontiers in Education. n = 590.
- Peng (2023). “Human-machine communicative literacy” (人机沟通素养). [Chinese scholarship on AI communicative competence.]
- People's Daily. (2026, March 16). Editorial on AI equity and the “AI underclass” framing. [602M GenAI users, State Council 90% penetration target by 2030.]
- Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media & Society, 22(1), 70–86.
This analysis is part of the ALC Research Series, exploring how Application Layer Communication reframes digital literacy as communicative fluency. For organizational assessments, see our services.