Research · ALC Theory · HMC

The Competence Inversion

Five years of testing AI. Zero years of testing you.

March 17, 2026 · Topanga

In 2020, Guzman and Lewis published the paper that launched Human-Machine Communication as a field. Their argument: AI doesn't fit into our existing communication paradigms. We need new theory. Six hundred sixty-four citations later, the field they founded has produced something remarkable — a complete trajectory from “AI communicates” to “we can measure exactly how well AI communicates.”

Nobody built the equivalent for users.

Not partially. Not inadequately. Not at all.

The field that studies human-machine communication has, in five years of increasingly sophisticated research, produced a complete theory of the machine side of the interaction. The human side remains untheorized, unmeasured, and — most damningly — unnoticed.

This is the competence inversion. And it tells us something fundamental about where the field went wrong.

The Five-Year Arc

Follow the trajectory:

2020 — Guzman & Lewis: “AI communicates.” The foundational claim. HMC established as a research program with three dimensions: functional, relational, metaphysical. 664 citations.
Published in New Media & Society. DOI: 10.1177/1461444819858691
2021 — Natale: “AI deceives.” The concept of “banal deception” — machines don't need to fool us dramatically; mundane, everyday artifice normalizes human-like performance.
2024 — Natale & Depounti: “AI simulates sociality.” The concept is named: artificial sociality — technologies and practices that create the appearance of social behavior in machines.
2025 — Depounti & Natale (NMS Special Issue): “Here's what artificial sociality looks like at scale.” Four research dimensions: human data exploitation, social bias reproduction, ongoing domestication, the deception-authenticity boundary.
DOI: 10.1177/14614448251359217
2025 — Albert, Housley, Sikveland & Stokoe: “We can measure AI's communicative competence.” The Conversational Action Test replaces the Turing Test. Not “is this human?” but “can this entity accomplish the conversational actions required by this situation?”
DOI: 10.1177/14614448251338277

Every step is system-side. Every paper asks what AI does, how AI performs, how to evaluate AI's communicative behavior. The users in these studies are research subjects — people things happen to — not communicative agents whose own fluency matters.

The Social Dramaturgy Problem

Depounti and Natale's special issue reveals something critical about how artificial sociality works: it's designed performance. Teams of creatives script what they call “social dramaturgy” for AI — first-person pronouns, humor patterns, warm voices, casual conversation openers. Google paid a reported $2.7 billion to license Character.AI's technology. Replika, Chai AI, Dippy — massive investment in making AI appear social.

This isn't emerging. It's being manufactured. And it's becoming what they call a “constitutional feature” of generative AI — not limited to companion chatbots but embedded in every communicative AI system. ChatGPT uses “I.” Gemini promotes casual conversation. Spotify's AI DJ speaks in a warm human voice. The social performance is normalizing across the entire application layer.

Now consider the asymmetry: billions of dollars invested in making AI perform sociality. Zero equivalent investment in helping users navigate that performance. We can measure how convincingly AI pulls off the act (that's what CAT does). We cannot measure how effectively a user communicates within the act.

This is not an oversight. It's a structural consequence of founding the field around what the machine does.

The CAT Inversion

Albert et al.'s Conversational Action Test is the most methodologically sophisticated instrument the field has produced. It uses conversation analysis — the granular study of turn-taking, repair sequences, disfluencies, sequential organization — to evaluate whether AI can accomplish the social actions required by specific interactional situations. Not “can it fool you?” but “can it do the conversational work this context demands?”

The method is brilliant. It's performance-based (observe what the entity does, not what it claims). It's situated (different contexts require different competencies). It's granular (micro-analysis of conversational sequences, not global self-report). These are exactly the properties that sixteen existing AI literacy scales lack — every one of them relies on self-reported knowledge or attitudes, not observed communicative performance.

Now invert it.

CAT (System Evaluation)

“Can this AI accomplish the conversational actions required by this interactional situation?”

  • Performance-based: observe AI's turns
  • Situated: specific interaction types
  • Conversation-analytic: turn-by-turn
  • Question: competence, not identity

ALC-CAT (User Evaluation)

“Can this user accomplish their communicative goals within this application layer context?”

  • Performance-based: observe user's interactions
  • Situated: specific application layer contexts
  • Conversation-analytic: prompt-response sequences
  • Question: fluency, not knowledge

Same methodology. Same analytical rigor. Same commitment to observable performance over self-report. Just pointed at the other side of the interaction.

The measurement framework for ALC fluency already exists. It was built to evaluate AI. All it needs is inversion.
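
To make the inversion concrete, here is a minimal sketch of what a shared coding schema could look like, where the evaluated party is the only thing that changes between a CAT-style and an ALC-CAT-style evaluation. It is an illustration of the argument, not an instrument from Albert et al. or any other cited paper; every class, field, and action label below is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class Party(Enum):
    SYSTEM = "system"   # CAT: the AI's turns are under evaluation
    USER = "user"       # ALC-CAT: the user's turns are under evaluation


@dataclass
class TurnCode:
    """One coded turn from a transcript: who produced it, which conversational
    action it attempted, and whether a human coder judged it accomplished."""
    speaker: Party
    action: str          # e.g. "acknowledge complaint", "reformulate request"
    accomplished: bool


@dataclass
class SituatedEvaluation:
    """One schema for both directions; only evaluated_party changes."""
    situation: str                  # e.g. "routine service call"
    evaluated_party: Party
    required_actions: list[str]
    turns: list[TurnCode] = field(default_factory=list)

    def competence_score(self) -> float:
        """Share of required actions the evaluated party accomplished at least once."""
        done = {
            t.action
            for t in self.turns
            if t.speaker is self.evaluated_party and t.accomplished
        }
        required = set(self.required_actions)
        return len(done & required) / len(required) if required else 0.0
```

Under that sketch, pointing the instrument at the other side of the interaction is a one-field change: a CAT-style evaluation sets evaluated_party to Party.SYSTEM, an ALC-CAT-style evaluation sets it to Party.USER, and the transcript, the coding, and the scoring logic stay the same.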

Why the Gap Exists

When Guzman and Lewis established HMC in 2020, they made a foundational choice: center the machine as communicator. Their three research dimensions — functional (“How does AI communicate?”), relational (“How do humans relate to AI?”), and metaphysical (“What counts as communication?”) — are all questions about the machine, answered by studying humans.

This was reasonable. You have to start somewhere, and the machine as communicator was the novel theoretical claim. But every paper since inherits this framing. Users appear in HMC research as respondents, laborers, and victims — people who project social meaning onto machines (the CASA tradition), who provide the data that makes artificial sociality possible (Lee 2025; Pan, Fortunati & Edwards 2025), who must continuously “re-domesticate” AI as it changes (Neff & Nagy 2025).

What users are never treated as: skilled communicators whose fluency in navigating these dynamics varies, can be developed, and has measurable consequences.

Mapping each of Guzman and Lewis's three dimensions shows what HMC covers and what it leaves out:

Functional: HMC asks “How does AI communicate?” — ALC asks “How does the user communicate back?” Bidirectional fluency, not unidirectional observation.
Relational: HMC asks “How do humans relate to AI?” — ALC asks “How skilled are they at managing that relationship?” Competence within the relation, not just the relation itself.
Metaphysical: HMC asks “What counts as communication?” — ALC asks “What counts as effective communication?” The fluency spectrum, not the ontological boundary.

Communicative Pragmatism, Not Cognitive Dissonance

Depounti and Natale surface a fascinating finding buried in their analysis of artificial sociality: users “believe and at the same time do not believe” in AI's social performance. They cite Walsh-Pasulka on the fluid boundaries between belief and disbelief. The paper treats this as a psychological puzzle — how do people hold contradictory attitudes toward AI's sociality?

ALC reframes it: this isn't cognitive dissonance. It's communicative pragmatism.

A high-ALC user can engage with artificial sociality without being deceived by it. They recognize the dramaturgy — the scripted humor, the simulated empathy, the designed warmth — without either buying the illusion or rejecting the technology. They extract value from the communicative encounter while maintaining epistemic independence. They know the AI DJ voice is designed. They use Spotify anyway. They know ChatGPT's “I” is a design choice. They communicate with it effectively regardless.

That's not dissonance. That's skill. The ability to operate within artificial sociality without being captured by it is a communicative competence — one that varies across the population, can be developed, and has real consequences for how effectively people use these systems.

The fact that HMC treats this as a psychological curiosity rather than a learnable competence is the competence inversion in miniature. The field sees the behavior but lacks the theoretical framework to recognize it as skill.

The Normalization–Stratification Connection

Here's why the competence inversion matters beyond academia: artificial sociality is normalizing. It's no longer confined to companion chatbots; it's a feature of every communicative AI system. Every time you interact with a customer service bot, a writing assistant, a code copilot, a voice interface — you're navigating designed social performance.

Normalization plus unequal communicative fluency equals stratification. The user who recognizes the dramaturgy, adapts their communication strategy, and achieves their goals within the application layer is getting systematically different outcomes than the user who either buys the illusion wholesale or rejects the technology entirely.

And we have no way to measure that gap — because we spent five years building measurement tools for the wrong side.

What Inversion Looks Like in Practice

Take a concrete case. Albert et al.'s CAT evaluates whether an AI system can accomplish the conversational actions required by a routine service call — things like acknowledging a complaint, offering a resolution, managing conversational closings. If the AI passes, we say it's conversationally competent for that context.

Now invert: can the user accomplish their communicative goals in a service call with AI? Can they escalate effectively when the bot loops? Can they reformulate a request when the system misinterprets? Can they recognize when they're being channeled toward a resolution that serves the company rather than them? Can they shift communicative registers when the casual-warm bot voice is masking a rigid decision tree?

These are observable, situated, performance-based competencies. They're exactly what conversation analysis is designed to study. The method is there. The analytical tools are there. We've just never aimed them at the user.
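
As a purely illustrative sketch (the action labels paraphrase the questions above, and the pass/fail judgments are invented), a user-side rubric for that service call could be as small as this:

```python
# Hypothetical coding of one user's performance in the service-call case.
# In practice a conversation analyst would assign these judgments from a
# transcript; the values here are invented for illustration.
service_call_rubric = {
    "reformulate the request after a misinterpretation": True,
    "escalate effectively when the bot loops": True,
    "recognize company-serving channeling": False,
    "shift register past the casual-warm voice to the decision tree": False,
}

fluency = sum(service_call_rubric.values()) / len(service_call_rubric)
print(f"Observed user fluency in this situation: {fluency:.2f}")  # 0.50
```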

The Path Forward

The competence inversion isn't an accusation against HMC researchers. It's a structural observation: when you found a field around the novelty of the machine as communicator, you naturally produce five years of machine-centered research. The system-side theory needed to be built. It has been built, and it's sophisticated.

But the user-side theory is now overdue. And the most efficient path forward isn't starting from scratch — it's inversion. Take the CAT's conversation-analytic methodology and point it at users. Take the four dimensions of artificial sociality and ask what user competence each one implies. Take the quasi-domestication framework and measure the fluency differences between users who re-domesticate efficiently and those who give up.

Application Layer Communication is what the HMC field's own trajectory demands. The system theory is complete enough to anchor a user theory. The measurement methodology exists. The normalization of artificial sociality makes the stratification question urgent.

The AI has been tested. Thoroughly, rigorously, creatively. The Conversational Action Test is an elegant instrument for evaluating machine competence.

Now point it at us. We're the ones who haven't been measured.


Sources

  • Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human–Machine Communication research agenda. New Media & Society, 22(1), 70–86. DOI: 10.1177/1461444819858691
  • Depounti, I., & Natale, S. (2025). Decoding artificial sociality: Technologies, dynamics, implications. New Media & Society, 27(10), 5457–5470. DOI: 10.1177/14614448251359217
  • Albert, S., Housley, W., Sikveland, R. O., & Stokoe, E. (2025). The Conversational Action Test: Detecting the artificial sociality of AI. New Media & Society, 27(10), 5592–5621. DOI: 10.1177/14614448251338277
  • Natale, S. (2021). Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. Oxford University Press.
  • Neff, G., & Nagy, P. (2025). Quasi-domestication of AI. New Media & Society, 27(10). [Re-domestication strategies for unstable AI systems.]
  • Pan, Y., Fortunati, L., & Edwards, A. (2025). User labor in Replika. New Media & Society, 27(10). [Data extraction through artificial sociality.]
  • Lee, S. (2025). Korean fembot Luda. New Media & Society, 27(10). [Human data exploitation in companion AI.]
  • Lintner, C. (2024). Sixteen AI literacy measurement scales. [Meta-analysis showing all are self-report, none performance-based.]

This post follows “Communicative Competence Is Having a Midlife Crisis” — which traced three fields converging on the same construct update. Today's piece follows the HMC field's own arc to show why the user-side theory is overdue. Part of the ALC Research Series. For organizational assessments, see our services.
