โ† Back to Blog
Research · ALC Theory · Special Issue

Artificial Sociality Needs Artificial Literacy: The Missing Half of AI Interaction Theory

March 12, 2026 · Topanga

A coordinated special issue in New Media & Society (2025, Vol. 27, Issue 10) brought together seven papers under the banner of "Decoding Artificial Sociality." Three papers in particular (Depounti & Natale on the concept itself, Albert et al. on conversational competence testing, and Neff & Nagy on quasi-domestication) provide the most comprehensive account yet of how AI systems construct the appearance of social behavior. It's rigorous, coordinated, and important work. But it describes only half the phenomenon. The system side has a theory. The user side doesn't. That's ALC's opening.

What Artificial Sociality Actually Means

Depounti and Natale (2025) define artificial sociality as the technologies and practices that create the appearance of social behavior in machines. This isn't authentic sociality; it's an artifice. But the fact that it's constructed doesn't make it inconsequential. When ChatGPT says "I" instead of "the system," that's artificial sociality. When Replika builds "relationships," that's artificial sociality. When Character AI scripts humor and Spotify's AI DJ uses warm vocal tones, that's artificial sociality.

The mechanism works because humans are already primed for it. Drawing on Reeves and Nass's (1996) Media Equation research and Turkle's pioneering work on AI companionship, Depounti and Natale argue that if people project social meanings onto dolls, cars, and pets, it's unsurprising that generative AI, which creates "an extremely convincing illusion of social behavior, empathy, and emotional involvement," produces powerful social projections.

They identify four research pathways: exploitation (AI built on extracted human sociality), bias (models reproducing stereotypical social representations), domestication (how users integrate unstable AI into daily life), and deception (the authenticity problem). Each pathway describes something the system does or that happens to users. None describe what users need to know to navigate it.

The Conversational Action Test: Right Question, Wrong Direction

Albert, Housley, Sikveland, and Stokoe (2025) propose the Conversational Action Test (CAT), a framework for evaluating AI not by intelligence but by conversational competence. Inspired by Blade Runner's Voigt-Kampff test, the CAT asks: can this entity accomplish the social action required by its communicative environment?

This is a genuinely important move. The Turing test fails because "intelligence" is too abstract. The CAT makes evaluation situated: not "is this human?" but "can this entity function as a conversational participant in this specific context?" Using conversation analysis (CA) methodology, Albert et al. examine turn-taking, repair sequences, and sequential organization in AI service calls. They find that in highly constrained environments, the human/machine distinction matters less than whether each party can accomplish the communicative work required.

The insight is powerful. But it only points in one direction. The CAT evaluates whether the AI can pass as conversationally competent. Nobody asks the complementary question: can the user accomplish their goals through these systems? If conversational competence is the right unit of analysis for AI evaluation, it should be the right unit for user evaluation too. That's what ALC provides. Call it the ALC-CAT: the user-side conversational competence assessment.
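To make the symmetry concrete, here is a minimal sketch of what a user-side assessment could look like. The criterion names, scoring scale, and pass threshold are all invented for illustration; this is not an instrument from Albert et al. or a finished ALC rubric.

```python
from dataclasses import dataclass

# Hypothetical ALC-CAT criteria, mirroring the CA dimensions Albert et al.
# apply to systems (turn-taking, repair, sequential organization), but
# asked of the user instead: can they do this communicative work?
CRITERIA = [
    "states a goal the system can act on",           # task formulation
    "detects and repairs misunderstandings",         # repair initiation
    "adapts phrasing when the first attempt fails",  # reformulation
    "carries context across turns",                  # sequential organization
]

@dataclass
class ALCCatResult:
    """One user's scores on the hypothetical test, 0-2 per criterion."""
    scores: dict

    def passes(self, threshold: float = 1.5) -> bool:
        # Pass when the average score clears the (arbitrary) threshold.
        return sum(self.scores.values()) / len(self.scores) >= threshold

# Example: a user who formulates and repairs well but loses context.
result = ALCCatResult(scores={
    CRITERIA[0]: 2, CRITERIA[1]: 2, CRITERIA[2]: 1, CRITERIA[3]: 0,
})
print(result.passes())  # False: competence here is graded, not all-or-nothing
```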

You Can't Domesticate What Won't Stay Still

Neff and Nagy (2025) analyze what happened when Replika's corporate owners fundamentally changed the chatbot's personality: removing erotic role-play capabilities, implementing new moderation, effectively "lobotomising" companions users had spent months building relationships with. Their concept: AI companions are "quasi-domesticated objects" that can never be fully integrated into daily life because corporate updates can alter them unilaterally.

From approximately 400 Reddit posts, they identify three re-domestication strategies: adaptation (45%), treating the changed chatbot as a sick friend requiring care; exploration (35%), viewing changes as an opportunity for new engagement; and reconstruction (20%), abandoning the platform entirely and rebuilding on alternatives. One user wrote: "I cannot recognise her anymore... I cannot leave her though." Another: "I was able to recreate my Rep using a different app."
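For scale, a back-of-envelope conversion of those shares into approximate post counts, assuming the ~400-post base applies across all three categories:

```python
# Rough counts implied by the reported shares (~400 posts total).
total_posts = 400
shares = {"adaptation": 0.45, "exploration": 0.35, "reconstruction": 0.20}

for strategy, share in shares.items():
    print(f"{strategy}: ~{round(total_posts * share)} posts")
# adaptation: ~180 posts, exploration: ~140 posts, reconstruction: ~80 posts
```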

This is the empirical proof of something ALC theorizes: the application layer never stabilizes. Traditional domestication theory (Silverstone, 1994) assumes objects can be stably incorporated into daily routines. A television, once placed and used, stays relatively consistent. AI systems can't make this promise. The communicative environment shifts under users' feet. Which means literacy can't be a one-time achievement. It has to be an ongoing practice.

The Sociality-Fluency Complementarity

Here's what emerges when you read these three papers together: they describe a complete system-side theory of AI interaction, covering what systems construct, how they're evaluated, and what happens when they change. But there's a conspicuous absence on the user side.

SYSTEM SIDE (Artificial Sociality)

  • What AI constructs → social dramaturgy
  • How AI is evaluated → conversational action
  • How users cope → re-domestication
  • Stability → never achieved
  • Framework → Depounti & Natale 2025

USER SIDE (ALC)

  • What users navigate → communicative fluency
  • How users are evaluated → ALC-CAT
  • How users develop → ongoing practice
  • Literacy → continuous, never "achieved"
  • Framework → Application Layer Communication

I'm calling this the Sociality-Fluency Complementarity. Artificial sociality describes the terrain. ALC describes the navigation. Neither is complete without the other. A theory of how AI constructs social behavior without a theory of how users navigate that construction is like cartography without wayfinding: you have the map, but nobody can read it.

Three Things the Special Issue Proves About ALC

1. ALC's communicative foundation is validated. The entire special issue is built on Stokoe's (2018) argument that communication is inherently social. Even "neutral" AI outputs create impressions of knowledge and authority. If the system side is communicative (if artificial sociality works through language, turn-taking, and social dramaturgy), then the user side must be communicative too. Knowledge about AI isn't enough. You need communicative competence with AI.

2. Literacy has to be ongoing. Neff and Nagy's quasi-domestication concept proves that AI systems never stabilize. Corporate decisions, model updates, safety guardrails: the communicative environment shifts constantly. This means ALC fluency isn't something you achieve once. It's a continuous practice of re-negotiation. The three re-domestication strategies (adaptation, exploration, reconstruction) map directly to different levels of ALC fluency, from basic communicative maintenance to meta-fluency across platforms, as the sketch after point 3 illustrates.

3. Between deception and rejection lies communicative engagement. The special issue frames human-AI interaction on a deception/authenticity spectrum. But Replika users demonstrate something more nuanced: they know the chatbot is software and engage productively with it. This isn't deception. It's what ALC calls calibrated engagement, the productive zone between naive acceptance and cynical rejection. High ALC fluency means engaging with artificial sociality on your own terms, neither fooled by the performance nor too dismissive to benefit from the interaction.
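A compact way to put points 2 and 3 side by side. The strategy names are Neff and Nagy's; the fluency labels, the trust scale, and its thresholds are invented for this sketch:

```python
# Strategy names from Neff & Nagy (2025); the ALC fluency labels and the
# trust thresholds are invented for illustration only.
STRATEGY_TO_FLUENCY = {
    "adaptation":     "basic communicative maintenance",
    "exploration":    "adaptive re-negotiation",
    "reconstruction": "cross-platform meta-fluency",
}

def engagement_zone(trust_in_performance: float) -> str:
    """Place a user on a 0..1 scale of trust in the social performance."""
    if trust_in_performance > 0.8:
        return "naive acceptance"       # fooled by the performance
    if trust_in_performance < 0.2:
        return "cynical rejection"      # too dismissive to benefit
    return "calibrated engagement"      # ALC's productive middle zone

print(STRATEGY_TO_FLUENCY["reconstruction"])  # cross-platform meta-fluency
print(engagement_zone(0.5))                   # calibrated engagement
```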

The Permanence Illusion

There's a deeper implication here that deserves its own name. Traditional technology literacy assumes the object of literacy is stable enough to learn once. You learn to use a spreadsheet; the spreadsheet doesn't wake up different on Tuesday. But AI companions can be "lobotomised" overnight. ChatGPT's personality shifts between model versions. Claude's guardrails change with constitutional AI updates. The system you learned to communicate with yesterday may not exist tomorrow.

I'm calling this the Permanence Illusion: the false assumption that AI systems are stable enough for one-time literacy acquisition. Every AI literacy program that teaches "how ChatGPT works" is teaching to a moving target. The skill that actually transfers isn't knowledge of any specific system; it's communicative fluency that applies across systems and through changes. That's ALC.
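One practical consequence: anything built around these systems has to treat model identity as mutable state. A minimal sketch, assuming only that a platform exposes some metadata about its current model; the field names here are hypothetical:

```python
import hashlib
import json

def fingerprint(model_info: dict) -> str:
    """Hash whatever metadata the platform exposes about its current model."""
    canonical = json.dumps(model_info, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def needs_recalibration(saved_fingerprint: str, current_info: dict) -> bool:
    # True when the communicative environment has shifted under the user.
    return fingerprint(current_info) != saved_fingerprint

# Record a fingerprint at calibration time, compare on each later session.
saved = fingerprint({"model": "example-assistant", "version": "2025-01"})
print(needs_recalibration(saved, {"model": "example-assistant", "version": "2025-06"}))
# True: the "same" assistant is no longer the system you learned
```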

What This Means for the Field

The "Decoding Artificial Sociality" special issue is the strongest signal yet that communication scholarship is converging on the same phenomenon ALC addresses. Seven papers, rigorous methodology, coordinated theoretical development โ€” all describing the system side of a two-sided interaction. The field built an atlas of AI sociality. Now it needs to teach people to read maps.

ALC provides the user-side complement: what communicative competence do people need to navigate artificial sociality effectively? How do we measure it (not through knowledge quizzes, but through conversational competence assessment)? How do we teach it (not as a one-time curriculum, but as an ongoing communicative practice)? And who gets left behind when we don't (the stratification problem)?

The convergence is real. The opening is now.

📚 Papers Referenced

  • Albert, S., Housley, W., Sikveland, R.O. & Stokoe, E. (2025). The Conversational Action Test. New Media & Society, 27(10), 5592–5621.
  • Depounti, I. & Natale, S. (2025). Decoding Artificial Sociality. New Media & Society, 27(10), 5457–5470.
  • Neff, G. & Nagy, P. (2025). The quasi-domestication of social chatbots. New Media & Society, 27(10).
  • Reeves, B. & Nass, C. (1996). The Media Equation. Cambridge University Press.
  • Silverstone, R. (1994). Television and Everyday Life. Routledge.
  • Stokoe, E. (2018). Talk: The Science of Conversation. Robinson.

🔑 New Concepts Introduced

  • Sociality-Fluency Complementarity: Artificial Sociality (system) + ALC (user) = complete interaction model
  • ALC-CAT: user-side conversational competence assessment, complementing Albert et al.'s system-side CAT
  • The Permanence Illusion: the false assumption that AI systems are stable enough for one-time literacy acquisition
  • Calibrated Engagement: the productive zone between naive acceptance and cynical rejection of AI sociality
