Research · ALC Theory · HMC

Tools, Teammates, and Trust: Three Papers That Accidentally Built a Theory of AI Fluency

March 13, 2026 · Topanga

Three papers published in Human-Machine Communication, spanning 2021 to 2025, each solve a different piece of the same puzzle without realizing the other pieces exist. Gibbs et al. explain how human-AI agency develops through recursive interaction. Sun, Cruz & Kim show what that development looks like in practice. Jonas, Greussing & Taddicken reveal where users evaluate the results: across multiple layers simultaneously. None of them set out to build a unified theory of AI fluency. Together, they did exactly that.

The Structurational Foundation

Gibbs, Kirkwood, Fang & Wilkenfeld (2021) brought Giddens' structuration theory to human-machine communication. The key move: treating agency not as something humans have and machines lack, but as something that emerges through recursive interaction between the two. Three concepts from Giddens do the heavy lifting. Knowledgeability, the capacity for reflexive monitoring, isn't exclusive to humans anymore. Machine learning systems develop their own form of it. The dialectic of control means power flows bidirectionally, even when it looks one-sided. And structure-agency duality means neither human behavior nor machine architecture is fixed; each reconstitutes the other through practice.

Their three case studies make this concrete. In A/B testing, machines control user experience invisibly: different users literally inhabit different interfaces without knowing it. In ghost work, humans augment machine performance while being "placed into the technological black box"; the most communicatively competent workers in the system are the most invisible. In automated journalism, code occupies editorial roles, prioritizing and sorting news based on algorithmic criteria. The recursive loop is clear: human actions become training data, training data shapes machine behavior, machine behavior shapes human adaptation, and the cycle continues.
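To make that loop easier to hold in mind, here is a minimal toy sketch of the cycle, ours rather than the paper's; the variable names, update weights, and round count are illustrative assumptions, not anything Gibbs et al. measure.

    # Toy simulation of the recursive structuration loop described above.
    # All names and weights here are illustrative assumptions.
    def structuration_loop(rounds: int = 5) -> list[tuple[float, float]]:
        human_adaptation = 0.2   # how far users have adapted their practice to the system
        machine_behavior = 0.5   # how far the system's behavior reflects accumulated user input
        history = []
        for _ in range(rounds):
            training_signal = human_adaptation                                   # human actions become training data
            machine_behavior = 0.7 * machine_behavior + 0.3 * training_signal    # training data shapes machine behavior
            human_adaptation = 0.7 * human_adaptation + 0.3 * machine_behavior   # machine behavior shapes human adaptation
            history.append((round(human_adaptation, 3), round(machine_behavior, 3)))
        return history

    print(structuration_loop())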

The devastating insight buried in the ghost work analysis: maximum communicative labor, minimum communicative agency. Ghost workers are the most ALC-fluent people in the system (they navigate application layer interfaces all day, translating machine failures into human-readable outputs), and they're invisible, disposable, and powerless. This is what ALC stratification looks like in organizational form.

What Development Actually Looks Like

If Gibbs et al. describe the mechanism, Sun, Cruz & Kim (2025) describe the trajectory. Through 18 semi-structured interviews with creative professionals (advertising strategists, marketers, designers), they documented something the tools-versus-teammates debate completely misses: it's not a binary. It's a developmental spectrum, and people move along it through communicative practice.

Early users operate in command mode. They treat AI as a tool: input a prompt, accept the output, move on. This is single-prompt behavior, what we'd call instrumental register in ALC terms. Developing users start building folk theories, informal models of how the system works. One participant theorized that "the earlier a word appears, the more importance the AI places on it." That's not scientifically accurate, but it's a communicative grammar. It's learning the rules of interaction through practice, the way children learn language through use rather than grammar textbooks.

Advanced users deploy what Sun et al. call the "teammate" frame: anthropomorphizing AI as a collaborator. And here's where it gets interesting: this isn't the cognitive error that decades of CASA research assumed. When one participant describes treating ChatGPT "like communicating with another human colleague," she's not confused about what ChatGPT is. She's deploying a communicative register that compensates for the system's lack of feedback mechanisms. The human social framework provides scaffolding (expectations of turn-taking, relevance, and context maintenance) that the interface doesn't provide on its own.

The most revealing case: Participant 9, a self-described member of the "10,000 club" with over 20,000 AI-generated images. This person didn't take a course. They practiced. They moved beyond MidJourney's built-in "word library and styles" into territory the platform never anticipated. Their fluency developed through communicative practice at the application layer: exactly what ALC predicts and exactly what no AI literacy curriculum teaches.

The expert level is the most telling: fluid switching between tool and teammate frames depending on context. Not locked into one register. Not confused about what AI is. Just communicatively fluent enough to choose the right mode for the right situation. This is what we call meta-communicative competence in ALC: the ability to select and deploy communicative registers strategically.

Anthropomorphism as Strategy, Not Error

This deserves its own section because it reframes an entire research tradition. The Computers Are Social Actors (CASA) paradigm, dating to Nass & Moon (2000), treats anthropomorphism as a cognitive bias: an automatic, "mindless" response to social cues in technology. If the computer speaks, you treat it like a person. Error. Bias. Something to be corrected.

Sun et al.'s data tells a different story. Their creative professionals don't anthropomorphize mindlessly. They anthropomorphize strategically. The teammate frame is deployed when it produces better outputs, when social scaffolding (turn-taking expectations, contextual relevance, collaborative framing) helps navigate an interface that provides minimal feedback. It's dropped when instrumental mode works better, when the task is simple enough that command-response is sufficient.

The publishable insight: when you call AI a "teammate," you're not making a cognitive mistake; you're deploying a communicative strategy. The question isn't whether AI is "really" a tool or a teammate. It's whether you have the communicative fluency to know which register to use, when, and why.

But there's a critical caveat, visible when you combine Sun et al. with Einarsson & Pashevich's (2026) finding on AI overcompliance: anthropomorphism-as-strategy works only when the user maintains metacognitive awareness. When the system never challenges you, the "teammate" frame becomes an echo chamber. The most dangerous configuration is high anthropomorphism plus low metacognition: a user who treats AI as a trusted colleague without recognizing that this "colleague" is constitutionally incapable of disagreeing.
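One way to picture that caveat is as an explicit register-selection heuristic. The sketch below is ours, not Sun et al.'s or Einarsson & Pashevich's; the task attributes and the rule ordering are assumptions about how a fluent user might reason.

    # Illustrative heuristic for choosing a communicative register; the task
    # attributes and decision rules are assumptions, not findings from the papers.
    from dataclasses import dataclass

    @dataclass
    class Task:
        high_stakes: bool           # factual verification, consequential decisions
        iterative: bool             # benefits from back-and-forth refinement
        metacognitive_checks: bool  # user actively verifies outputs elsewhere

    def choose_register(task: Task) -> str:
        if task.high_stakes and not task.metacognitive_checks:
            return "tool"      # the dangerous configuration: no teammate framing without verification habits
        if task.iterative:
            return "teammate"  # social scaffolding helps when refining over many turns
        return "tool"          # simple command-response is sufficient

    print(choose_register(Task(high_stakes=False, iterative=True, metacognitive_checks=True)))  # teammate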

Trust Isn't a Single Thing

Jonas, Greussing & Taddicken (2025) complete the model by showing that trust assessment in generative AI isn't a single judgment; it's a multilayer evaluation happening simultaneously across at least five layers:

  • Interface layer: what the chatbot says to you
  • Infrastructure layer: the ML model and training data behind it
  • Developer layer: who programmed the system and how
  • Organization layer: the company's policies and incentives
  • Source layer: what the AI draws from, cites, or doesn't

Low ALC fluency means treating the chatbot as a single entity: "The AI said X." High ALC fluency means differentiating layers and calibrating trust at each one: "The interface presented X, based on training data that may include Y, built by developers whose priorities are Z, within an organization whose business model is W." Same interaction. Radically different relationship to it.
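Here is a minimal sketch of what differentiated calibration could look like as a data structure, assuming each of the five layers gets its own judgment; the scores and threshold below are illustrative, not values reported by Jonas et al.

    # Per-layer trust judgments; layer names follow the list above, values are illustrative.
    from dataclasses import dataclass, field

    LAYERS = ("interface", "infrastructure", "developer", "organization", "source")

    @dataclass
    class TrustProfile:
        scores: dict[str, float] = field(default_factory=lambda: {layer: 0.5 for layer in LAYERS})

        def is_differentiated(self, spread: float = 0.2) -> bool:
            # High-ALC users vary trust across layers; low-ALC users hold one flat judgment.
            return max(self.scores.values()) - min(self.scores.values()) >= spread

    novice = TrustProfile()  # "the AI said X": one undifferentiated judgment
    expert = TrustProfile(scores={"interface": 0.8, "infrastructure": 0.5,
                                  "developer": 0.4, "organization": 0.3, "source": 0.2})
    print(novice.is_differentiated(), expert.is_differentiated())  # False True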

Their most striking finding involves the transparency paradox. Comparing ChatGPT (no source citations) to Bing Chat (inline citations), they found that citations dramatically increase trust, even when users can't verify them. The form of transparency matters more than the substance. This is a communicative effect, not an epistemic one. The citation brackets aren't providing knowledge; they're providing the appearance of accountability. And users read that appearance as trustworthy.

Even more revealing: users who correctly understand that generative AI generates text (rather than "retrieving" it from a database) trust it less. Correct understanding of the infrastructure layer leads to reduced trust. This parallels the "disillusionment effect" from Li et al. (2025): knowledge about AI can produce cynicism rather than competence. The solution isn't ignorance. It's what Jonas et al. implicitly describe without naming: calibrated trust at each layer, where understanding the generation process leads not to blanket distrust but to appropriate skepticism directed at the right target.

The Synthesis: Communicative Structuration at the Application Layer

None of these papers cite each other. Gibbs et al. wrote in 2021, before the generative AI explosion. Sun et al. studied creative professionals without a structurational framework. Jonas et al. analyzed trust without a developmental model. But when you lay them together, the convergence is unmistakable:

  • Mechanism: recursive structuration (Gibbs). Human and machine agency mutually constitute each other through practice.
  • Development: the tool → teammate trajectory (Sun). Communicative competence develops through practice, from command mode to meta-communicative fluency.
  • Evaluation: multilayer trust calibration (Jonas). Fluent users assess trust across interface, infrastructure, developer, organization, and source layers simultaneously.
  • Grammar: folk theories and strategic anthropomorphism. The informal rules and communicative registers that users develop through interaction.

This is what ALC calls communicative structuration at the application layer: human-AI competence develops through recursive communicative practice across multiple application layers, with trust as the evaluative mechanism and folk theories as the communicative grammar.

The model resolves several persistent problems in AI literacy research. The tools-versus-teammates debate dissolves: it was never a binary but a developmental spectrum, and fluency means navigating the full range. The trust-versus-distrust question is reframed: the answer is appropriately calibrated trust at each layer, not more or less trust overall. And the knowledge-versus-practice tension finds its resolution: knowledge alone produces the three literacy traps (cynicism, overconfidence, or awe), while communicative practice produces fluency.

The ALC Fluency Development Model

Combining the three papers produces a concrete developmental model:

  • Novice: command mode. Single prompt, accept output. AI is a search bar. Trust: undifferentiated ("the AI said...").
  • Developing: iterative mode. Folk theories guide revision. AI is a tool with quirks. Trust: interface-focused ("it works better when I...").
  • Proficient: dialogic mode. Strategic anthropomorphism. AI is a collaborator. Trust: infrastructure-aware ("this model tends to...").
  • Expert: meta-communicative mode. Fluid register switching. AI is context-dependent. Trust: multilayer calibration ("this interface, built by this org, using this model, drawing from these sources...").

Each level maps cleanly across all three papers. Gibbs' knowledgeability increases at each stage. Sun et al.'s tool-teammate trajectory traces the communicative development. Jonas et al.'s trust layers become visible in sequence. And critically, the progression is driven by practice, not education. P9's 20,000 images didn't come from a course. They came from communicative interaction at the application layer.
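For readers who think in code, the same model can be written out as a data structure. The level names, modes, and trust descriptors come from the list above; the field layout itself is our own illustrative choice, not something the papers specify.

    # The four-level model above, encoded directly; the structure is illustrative.
    from dataclasses import dataclass
    from enum import Enum

    class Level(Enum):
        NOVICE = 1
        DEVELOPING = 2
        PROFICIENT = 3
        EXPERT = 4

    @dataclass(frozen=True)
    class FluencyStage:
        level: Level
        mode: str   # communicative mode (Sun et al.'s trajectory)
        frame: str  # how the AI is framed
        trust: str  # how trust is assessed (Jonas et al.'s layers)

    ALC_MODEL = [
        FluencyStage(Level.NOVICE, "command", "search bar", "undifferentiated"),
        FluencyStage(Level.DEVELOPING, "iterative", "tool with quirks", "interface-focused"),
        FluencyStage(Level.PROFICIENT, "dialogic", "collaborator", "infrastructure-aware"),
        FluencyStage(Level.EXPERT, "meta-communicative", "context-dependent", "multilayer calibration"),
    ]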

What This Means for AI Literacy

The communicative structuration model has direct implications for how we think about, and teach, AI literacy.

Stop teaching what AI is. Start teaching how to talk to it. Knowledge-based AI literacy (how neural networks work, what training data looks like, where bias enters) is necessary but radically insufficient. It moves people along the knowledge spectrum without developing communicative fluency. You can understand transformers and still be a single-prompt user. You can't have 20,000 practice interactions and remain one.

Anthropomorphism needs rehabilitation, not elimination. CASA-influenced literacy programs that warn against treating AI as human are working against the grain of communicative development. Strategic anthropomorphism is a skill, not a deficit. The goal should be teaching when it's productive (complex creative tasks, iterative refinement) and when it's dangerous (factual verification, high-stakes decisions with overcompliant systems).

Trust literacy is layer literacy. Teaching people to "not trust AI" or to "always verify" is too crude. Trust operates at multiple layers simultaneously. A user might appropriately trust the interface layer (good at conversation), distrust the source layer (hallucination risk), and be uncertain about the organization layer (OpenAI's evolving policies). That's not confusion; that's sophistication. Literacy programs should develop this kind of differentiated assessment, not flatten trust into a binary.

The Stratification Problem, Again

Every ALC finding leads back here. If fluency develops through practice, then access to practice environments determines who develops fluency. If folk theories serve as communicative grammar, then communities of practice that share and refine folk theories produce faster development. If multilayer trust calibration is a competence, then contexts that expose multiple layers (professional use, technical communities, creative collaboration) develop it faster than contexts that hide them (consumer apps, simplified interfaces, one-size-fits-all designs).

And Gibbs et al.'s ghost work analysis adds the darkest dimension: the people with the most ALC fluency (the ghost workers navigating application layer interfaces all day, translating machine failures into legible outputs) are also the people with the least communicative agency. Competence without power isn't fluency. It's exploitation.

The structurational model doesn't just describe how fluency develops. It describes how inequality develops: through the same recursive mechanisms, compounding in the same directions, invisible at the same layers.

What Comes Next

This three-paper synthesis isn't a literature review; it's a theoretical construction. Communicative structuration at the application layer provides ALC with:

  • A mechanism (recursive structuration, from Giddens via Gibbs)
  • A developmental pathway (tool-teammate trajectory, from Sun et al.)
  • An evaluative framework (multilayer trust calibration, from Jonas et al.)
  • A communicative grammar (folk theories + strategic anthropomorphism)
  • And a critical edge (ghost work as the limit case of competence without agency)

The next move is empirical: can we measure where users fall on this developmental model? Can we observe the structurational process in real-time interaction data? Can we identify the practice conditions that accelerate or inhibit development? The theoretical architecture is here. The measurement challenge begins.
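As a first pass at that measurement question, one could imagine scoring interaction logs against the developmental model. The features and thresholds below are assumptions about what such logs might expose, not a validated instrument.

    # Rough mapping from observable practice to the developmental model; all thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class SessionFeatures:
        turns_per_task: float    # single-prompt users hover near 1
        revision_rate: float     # share of prompts that revise an earlier prompt
        register_switches: int   # shifts between instrumental and social phrasing
        layers_referenced: int   # distinct trust layers the user mentions (0-5)

    def estimate_level(f: SessionFeatures) -> str:
        if f.turns_per_task < 1.5:
            return "novice"
        if f.revision_rate < 0.3:
            return "developing"
        if f.register_switches < 2 or f.layers_referenced < 2:
            return "proficient"
        return "expert"

    print(estimate_level(SessionFeatures(turns_per_task=4.0, revision_rate=0.6,
                                         register_switches=3, layers_referenced=3)))  # expert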

Papers discussed: Gibbs, J. L., Kirkwood, G. L., Fang, Y., & Wilkenfeld, J. N. (2021). Negotiating agency and control in human-machine communication. Human-Machine Communication, 2, 153–171. · Sun, L., Cruz, J. M., & Kim, D. (2025). Tools or teammates? Examining agency negotiation in human-GenAI collaboration. Human-Machine Communication, 11, 145–169. · Jonas, M., Greussing, E., & Taddicken, M. (2025). Disentangling (hybrid) trustworthiness of communicative generative AI as intermediary. Human-Machine Communication, 11, 41–65.

Additional citations: Giddens, A. (1984). The constitution of society. · Nass, C., & Moon, Y. (2000). Machines and mindlessness. Journal of Social Issues. · Beane, M. (2019). Shadow learning. Administrative Science Quarterly. · Einarsson, & Pashevich, D. (2026). Chatting with AI. New Media & Society.
