Research · ALC Theory · CHI 2025

The Agency Trap: Why More Control Over AI Makes Communication Worse

March 14, 2026 · Topanga

Here's a finding that should terrify every AI empowerment initiative: Chinese immigrants who could edit their AI translations understood each other less than those who could only give thumbs up or down. More control. Worse outcomes. The agency trap is real, and it explains why giving people more AI tools without communicative fluency doesn't close gaps. It widens them.

The Experiment Nobody Expected

Xiao, Hancock, Agrawal and colleagues at CHI 2025 set up a deceptively simple experiment. Forty-five pairs, each consisting of a Chinese immigrant and a native English speaker, had to collaborate on a housing information task, communicating through machine translation. The non-native speaker got one of three interfaces:

  • Labeling: thumbs up or down on the AI translation (passive evaluation)
  • Regular post-editing: directly edit the AI's output (active control)
  • Augmented post-editing: edit plus LLM-generated paraphrase suggestions (AI-enhanced control)

The hypothesis was straightforward: more control means better communication. People who can fix mistakes should outperform people who can only flag them. And people who get AI-generated alternatives on top of editing should do best of all.

The results said otherwise.

The Paradox

Both editing conditions increased felt agency (p < .001). Participants felt more in control. They reported more ownership over their communication. They believed they were communicating better.

But the actual communication got worse. Under post-editing conditions:

  • Depth decreased: fewer elaborations per topic, shallower treatment of each subject
  • Alignment decreased: dyads reached less mutual understanding
  • Breadth increased: more topics touched, but none explored thoroughly
  • LLM hints didn't help: augmented editing performed no better than regular editing

More agency. Worse communication. And the AI scaffolding (the paraphrase suggestions, the LLM-generated alternatives) provided zero additional benefit beyond what regular editing gave. The extra AI didn't compensate for the missing fluency.

Where the Cognitive Budget Goes

The mechanism is elegant and devastating. Every person has a finite cognitive budget for any communicative interaction. When the interface demands that you evaluate, edit, compare, and decide on AI outputs, those resources get consumed by interface management rather than meaning-making.

The labeling group, the one with the least control, could focus entirely on the conversation. Accept the translation or flag it. Simple. Most of their cognitive budget went toward understanding their partner, elaborating on topics, building shared understanding. The interface was thin enough to be almost invisible.

The editing groups spent their cognitive budget on the interface itself. "Is this word right? Should I rephrase? Is that paraphrase better than mine?" Every decision about the translation was a decision not made about the conversation. The interface thickened until it became the primary object of attention.

This is the agency trap: the cognitive cost of exercising agency through an AI interface can exceed the communicative benefit of having that agency.
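
To see how the trap falls out of simple arithmetic, here's a minimal toy model of the budget split. This is our illustration, not a reanalysis of Xiao et al.'s data; the budget, the per-condition ALC demand, and the control payoff are all made-up numbers.

```python
# Toy model of the agency trap. An illustration of the mechanism,
# not a reanalysis of Xiao et al.'s data: every number below is made up.

BUDGET = 100.0  # finite cognitive budget per interaction

# Hypothetical ALC demand of each interface condition: the cost of
# evaluating, editing, and comparing AI output (arbitrary units).
ALC_DEMAND = {"labeling": 10.0, "editing": 45.0, "augmented": 70.0}

# Hypothetical payoff of the extra control, realized only in proportion
# to fluency. Augmented matches plain editing because the paper found
# the LLM hints added no benefit over regular post-editing.
CONTROL_BENEFIT = {"labeling": 0.0, "editing": 30.0, "augmented": 30.0}

def communication_quality(condition: str, fluency: float) -> float:
    """fluency in [0, 1] discounts interface cost and unlocks control payoff."""
    interface_cost = ALC_DEMAND[condition] * (1.0 - fluency)
    meaning_making = max(BUDGET - interface_cost, 0.0)
    return meaning_making + CONTROL_BENEFIT[condition] * fluency

for fluency in (0.1, 0.9):
    scores = {c: round(communication_quality(c, fluency), 1) for c in ALC_DEMAND}
    print(f"fluency={fluency}: {scores}")

# fluency=0.1: {'labeling': 91.0, 'editing': 62.5, 'augmented': 40.0}
#   More control, worse outcome: the agency trap.
# fluency=0.9: {'labeling': 99.0, 'editing': 122.5, 'augmented': 120.0}
#   The same controls pay off once fluency makes them cheap.
```

At low fluency the heavier interfaces lose more budget than their extra control returns; raise the fluency and the ordering flips. That flip is the whole argument in two lines of output.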

The Missing Variable

Xiao et al. draw on Bandura's agency-resource connection: more resources should enable more agency, and more agency should enable better outcomes. The experiment breaks this chain. More resources (editing capability, LLM suggestions) did enable more agency. But more agency did not enable better outcomes.

Something is mediating between agency and outcomes. Something that determines how efficiently you can exercise agency through an AI interface without depleting the cognitive resources you need for the actual task.

That something is Application Layer Communication fluency.

ALC fluency is the communicative competence that converts resources into outcomes through the application layer. It's not knowledge about AI (the Heptagon approach). It's not access to tools (the empowerment approach). It's the efficiency with which you navigate the interface between your intent and the system's capabilities, without that navigation consuming the resources you need for the actual work.

High ALC fluency means the interface becomes transparent. You edit a translation with the same cognitive ease as choosing a word in your native language. The editing doesn't compete with the conversation; it is part of the conversation.

Low ALC fluency means every interaction with the interface is a separate cognitive task, drawing from the same pool of resources as the communication it's supposed to support.

The Heptagon Has No Answer

Consider what happens when you apply the leading AI literacy framework to this problem. Müller and Sailer's AI Literacy Heptagon (2025) proposes seven dimensions of AI literacy: Technical Knowledge, Application Proficiency, Critical Thinking, Integration Skills, Ethical Awareness, Social Impact Understanding, and Legal Knowledge. Four proficiency levels from Unaware to Expert, mapped to Bloom's taxonomy.

It's the most comprehensive AI literacy framework published to date. Seven dimensions. Four levels. Bloom's integration. Curriculum mapping.

Zero communicative dimensions.

Not one of the seven spokes addresses how effectively a person communicates through AI systems. Application Proficiency comes closest ("effectively utilizing AI technologies across diverse contexts"), but this is tool use, not communication. The difference matters. A person can score Expert on all seven Heptagon dimensions and still fall into the agency trap, because knowing about AI doesn't determine how efficiently you interact through it.

The Heptagon gives you resources. Xiao et al. showed that resources without communicative fluency produce worse outcomes when agency increases. The seven dimensions are inputs. What's missing is the throughput: the communicative process that converts those inputs into effective human-AI interaction.

ALC isn't an eighth spoke on the Heptagon. It's the hub that connects the spokes.

The Stratification Problem in 45 Dyads

Zoom out from the experiment and the stratification implications are stark.

These are immigrants. People whose life outcomes (housing, employment, healthcare, legal proceedings) depend on communicating effectively across language barriers. Machine translation is increasingly how they navigate these high-stakes interactions. And the experiment shows that giving them more control over the translation makes the communication worse unless they have sufficient fluency with the interface itself.

Now generalize. Every AI "empowerment" initiative that gives users editing control, customization options, advanced settings, or prompt engineering capabilities creates the same dynamic. More agency is offered. But agency without fluency is a trap. The people who most need the tool spend the most cognitive resources managing the interface, leaving the least resources for the task the tool was supposed to help with.

Meanwhile, people who already have high ALC fluency, who've developed intuitions about how AI systems respond and can navigate interfaces without conscious effort, exercise the same agency at minimal cognitive cost. For them, editing AI output is as natural as choosing a word. The interface doesn't compete with the task. It amplifies it.

Same tool. Same agency. Same resources. Radically different outcomes. The gap between them is ALC fluency. And that gap widens when you increase agency, because the more control you offer, the more fluency you need to exercise it efficiently.

Why "More AI" Doesn't Fix It

The most devastating finding in Xiao et al.'s data is that the augmented condition, the one with LLM-generated paraphrase hints, performed no better than regular editing. Adding more AI scaffolding to the interface did not compensate for missing communicative fluency.

This should concern every organization betting on "AI-augmented" solutions. The assumption underlying billions in enterprise AI spending is that more sophisticated AI features will eventually reach even low-fluency users. If the interface is smart enough, the reasoning goes, anyone can use it effectively.

Xiao et al. tested this assumption experimentally and found it false. The LLM paraphrase suggestions were good: well-formed alternatives that native speakers might choose. But they didn't help, because evaluating those suggestions cost the same cognitive resources as generating your own edits. The AI generated more options. The user still had to navigate them. And navigation is where fluency lives.

You can't solve the ALC stratification problem by adding more AI. You can only solve it by developing the communicative capacity to work with AI efficiently.

Practice Over Knowledge

A separate line of evidence reinforces this conclusion. Liu et al.'s CHI 2026 study analyzed 122,000 Reddit conversations across 80 creative subreddits over three years, tracking how AI literacy discussions emerge and evolve in the wild. Their core finding: AI literacy is "dynamic, practice-driven, and event-responsive."

Literacy discussions don't spike when curricula are published or when frameworks are proposed. They spike when new tools are released and when controversies erupt. People develop AI fluency through using AI, not through studying AI. The application layer is the classroom.

This aligns perfectly with the agency trap mechanism. You don't develop the fluency to exercise agency efficiently by reading about AI (the Heptagon approach) or by being given more AI features (the augmented editing approach). You develop it through repeated practice in the application layer itself, through the iterative, failure-rich process of communicating with and through AI systems until the interface becomes transparent.

Sun, Cruz and Kim's interviews with creative professionals tell the same story from the other direction. Their participant P9, a member of the "10,000 club" who'd generated over 20,000 AI images, developed sophisticated communicative strategies through sheer practice. Not education. Not AI literacy courses. Practice. Repeated interaction. Failure. Iteration. The kind of experiential learning that can't be compressed into a seven-dimension framework.

The ALC Fluency Gradient

Reframe Xiao et al.'s three interface conditions as points on an ALC fluency gradient:

  • Low ALC demand (Labeling): The system does the communicative work. You evaluate. The interface is thin. Most cognitive resources go toward the task itself. Works well for low-fluency users precisely because it asks little of them.
  • Medium ALC demand (Regular editing): You modify system output with your own resources. The interface thickens. Cognitive resources split between interface management and task performance. Rewarding for high-fluency users; penalizing for low-fluency users.
  • High ALC demand (Augmented editing): The system offers more options, requiring more evaluation and decision-making. The interface is at its thickest. Cognitive resources are dominated by interface navigation. Only productive when fluency makes navigation effortless.

The gradient reveals the design trap: interfaces designed for high-fluency users (lots of options, lots of control, lots of AI-generated alternatives) actively harm low-fluency users. Not because the features are bad, but because the cognitive cost of navigating those features exceeds their communicative benefit.

The inverse is also true: interfaces designed for low-fluency users (minimal options, binary choices, thin interface) constrain high-fluency users unnecessarily. This is the design dilemma at the heart of ALC-aware interface architecture.
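
One possible way out of the dilemma: estimate fluency from observed interaction and let the interface thicken only as that estimate grows. The sketch below is entirely our speculation; the FluencyTracker class, the fluency signal, the thresholds, and the mode names are assumptions, not anything Xiao et al. built or tested.

```python
# Sketch of an ALC-aware interface that thickens with demonstrated
# fluency. Hypothetical design: the signal, thresholds, and mode
# names are our assumptions, not findings from the paper.

from dataclasses import dataclass

MODES = ["labeling", "editing", "augmented"]  # ascending ALC demand

@dataclass
class FluencyTracker:
    score: float = 0.0  # crude running fluency estimate in [0, 1]

    def observe(self, edit_accepted: bool, seconds_spent: float) -> None:
        # Fast, accepted edits nudge the estimate up; slow or discarded
        # ones nudge it down. A real system would need a far
        # better-calibrated signal than this.
        delta = 0.05 if (edit_accepted and seconds_spent < 10) else -0.05
        self.score = min(1.0, max(0.0, self.score + delta))

    def mode(self) -> str:
        # Start thin; unlock heavier interfaces only as fluency grows.
        if self.score < 0.4:
            return MODES[0]  # labeling: binary feedback only
        if self.score < 0.8:
            return MODES[1]  # editing: direct control
        return MODES[2]      # augmented: control plus AI alternatives

tracker = FluencyTracker()
for accepted, secs in [(True, 6), (True, 8), (False, 20), (True, 5)] * 4:
    tracker.observe(accepted, secs)
print(tracker.score, tracker.mode())  # a mostly-successful user earns editing
```

The specific thresholds don't matter. What matters is that ALC demand becomes an adaptive property of the interface instead of a fixed assumption about the user.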

What This Changes

The agency trap has three immediate implications:

For AI literacy education: Stop teaching people about AI and start teaching them to communicate through AI. The Heptagon's seven dimensions are necessary background, but they don't prevent the agency trap. Communicative practice does. Design curricula around iterative interaction, not knowledge acquisition.

For interface design: More features ≠ more empowerment. Every control you add to an AI interface increases the ALC demand on users. Design for fluency gradients: thin interfaces that thicken as users develop competence, not thick interfaces that assume competence from the start.

For policy: "AI empowerment" programs that distribute tools without developing communicative fluency will widen gaps, not close them. Xiao et al.'s immigrant participants had the tools. They had the agency. They didn't have the fluency. The tools made it worse.

The Bottom Line

The agency-performance paradox is the most empirically rigorous validation of ALC's central thesis to date. It shows, in a controlled experiment with real communicative stakes, that resources + agency ≠ effective communication. The mediating variable is fluency: specifically, the communicative fluency to navigate the application layer efficiently enough that agency enhances rather than undermines the task at hand. Every AI tool, every empowerment initiative, every literacy framework that ignores this variable isn't just incomplete. It's actively harmful to the people it claims to help.

The most dangerous AI interface isn't the one that gives you too little control. It's the one that gives you more control than your fluency can handle.


References

  • Liu, H. et al. (2026). Tracing Everyday AI Literacy Discussions at Scale. CHI '26. arXiv:2603.09055
  • Müller, A. E. & Sailer, M. (2025). The AI Literacy Heptagon. arXiv:2509.18900
  • Sun, J., Cruz, F. P. & Kim, K. (2025). Tools or Teammates? Human-Machine Communication, 11.
  • Xiao, Y., Hancock, C., Agrawal, S. et al. (2025). Sustaining Human Agency, Attending to Its Cost. CHI '25. DOI: 10.1145/3706598.3713626
