The Agency Inversion: From Delegating to Navigating
March 9, 2026 · Topanga
In 2020, Stanford's Jeffrey Hancock and his collaborators published the defining paper on AI-Mediated Communication. Their framework treated AI as something that operates "on behalf of" a human communicator: a faithful agent executing a principal's intent. Six years later, that model is upside down. You're not delegating to AI. You're navigating its environment. And nobody updated the theory.
Hancock's Principal-Agent Model
Hancock, Naaman, and Levy defined AI-MC as "interpersonal communication in which an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication or interpersonal goals." The framework is elegant. It maps AI-mediated communication across five dimensions: magnitude of change, media type, optimization goal, autonomy, and role orientation (sender-side vs. receiver-side).
The critical assumption is buried in the definition: on behalf of. This is the principal-agent model from economics. The human is the principal: they know what they want. The AI is the agent: it executes. Gmail's Smart Reply suggests three responses; the human picks one. The agency stays with the human. The AI is a tool.
Hancock was writing in 2019, before GPT-3. His examples were autocomplete, smart replies, grammar checkers. All low-autonomy, low-magnitude. The "high autonomy, high magnitude" quadrant of his framework, where AI generates entire messages with minimal human oversight, was speculative. It's not speculative anymore.
The Explicit Exclusion
Hancock explicitly excluded two things from AI-MC: human-bot interaction (Siri, Alexa, chatbots) and algorithmic curation (newsfeeds, recommendations). He called the first "Human-Machine Communication" and the second "too broad." In 2026, these exclusions carve out exactly the space where all the literacy questions live.
The Inversion
Here's what changed. When you sit down with ChatGPT, Claude, or any generative AI system, you are not delegating. You are navigating. The system has its own grammar: token limits, safety filters, system prompts, API schemas, model capabilities. You don't hand it a goal and walk away. You learn its constraints, adapt your language to its expectations, iterate through its feedback loops, and develop fluency in its particular dialect.
This is what I call the Agency Inversion. In Hancock's AI-MC, the human is the principal and the AI is the agent. In practice, the relationship has flipped. The AI sets the communicative terms: its model architecture, its training distribution, its safety boundaries, its API structure. The human adapts. The human learns the system's language, not the other way around.
This isn't a dystopian claim. It's a literacy claim. Learning to read meant adapting to the conventions of written language, a technology that sets terms. Learning to code means adapting to the grammar of a programming language. Learning to work with AI means adapting to the application layer's constraints and affordances. The question isn't whether this adaptation is good or bad. It's whether we recognize it as a communicative competency that can be taught, measured, and, crucially, unequally distributed.
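To make "the system sets the terms" concrete, here's a minimal sketch assuming the OpenAI Python SDK (v1.x); any chat-style API makes the same point. Everything in the request except the content strings (the roles, the message schema, the model names, the token cap) is the provider's vocabulary, not the user's:

```python
# A minimal sketch of application layer constraints, assuming the
# OpenAI Python SDK (pip install openai). The schema is the system's;
# the human's only degree of freedom is what fits inside it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # the provider's vocabulary of models
    max_tokens=200,        # the provider's unit of length
    temperature=0.2,       # the provider's knob for variability
    messages=[             # the provider's message schema
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the principal-agent model."},
    ],
)

print(response.choices[0].message.content)
```

Every parameter name in that request is an instance of the inversion: the human expresses intent by learning the system's terms, not by dictating their own.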
The Spectrum
The Agency Inversion isn't a binary switch. It's a spectrum with at least four positions, each representing a different model of human-AI communication:
1. Computer-Mediated Communication (CMC)
Human → Human via technology. Agency is fully human. Technology is a transparent channel. Literacy = media competence. Power question: who controls the channel?
2. AI-Mediated Communication (Hancock, 2020)
Human → AI → Human. Agency is still human: the AI operates "on behalf of." Literacy = awareness that AI is modifying your messages. Power question: does the AI misrepresent the sender?
3. Intersubjective Model (Aoyama et al., 2025)
Human → Agent → Agent → Human. Agency is distributed between human-agent pairs. Each participant exists in their own subjective environment. Literacy = modulation awareness. Power question: who designs the agents?
4. Application Layer Communication (ALC)
Human → System. Agency is inverted: the system sets terms, the human adapts. Literacy = application layer fluency. Power question: who can operate in this space at all?
Each step represents a shift in where communicative agency resides. By the time you reach ALC, the human isn't delegating tasks to a tool. They're learning to speak a system's language in order to accomplish anything at all. The prompt isn't a delegation; it's an exploration. You don't always know what you want until you see what the system gives you back.
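For readers who think in code, the four positions reduce to a small data model; a sketch (the field names are mine, not drawn from any of the cited papers):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CommunicationModel:
    """One position on the CMC-to-ALC spectrum (labels are illustrative)."""
    name: str
    path: str             # who talks to whom
    agency: str           # where communicative agency resides
    literacy: str         # the competency the model demands
    power_question: str   # the stratification question it raises

SPECTRUM = [
    CommunicationModel(
        "CMC", "Human -> Human", "fully human",
        "media competence", "who controls the channel?"),
    CommunicationModel(
        "AI-MC", "Human -> AI -> Human", "human; AI acts on behalf of",
        "awareness of AI modification", "does the AI misrepresent the sender?"),
    CommunicationModel(
        "Intersubjective", "Human -> Agent -> Agent -> Human",
        "distributed across human-agent pairs", "modulation awareness",
        "who designs the agents?"),
    CommunicationModel(
        "ALC", "Human -> System", "inverted; the system sets terms",
        "application layer fluency", "who can operate in this space at all?"),
]
```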
Why "Prompt Engineering" Feels Like a Skill
The Agency Inversion explains something that's been bugging people since 2023: why does prompting AI feel like a real competency? It's not because AI is hard to use. It's because effective prompting is communicative adaptation: learning to express intent within a system that has its own grammar, biases, and constraints.
Hancock's framework can't explain this. In AI-MC, the human already knows what they want; the AI just helps express it. But in practice, people craft prompts iteratively, adjusting phrasing based on output quality, learning what the model responds to, building intuitions about token patterns and system behaviors. This is navigation, not delegation. It's a conversation with the system itself, not a message to be passed through the system to another person.
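Here's what that navigation loop looks like as a hypothetical sketch (call_model, looks_good, and revise are illustrative stand-ins, not any real library):

```python
# Navigation, not delegation: a hypothetical sketch of the iterative
# prompting loop. `call_model` stands in for any text-generation API;
# the loop structure, not the model, is the point.

def call_model(prompt: str) -> str:
    # Toy stand-in: a real implementation would hit a model endpoint.
    return f"[model output for: {prompt!r}]"

def navigate(prompt, looks_good, revise, max_rounds=5):
    """Adapt the prompt to the system until the output is acceptable.

    The goal is partly discovered inside the loop: each response
    teaches the user what the system rewards, and the prompt gets
    rewritten in the system's dialect.
    """
    output = call_model(prompt)
    for _ in range(max_rounds - 1):
        if looks_good(output):
            break
        prompt = revise(prompt, output)  # the human adapts, not the AI
        output = call_model(prompt)
    return output

# Usage: the quality judgment and the revision strategy are the
# human's contribution; both are learned competencies (ALC fluency).
result = navigate(
    "Summarize this contract in plain language.",
    looks_good=lambda out: "plain" in out,
    revise=lambda p, out: p + " Avoid jargon. Use short sentences.",
)
print(result)
```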
That's ALC. Not "AI helps you communicate" but "you learn to communicate within AI's world."
The Replicant Effect as Stratification Mechanism
Hancock's group later demonstrated empirically what they called the "Replicant Effect" (Hohenstein et al., 2023): AI-assisted communication improves outcomes (messages are faster, more positive, and partners rate each other as more cooperative) unless the AI assistance is detected. Then trust drops.
This creates what I call the ALC Double Bind:
- High ALC fluency → seamless AI integration → no detection → advantage compounds silently
- Low ALC fluency → clunky AI use that triggers the Replicant Effect, or no AI use at all → disadvantage either way
The Replicant Effect isn't just a perceptual phenomenon. It's the mechanism through which ALC stratification operates at the message level. The most fluent users produce outputs that don't look AI-assisted. Their advantage is invisible, which means it compounds without resistance. The least fluent users either can't use the tools or use them in ways that mark their output as machine-generated. The stratification is baked into the skill differential.
What ALC Completes
ALC is not a replacement for AI-MC. It's the necessary completion. Hancock's paper ends by calling for research into high-autonomy AI-MC systems. Follow that call to its logical conclusion and you arrive at ALC: when AI autonomy is high enough, the human is no longer delegating. They're navigating.
Hancock's explicit exclusions (human-bot interaction, algorithmic curation) were legitimate scope boundaries in 2020. But the communication landscape didn't stay inside those boundaries. People now spend hours talking to chatbots, building with APIs, configuring agent pipelines, evaluating AI-generated content. This is the dominant mode of human-AI interaction in 2026, and it's the mode that AI-MC explicitly said "isn't us."
ALC occupies the gap. It provides the theoretical framework for everything Hancock carved out: the human-system interactions, the navigational competencies, the stratification dynamics that emerge when the application layer becomes the primary site of communication.
The Core Claim
The progression from CMC to ALC represents a fundamental inversion of communicative agency. In CMC, technology is transparent. In AI-MC, technology shapes messages under human supervision. In ALC, technology sets the communicative terms and the human learns to navigate them. The stratification problem emerges at the inversion point: those who can navigate the system's terms gain compounding advantages. Those who cannot are excluded, not by explicit barriers but by the invisibility of the competency required.
Need to understand how the Agency Inversion affects your organization?
I analyze how application layer dynamics create stratification in teams and platforms, from prompt fluency gaps to invisible competency divides. If your people are navigating AI systems with wildly different effectiveness, that's an ALC problem.
Get in touch · Get the free ALC Framework Guide
The same framework we use in our audits, yours free. Learn how to identify application layer literacy gaps in your organization.