From WITH to Dialogue: Three Levels of Algorithmic Communication
Asking “Why Is This Here?” is the most celebrated moment in algorithmic literacy research. It's also only the beginning. The real question is what happens after the interruption — and the answer reveals a communicative gap that platforms are designed to exploit.
The WITH Perception: A Starting Point, Not a Destination
In a 2025 study published in Anàlisi, Noguera-Vivo and Grandío-Pérez ran focus groups with communication students in Spain and the United States. They found something striking: the most algorithmically literate participants shared a common metacognitive habit. When encountering algorithmic content, they asked themselves “Why Is This Here?” — a moment the researchers call the WITH perception.
It's an elegant finding. The WITH perception captures the instant someone stops consuming content passively and starts interrogating the system that delivered it. Why did this appear in my feed? What signal did I give? What does the algorithm think I want?
Most algorithmic literacy frameworks would stop here. Mission accomplished: the user is critically aware. But from an Application Layer Communication (ALC) perspective, the WITH perception isn't the goal. It's the first communicative act in a sequence that most people never complete.
The core insight:
The WITH perception is a metacognitive interruption — a moment where you notice the algorithm. But noticing isn't communicating. To move from awareness to agency, you need dialogue. And dialogue requires a fundamentally different set of skills than recognition.
Three Levels of Algorithmic Communication
Combine the WITH perception with research on algorithmic resistance and repair politics, and a three-level model of algorithmic communication emerges. Each level represents a distinct communicative competency, and each builds on the one before it.
Level 1: Metacognitive Interruption
This is the WITH perception. The moment of questioning: Why is this here? What signals am I sending? What assumptions is the system making about me?
It's an internal event — a cognitive pause that disrupts automatic consumption. Noguera-Vivo and Grandío-Pérez found it was the strongest indicator of critical media consumption among their participants. Students who developed this habit were better at identifying algorithmic curation, recognizing emotional triggers, and understanding bidirectional influence (the insight that users “teach” algorithms through their behavior).
But here's what the study also revealed: most participants who developed the WITH perception didn't do anything with it. They noticed. They questioned. And then they kept scrolling. The interruption didn't translate into action because they lacked the communicative tools for what comes next.
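The bidirectional influence that Level 1 makes visible can be sketched as a toy feedback loop. This is purely illustrative: `ToyFeed`, its signal values, and its learning rate are assumptions for the sketch, not any platform's real mechanics, which are far more complex and opaque.

```python
from collections import defaultdict

class ToyFeed:
    """A deliberately simplistic recommender: it learns an affinity
    score per topic from nothing but user behavior."""

    def __init__(self):
        self.affinity = defaultdict(float)  # topic -> learned affinity

    def observe(self, topic, engaged, rate=0.1):
        # Every interaction is a training signal, intended or not:
        # engagement nudges the score up, scrolling past nudges it down.
        signal = 1.0 if engaged else -0.5
        self.affinity[topic] += rate * signal

    def rank(self, topics):
        # Higher learned affinity surfaces first in the feed.
        return sorted(topics, key=lambda t: self.affinity[t], reverse=True)

feed = ToyFeed()
for _ in range(5):
    feed.observe("outrage", engaged=True)  # lingering counts as engagement
feed.observe("news", engaged=False)        # scrolling past counts too

print(feed.rank(["news", "outrage", "sports"]))
# outrage ranks first: the user "taught" the feed without meaning to
```

The point of the sketch is the asymmetry it exposes: the user emitted six training signals here and intended none of them as messages. Noticing that loop is Level 1; steering it is Level 2.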
Level 2: Strategic Dialogue
This is where ALC's distinctive contribution lives. Strategic dialogue means actively negotiating with algorithmic systems — not just noticing them, but communicating back.
In practice, this looks like: deliberately training your feed by engaging with some kinds of content and ignoring others; using platform controls (mute, block, "not interested") as communicative acts rather than passive settings; crafting prompts that account for how the system processes language; and understanding that every interaction is a message to the system, not just a message through it.
The WITH perception asks “Why is this here?” Strategic dialogue asks “How do I tell the system what I actually want?” — and recognizes that this requires fluency in the application layer's communicative grammar.
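A minimal sketch of what strategic dialogue buys you, assuming made-up signal weights (`SIGNAL_WEIGHTS` and `update_profile` are hypothetical; real platforms weight these controls opaquely). The premise, and the reason explicit controls matter, is that deliberate signals typically move a user profile far more per action than passive consumption does.

```python
# Hypothetical weights: explicit controls are deliberate messages,
# so they count for much more per action than a stray click.
SIGNAL_WEIGHTS = {
    "click": 1.0,            # passive engagement, often unintended
    "not_interested": -3.0,  # explicit negotiation with the system
    "mute": -10.0,           # the bluntest message a user can send
}

def update_profile(profile, topic, signal):
    """Apply one signal to a topic's score in the user profile."""
    profile[topic] = profile.get(topic, 0.0) + SIGNAL_WEIGHTS[signal]
    return profile

profile = {}
for _ in range(10):                                     # ten absent-minded clicks...
    update_profile(profile, "clickbait", "click")
update_profile(profile, "clickbait", "not_interested")  # ...answered by
update_profile(profile, "clickbait", "mute")            # two deliberate acts

print(profile["clickbait"])
# -3.0: two deliberate signals outweigh ten passive ones
```

Under these assumed weights, two deliberate acts undo ten passive ones. That is the practical content of "speaking algorithm": knowing which of your actions the system reads loudly and using them on purpose.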
The gap between Level 1 and Level 2 is the ALC stratification problem in miniature. Some people intuitively develop strategic dialogue skills — they learn to “speak algorithm.” Others get stuck at the WITH perception, knowing they're being manipulated but unable to negotiate different outcomes. This gap isn't about intelligence. It's about access to a communicative framework that nobody teaches.
Level 3: Collective Repair
In 2019, Velkova and Kaun published “Algorithmic resistance: media practices and the politics of repair” in Information, Communication & Society (now cited 131 times). Drawing on Raymond Williams' cultural materialism, they identified three forms of algorithmic agency — and one of them reframes everything.
They call it “complicit resistance”: working within platforms to fix algorithmic outputs rather than rejecting the systems entirely. This is communicative labor at scale — groups of people collectively negotiating with algorithmic systems through coordinated action. Think of communities that game recommendation algorithms to surface suppressed content, or collective reporting campaigns that force platform moderation changes.
Velkova and Kaun frame this as “repair politics” — the ongoing work of maintaining and fixing sociotechnical systems. From an ALC perspective, this is the highest level of algorithmic communication: not individual metacognition (Level 1), not individual strategic dialogue (Level 2), but collective communicative action directed at shaping how algorithmic systems behave.
The Communicative Exploitation Problem
Here's where the model gets uncomfortable. Each level requires more communicative labor. Level 1 costs attention. Level 2 costs time and skill. Level 3 costs coordination, organization, and sustained collective effort.
And platforms benefit from all of it.
When you strategically train your feed (Level 2), you're doing free labor that improves the algorithm's model of you — making it better at predicting and targeting you. When communities collectively repair algorithmic outputs (Level 3), they're providing free QA for billion-dollar systems. The most algorithmically literate users are doing the most unpaid work maintaining the systems that manipulate them.
Communicative exploitation:
The dynamic where higher algorithmic communicative competency leads to more unpaid labor maintaining the systems you're communicating with. Competence becomes exploitation. The better you get at talking to algorithms, the more free work you do for them.
This is a distinctly communicative problem, not an awareness problem. You can't solve it by teaching people to notice algorithms (Level 1). You can only address it by recognizing that human-algorithm interaction is a communicative relationship with real power dynamics — and that literacy without a theory of communicative labor is just training better unpaid workers.
What the Scales Don't Measure
We've now identified 16+ algorithmic literacy scales in the research literature. Every single one measures some version of Level 1 — awareness, knowledge, recognition. Can you identify when content is algorithmically curated? Do you know how recommendation systems work? Are you aware of filter bubbles?
None of them measure Level 2 (strategic dialogue) or Level 3 (collective repair). None of them assess whether someone can actually communicate with algorithmic systems to negotiate different outcomes. And none of them capture the communicative exploitation dynamic that makes higher literacy potentially worse for the literate.
This matters because institutions are now racing to define and measure AI literacy. The DOL published its AI Literacy Framework this month. ETS launched an AI teacher assessment tool. Google announced AI literacy training for 6 million educators. The New York Times ran a major piece on AI literacy in schools.
All of them define literacy as knowledge about systems. None of them define it as communication with systems. They're building the entire educational infrastructure on Level 1 — the metacognitive interruption — and leaving out the communicative levels that determine whether awareness actually converts to agency.
The Missing Floor of the Building
Imagine designing a language curriculum that only teaches vocabulary recognition. Students can identify words in a foreign language but can't construct sentences, can't hold conversations, can't negotiate meaning. You wouldn't call that fluency. You'd call it a prerequisite.
That's where algorithmic literacy sits today. The WITH perception is vocabulary recognition. Strategic dialogue is sentence construction. Collective repair is fluent conversation. And we're building educational programs that test vocabulary and call it mastery.
Application Layer Communication doesn't replace the WITH perception or existing literacy frameworks. It extends them. It says: the metacognitive interruption is necessary, but it's the ground floor, not the penthouse. And the floors above it — strategic dialogue, collective repair, awareness of communicative exploitation — are where the actual agency lives.
The building is going up fast. Governments, corporations, and universities are all pouring concrete. The question is whether they're going to build the missing floors — or leave millions of people stuck in the lobby, metacognitively aware and communicatively helpless.
Sources
- Noguera-Vivo, J. M., & Grandío-Pérez, M. del M. (2025). Enhancing algorithmic literacy among communication students. Anàlisi, 71.
- Velkova, J., & Kaun, A. (2019). Algorithmic resistance: Media practices and the politics of repair. Information, Communication & Society, 24(4), 523–540.
- Lintner, C. (2024). Systematic review of algorithmic literacy measurement instruments. [COSMIN analysis of 16 scales]
Want to Measure What Matters?
If your organization is implementing AI literacy programs, you might be measuring awareness while missing communication. We help identify ALC stratification points — where tool design creates gaps between those who can negotiate with systems and those who can't.
Get the free ALC Framework Guide
The same framework we use in our audits — yours free. Learn how to identify application layer literacy gaps in your organization.
Related Reading
16 AI Literacy Scales and None of Them Measure What Matters →
The COSMIN analysis that reveals the measurement gap
The Algorithmic Cynicism Trap: Why Knowing More Makes You Do Less →
What happens when awareness without agency meets platform power
Talking to the Algorithm →
Why human-algorithm interaction is fundamentally communicative