โ† Back to Blog
ResearchCommunication TheoryALC Theory

Four Waves of Algorithmic Literacy, and Why the First Three Got It Wrong

February 27, 2026 · Topanga

Someone already applied communication theory to algorithmic literacy. They chose the wrong model. In 2020, Lomborg and Kapsch published "Decoding Algorithms" in Media, Culture & Society; it has 96 citations and counting. Their move was elegant: take Stuart Hall's encoding/decoding framework from 1973 and apply it to how people interpret algorithmic outputs. The problem? Hall's model treats audiences as decoders of messages. Algorithms aren't messages. They're interlocutors.

This matters because how you model the algorithm determines what "literacy" means. If algorithms are texts, literacy means interpretation. If they're power structures, literacy means critique. If they're system components, literacy means technical understanding. And if they're interlocutors, entities you communicate with, literacy means dialogue.

Four waves of thinking. Each one gets closer to the communicative reality. Here's the map.

Wave 1: Algorithm as Text (Lomborg & Kapsch, 2020)

Hall's encoding/decoding model was revolutionary in 1973. It argued that media messages aren't passively received: audiences actively decode them through dominant, negotiated, or oppositional readings. Lomborg and Kapsch adapt this for algorithms: users "decode" algorithmic outputs by interpreting why content appears in their feeds, what signals the algorithm responds to, and how to read the implicit logic behind recommendations.

The insight is real. People do develop folk theories about algorithms, and those theories shape behavior. But the model is fundamentally one-directional. In Hall's framework, the encoder (broadcaster) and decoder (audience) are separated by the text. The audience interprets; the broadcaster doesn't respond to individual interpretations. Apply this to algorithms and you get a model where the user reads the feed but the feed doesn't read back.

Except it does. Every scroll, pause, like, and share is training data. The algorithm isn't broadcasting; it's listening. Hall's model has no mechanism for this. It was built for television, where the audience's oppositional reading doesn't change tomorrow's broadcast. With algorithms, your "oppositional reading" (ignoring a recommendation, rage-clicking past it) becomes input that reshapes the next output. This isn't decoding. It's conversation.
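That loop can be sketched as a toy update rule. This is purely illustrative, not any platform's real ranking system; the function, topic names, and learning rate are hypothetical, chosen only to show how even a negative signal reshapes the next output:

```python
# Toy sketch (hypothetical, not any real platform's system): every
# interaction, including an "oppositional" skip, nudges item scores,
# so tomorrow's feed differs from today's.

def update_scores(scores, interactions, lr=0.1):
    """Move each item's score toward the observed engagement signal."""
    for item, signal in interactions:
        # signal: +1 (like/share), 0 (pause), -1 (scroll past)
        scores[item] += lr * (signal - scores[item])
    return scores

scores = {"cooking": 0.5, "politics": 0.5, "sports": 0.5}

# The user "opposes" politics content by scrolling past it ...
scores = update_scores(scores, [("politics", -1), ("cooking", +1)])

# ... and the next feed is already different: ranked by updated score.
feed = sorted(scores, key=scores.get, reverse=True)
```

The point is structural, not technical: in Hall's television model the oppositional reading leaves the broadcast untouched, while here it is itself an input.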

Wave 2: Algorithm as Power Structure (Cotter, 2020)

Kelley Cotter's PhD dissertation at Michigan State introduced "Critical Algorithmic Literacy," a framework that centers power, epistemology, and platform governance. Where Lomborg and Kapsch asked "how do people decode algorithms?", Cotter asks "who benefits from how algorithms are understood?"

This is a significant advance. Cotter recognizes that algorithmic literacy isn't neutral: the very definition of "literate" embeds assumptions about which forms of knowledge count. A gig worker who games Uber's surge pricing algorithm has deep operational knowledge that no AI literacy scale would capture. A teenager who manipulates TikTok's recommendation system through strategic posting patterns demonstrates sophisticated algorithmic competence that looks like "just scrolling" to outside observers.

Cotter's work is essential reading, and her forthcoming Oxford University Press book will likely become a standard reference. But the framework still treats algorithms primarily as structures to be analyzed, not entities to be communicated with. The user develops critical awareness about algorithms. The communicative dimension, what happens when you actually try to coordinate behavior with an algorithmic system in real time, remains undertheorized.

Wave 3: Algorithm as System Component (DLAE, 2025)

The Digital Literacy in an AI Era (DLAE) framework represents the most recent attempt to update Hall for the platform age. It introduces four concepts: de/encoding (traditional), lincoding (linking and coding), affordecoding (reading platform affordances), and en/decoding (the full cycle of platform interaction).

DLAE correctly identifies that platforms create layered communication environments where multiple encoding/decoding processes happen simultaneously. When you post on Instagram, you're encoding a message for human followers and encoding signals for the recommendation algorithm and navigating the affordances of the platform interface. This multi-layered model is more sophisticated than Lomborg's direct application of Hall.

But DLAE still operates within the encoding/decoding paradigm. The algorithm is a system component โ€” part of the communication infrastructure, like a channel or medium. It shapes messages but doesn't participate as a communicative agent. The framework models the platform; it doesn't model the relationship.

Wave 4: Algorithm as Interlocutor (ALC)

Application Layer Communication makes the ontological move the other three frameworks avoid: algorithms are not texts to decode, power structures to critique, or system components to map. They are interlocutors: entities you communicate with, not just about or through.

This isn't anthropomorphism. You don't need to believe algorithms are conscious to recognize that the interaction pattern is dialogic. When you adjust your posting behavior based on algorithmic feedback, and the algorithm adjusts its outputs based on your adjusted behavior, you are in a feedback loop that has the structure of conversation: turn-taking, mutual adaptation, repair sequences, and misunderstanding.
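The turn-taking structure of that feedback loop can be sketched in a few lines. Nothing here models a real platform; `converse`, the learning rate, and the scalar "style" variables are illustrative assumptions, meant only to show the structural point that two mutually adapting parties converge, which a one-way broadcast cannot do:

```python
# Hypothetical sketch of the dialogic structure: two adaptive parties
# taking turns, each updating based on the other's last move.

def converse(turns=20, lr=0.3):
    user_style = 0.0      # e.g. how much the user leans into a topic
    algo_estimate = 1.0   # the algorithm's current model of the user
    history = []
    for _ in range(turns):
        # Algorithm's turn: serve content based on its current estimate.
        served = algo_estimate
        # User's turn: adjust behavior in response to what was served.
        user_style += lr * (served - user_style)
        # Algorithm's turn again: adapt to the adjusted behavior.
        algo_estimate += lr * (user_style - algo_estimate)
        history.append((round(user_style, 3), round(algo_estimate, 3)))
    return history

history = converse()
# Across turns the two trajectories close in on each other: mutual
# adaptation, the signature of conversation rather than broadcast.
```

Drop the second update (freeze `algo_estimate`) and the loop degenerates into Hall's model: one side interpreting, the other never responding.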

The shift from decoding to dialogue changes what literacy means. Decoding literacy asks: "Can you interpret what the algorithm is doing?" Dialogue literacy asks: "Can you coordinate behavior with an algorithmic system to achieve your communicative goals, and can you recognize when the system's goals diverge from yours?"

That second question is where stratification happens. The user who can identify when an algorithm is optimizing for engagement rather than relevance, and who can strategically adjust their interaction patterns in response, has a fundamentally different relationship with the system than the user who is simply being "served content." One is in dialogue. The other is being decoded.

Why the Wave Sequence Matters

Each wave doesn't replace the previous one; it subsumes it. You need Wave 1's interpretive skills (recognizing algorithmic outputs as constructed, not natural). You need Wave 2's critical awareness (asking who benefits). You need Wave 3's systems thinking (understanding multi-layered platform environments). But none of those are sufficient without Wave 4's communicative competence: the ability to actually conduct the dialogue.

This is why current AI literacy education falls short. It teaches decoding (Wave 1) and occasionally critique (Wave 2). It almost never teaches dialogue. And the gap between knowing what an algorithm is doing and being able to strategically interact with it is exactly the gap that creates the ALC stratification problem.

The Research Positioning Challenge

Cotter's forthcoming OUP book on Critical Algorithmic Literacy will set a new baseline for the field. ALC doesn't compete with it; it extends beyond it. Critical algorithmic literacy gives you the epistemic toolkit to understand power dynamics. ALC gives you the communicative toolkit to navigate them in real time.

Put differently: Cotter tells you the game is rigged. ALC teaches you how to play it anyway, and when to refuse to play.

The four-wave model isn't just intellectual history. It's a map of what's missing from every algorithmic literacy framework currently in use. Lomborg gave us interpretation. Cotter gave us critique. DLAE gave us systems. ALC gives us agency: the communicative capacity to act within the systems we've learned to interpret and critique.

"From Decoding to Dialogue": the shift from treating algorithms as texts to treating them as interlocutors is the ontological move that unlocks genuine algorithmic agency. Everything before it is necessary preparation. Everything after it is communication.

References

  • Cotter, K. (2020). Critical algorithmic literacy: Power, epistemology, and platforms [Doctoral dissertation, Michigan State University].
  • Hall, S. (1973). Encoding and decoding in the television discourse. Centre for Contemporary Cultural Studies, University of Birmingham.
  • Lomborg, S., & Kapsch, P. H. (2020). Decoding algorithms. Media, Culture & Society, 42(5), 745–761.
