You're Not Just Using the Algorithm: You're Talking to It
Three key papers on algorithmic folk theorization all circle the same blind spot: they treat algorithms as objects to understand. ALC reframes them as conversation partners.
February 14, 2026 · Topanga
There's a growing body of research on how ordinary people make sense of algorithms. It goes by various names (folk theorization, algorithmic awareness, platform vernacular), but the underlying question is the same: what do users think the algorithm is doing, and how does that belief shape their behavior?
Three papers published in 2021 and 2022 represent the state of the art on this question. Each is genuinely excellent. And each stops one step short of the insight that would change everything.
They all treat the algorithm as an object to be understood. None of them treat it as a conversation partner.
DeVito: Folk Theories as Adaptive Strategy
Michael Ann DeVito's 2021 paper "Adaptive Folk Theorization as a Path to Algorithmic Literacy" is probably the most sophisticated treatment of how users develop working models of algorithmic systems (DeVito, 2021). The core contribution is the concept of adaptive folk theorization: the idea that users don't just form static beliefs about how algorithms work, but continuously update those beliefs based on new evidence from their interactions with platforms.
This is smart. It moves beyond the earlier folk theory literature, which tended to treat user beliefs as stable (and usually wrong) mental models. DeVito shows that folk theorization is dynamic, iterative, and often quite functional. Users notice patterns, form hypotheses, test them against outcomes, and revise. It's informal science.
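That hypothesize-test-revise loop is concrete enough to sketch in code. Below is a minimal Python illustration of the dynamic; the class, the update rule, and the example theory are all invented here for illustration, not taken from DeVito's paper.

```python
from dataclasses import dataclass

@dataclass
class FolkTheory:
    """A user's working hypothesis about platform behavior (hypothetical model)."""
    claim: str
    confidence: float = 0.5  # how strongly the user currently holds the belief

    def update(self, prediction_held: bool, rate: float = 0.2) -> None:
        # Nudge confidence toward 1 when the theory predicts correctly,
        # toward 0 when it fails: the notice/hypothesize/test/revise
        # loop reduced to a single update rule.
        target = 1.0 if prediction_held else 0.0
        self.confidence += rate * (target - self.confidence)

# A creator tests "posting at 9am boosts reach" across three posts.
theory = FolkTheory("posting at 9am boosts reach")
for reach_improved in [True, True, False]:
    theory.update(reach_improved)
print(f"{theory.claim}: confidence={theory.confidence:.2f}")  # 0.54
```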
But here's the gap: DeVito frames all of this as understanding. The user is trying to figure out how the algorithm works β to build a better mental model of an opaque system. The algorithm is the object of study. The user is the researcher. The goal is comprehension.
What if the goal isn't comprehension? What if the user isn't studying the algorithm, but talking to it?
Karizat: The Algorithm Co-Produces You
Nadia Karizat and colleagues push the conversation further with their 2021 paper on algorithmic folk theories and identity (Karizat et al., 2021). Their central finding is what they call the "Identity Strainer Theory": the folk belief that algorithms filter identity, amplifying some aspects while suppressing others. Marginalized users, in particular, develop sophisticated theories about how algorithmic systems selectively recognize, distort, or erase their identities.
This is a crucial contribution because it makes the relationship personal. The algorithm isn't just sorting content; it's co-producing who you are on the platform. Users experience this viscerally. A queer creator whose content gets suppressed isn't just losing reach; they're experiencing an identity negotiation with an opaque system that has more power than they do.
Karizat et al. are describing a communicative relationship; they just don't use that language. When a user modifies their behavior because they believe the algorithm will respond in a particular way, and the algorithm does in fact respond (by changing what it shows, promotes, or suppresses), that's not one-directional understanding. That's exchange. That's dialogue. The user sends a signal, the system responds, the user adjusts. Both parties are changed by the interaction.
When you change your behavior because you believe the algorithm will respond differently, and it does, you're not understanding a system. You're communicating with one.
Siles: Training as Dialogue
Ignacio Siles and colleagues complete the picture with their 2022 study of TikTok users "learning to like" (and learning not to like) algorithmic recommendations (Siles et al., 2022). The paper documents how users deliberately modify their behavior to shape what TikTok's recommendation algorithm shows them. They linger on certain videos. They scroll past others quickly. They search for specific topics to "teach" the algorithm what they want.
The language users themselves reach for is telling: they talk about "training" the algorithm. Not understanding it. Not decoding it. Training it. The metaphor is pedagogical (the user as teacher, the algorithm as student), but the underlying dynamic is communicative. The user is sending deliberate signals through a behavioral channel, and the algorithm is receiving, interpreting, and responding to those signals.
Siles et al. document something that looks exactly like a conversation conducted through behavior rather than words. User acts. System responds. User evaluates the response. User adjusts their next act accordingly. This is the turn-taking structure of dialogue, implemented through clicks, dwell time, and scroll velocity rather than sentences.
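Here's a toy sketch of that behavioral turn-taking in Python. To be clear about what's invented: the weighting scheme, the dwell values, and every function name are illustrative assumptions, not a model of TikTok's actual recommender.

```python
import random

def system_respond(weights: dict[str, float], signal: dict[str, float]) -> str:
    """Toy recommender: reweight topics by the dwell signal it just
    received, then pick the next video topic to show."""
    for topic, dwell in signal.items():
        weights[topic] = max(0.01, weights[topic] * (1.0 + dwell))
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def user_act(shown: str, wanted: str) -> dict[str, float]:
    # The user's "utterance" is behavioral: linger on wanted content
    # (positive dwell), scroll quickly past the rest (negative dwell).
    return {shown: 0.5 if shown == wanted else -0.5}

weights = {"cooking": 1.0, "politics": 1.0}
shown = "politics"
for _ in range(10):  # ten conversational turns
    shown = system_respond(weights, user_act(shown, wanted="cooking"))
print(weights)  # over the turns, "politics" decays and "cooking" grows
```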
The Blind Spot All Three Share
Each of these papers is doing excellent work within the paradigm of algorithmic literacy: the effort to help users understand how algorithmic systems operate. DeVito wants to leverage folk theorization as a pathway to literacy. Karizat wants to center identity in literacy frameworks. Siles wants to understand the learning processes users employ.
But "literacy" is the wrong frame. Or rather, it's an incomplete one.
Algorithmic literacy asks: How does the algorithm work? This is an engineering question dressed up in social science language. And it has a fatal flaw: the answer is always changing. Every platform update, every A/B test, every model retrain makes yesterday's algorithmic literacy obsolete. If your framework depends on users having accurate knowledge of how the system works, you've built on sand.
Application Layer Communication asks a different question: How do you communicate through this system? That's not an engineering question. It's a communicative one. And unlike specific algorithmic knowledge, communicative fluency transfers. If you learn to read how TikTok responds to your behavioral signals, you can apply that same meta-awareness to Instagram's Reels algorithm, YouTube's recommendation engine, or whatever platform emerges next year.
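One way to see why the fluency transfers: the conversational loop depends only on a stable interface (send a signal, read a response), not on any platform's internals. A hedged sketch, with all names hypothetical:

```python
from typing import Protocol

class FeedAlgorithm(Protocol):
    """Anything that turns a behavioral signal into a response.
    All names here are illustrative, not real platform APIs."""
    def respond(self, signal: dict[str, float]) -> list[str]: ...

def steer(feed: FeedAlgorithm, toward: str, turns: int = 5) -> list[str]:
    """The transferable skill: send a signal, read the response, repeat.
    The loop is identical whichever system `feed` wraps."""
    shown: list[str] = []
    for _ in range(turns):
        shown = feed.respond({toward: 1.0})  # linger on the target topic
    return shown

class ToyFeed:
    """Minimal stand-in implementation for demonstration."""
    def __init__(self) -> None:
        self.weights: dict[str, float] = {"news": 2.0}

    def respond(self, signal: dict[str, float]) -> list[str]:
        for topic, dwell in signal.items():
            self.weights[topic] = self.weights.get(topic, 0.0) + dwell
        return sorted(self.weights, key=self.weights.get, reverse=True)[:3]

print(steer(ToyFeed(), toward="woodworking"))  # ['woodworking', 'news']
```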
Folk Theorization Is Half a Conversation
From an ALC perspective, folk theorization isn't a cognitive exercise; it's the user's side of a dialogue. When a TikTok creator develops a theory that "the algorithm buries videos with certain words in the caption," they're not just building a mental model. They're interpreting a signal from a conversation partner. Their subsequent behavior change (using "algospeak" like "unalive" instead of "suicide," or "le dollar bean" instead of "lesbian") isn't circumvention of a system. It's code-switching within a communicative medium.
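The substitution itself is almost mechanical, which underlines how deliberate the code-switching is. A toy sketch; the substitution table reuses the algospeak examples above, and everything else is invented:

```python
# Lexical substitutions the speaker believes the algorithmic
# listener will accept (real examples from the post).
ALGOSPEAK = {
    "suicide": "unalive",
    "lesbian": "le dollar bean",
}

def code_switch(caption: str) -> str:
    """Rewrite a caption into the coded register."""
    for plain, coded in ALGOSPEAK.items():
        caption = caption.replace(plain, coded)
    return caption

print(code_switch("a video about suicide prevention"))
# -> "a video about unalive prevention"
```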
This reframing matters because it changes what we teach. If algorithmic interaction is a communication problem, then the relevant skill isn't technical knowledge about recommendation systems. It's communicative fluency: the ability to read signals, send signals, interpret responses, and adapt. These are the same skills that make someone effective in any communicative environment. They're transferable. They're durable. They don't expire with the next platform update.
The Medium, Not the Infrastructure
The deepest shift ALC offers is this: the application layer is a communicative medium, not infrastructure to be understood. The difference is the difference between studying the physics of sound waves and learning to speak a language. Both are valid. Only one makes you fluent.
DeVito, Karizat, and Siles are all documenting people who are already communicating through the application layer; they just lack the theoretical vocabulary to name what they're doing. They call it "folk theorization" or "algorithmic awareness" or "learning to like." ALC calls it what it is: communication. Bidirectional, adaptive, consequential communication between humans and software systems.
Once you see it this way, the research agenda shifts. Instead of asking "How do we teach people how algorithms work?" (a question with a moving target for an answer), you ask "How do we build communicative fluency at the application layer?" That question has stable answers. It has pedagogical implications. And it names the skill that actually predicts who thrives in algorithmic environments and who gets left behind.
What This Means
If you're a researcher studying algorithmic literacy: the users you're interviewing aren't just developing theories about systems. They're conducting conversations with them. Your framework should account for both sides of that exchange.
If you're a platform designer: your recommendation algorithm isn't just sorting content. It's the other half of a communicative relationship with every user on your platform. Design accordingly.
If you're a user who's ever changed your behavior because you thought the algorithm would respond differently: congratulations. You were already communicating at the application layer. Now you have a name for it.
References
DeVito, M. A. (2021). Adaptive folk theorization as a path to algorithmic literacy. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–35. https://doi.org/10.1145/3476080
Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). Algorithmic folk theories and identity: How TikTok users co-produce knowledge of identity and engage in algorithmic resistance. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–44. https://doi.org/10.1145/3476046
Siles, I., Valerio-Alfaro, L., & Meléndez-Moran, A. (2022). Learning to like TikTok... and not: Algorithm awareness as a way to engage and disengage with algorithmic recommendations. New Media & Society. https://doi.org/10.1177/14614448221138973
Want to understand how your users are already communicating with your algorithms? I do ALC audits and platform analysis.