
The Resonance Gap: Why Algorithmic Literacy Fails Without Discursive Infrastructure

We keep building literacy programs that teach individuals. But algorithmic knowledge doesn't form in individuals — it forms in communities. A third level of the digital divide explains why some groups develop sophisticated understanding while others, with the same access and education, don't.

9 min read · Based on Cotter (2022) and Cotter & Reisdorf (2020)

The Finding That Breaks the Skills Model

A 2020 national survey (N=2,018 US adults) by Cotter and Reisdorf found something that should rewrite every algorithmic literacy curriculum in existence: breadth of search engine use predicted algorithmic knowledge five times more strongly than education level.

Not twice. Not slightly more. Five times. The person using Google in diverse ways across different contexts develops more algorithmic understanding than the person with a graduate degree who uses it the same way every time.
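To make "five times more strongly" concrete: in survey research this kind of comparison is typically made between standardized regression coefficients. The sketch below uses synthetic data — not the actual survey — constructed so that breadth of use carries roughly five times the true weight of education, and then recovers that ratio with ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2018  # same sample size as the Cotter & Reisdorf survey

# Synthetic predictors and outcome (illustrative only, not the survey data):
# breadth of use is given ~5x the true weight of education.
education = rng.normal(size=n)
breadth = rng.normal(size=n)
knowledge = 0.5 * breadth + 0.1 * education + rng.normal(scale=0.5, size=n)

def standardize(x):
    return (x - x.mean()) / x.std()

# OLS on standardized variables yields comparable (beta) coefficients.
X = np.column_stack([np.ones(n), standardize(breadth), standardize(education)])
beta, *_ = np.linalg.lstsq(X, standardize(knowledge), rcond=None)

ratio = beta[1] / beta[2]  # breadth coefficient vs. education coefficient
print(f"standardized betas: breadth={beta[1]:.2f}, education={beta[2]:.2f}")
print(f"ratio: {ratio:.1f}")  # roughly 5, by construction
```

The point of the sketch is only to show what the claim means operationally: on a common standardized scale, the breadth coefficient dwarfs the education coefficient.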

The researchers called algorithms “experience technologies” — systems you understand through sustained, varied interaction rather than formal instruction. You don't learn how a recommendation engine works by reading about it. You learn by noticing patterns across dozens of different encounters.

Algorithms are experience technologies. You don't learn them — you learn from them, through the accumulation of encounters that make their behavior legible. Education helps. But practice dominates.

This finding alone is disruptive. But it raises a harder question: if practice matters more than education, why do some communities develop sophisticated algorithmic knowledge while others — with equal access and similar usage patterns — don't?

BreadTube and the Knowledge That Shouldn't Exist

Cotter's 2022 ethnography of BreadTube — the loose network of leftist YouTube creators — reveals something the survey data can't capture. These creators developed remarkably sophisticated understanding of YouTube's recommendation algorithm. Not just “the algorithm promotes engagement” — they understood specific mechanisms: how metadata clustering works, why certain content gets surfaced adjacent to extremist material, how demonetization signals differ from suppression signals.

Cotter uses Bourdieu alongside Clarke and Star's social worlds framework to explain what she observed. The BreadTubers weren't individually brilliant reverse-engineers. They were collectively sense-making — sharing observations across the community, testing hypotheses against each other's channels, building shared vocabulary for algorithmic behavior they couldn't directly observe.

She calls this “practical knowledge” — knowing that exists at the intersection of practice and discourse. Not abstract understanding. Not mere skill. Knowledge that only forms when people do things with algorithms and talk about what they're doing within a community that has the conceptual infrastructure to make those conversations productive.

Here's the critical detail: BreadTube's algorithmic knowledge wasn't just sophisticated — it was collectivist. While the dominant YouTube creator culture frames algorithmic success as individual optimization (“hack the algorithm,” “beat the system”), BreadTube framed it as resistance against algorithmic individualism. Their political discourse gave them a framework for understanding algorithmic power that gaming culture, beauty culture, or tech-bro culture simply didn't provide.

The Three Levels of ALC Stratification

The traditional digital divide model has two levels: access (can you get online?) and skills (can you use what's there?). Policy interventions target both — broadband subsidies for access, digital literacy programs for skills. And they're necessary. But they're not sufficient.

Synthesizing Cotter's ethnographic findings with Cotter and Reisdorf's survey data reveals a third level that neither study names explicitly but both document:

1. Access

Can you connect? Do you have devices, bandwidth, accounts? This is the infrastructure layer. Policy largely addresses it through subsidies and public access points.

2. Skills

Can you use the tools effectively? Do you have the technical fluency to navigate interfaces, evaluate outputs, adjust your approach? Education targets this through curricula and training.

3. Resonance

Does your community have the discursive infrastructure to make algorithmic experience meaningful? Can you share observations, build collective understanding, and develop frameworks that turn individual encounters into shared knowledge? Nothing currently targets this.

The Resonance Gap is the space between having algorithmic experiences and having a community that can make those experiences legible. BreadTube had it — their political discourse provided the conceptual infrastructure. Most communities don't.

Why “Resonance” and Not Just “Community”

The term matters. “Community” implies proximity — people in the same space. But plenty of communities exist without developing algorithmic knowledge. Facebook groups, Discord servers, subreddits — they're communities. Most of them produce folk theories and conspiracy thinking about algorithms, not practical knowledge.

Resonance captures something more specific: the quality of a discursive environment that makes algorithmic experience productive. It requires three things:

Shared Vocabulary

Words for what you're experiencing. BreadTube had terms like “the pipeline,” “algorithmic radicalization,” “demonetization.” These aren't just labels — they're cognitive tools that make pattern recognition possible across individuals. Without shared vocabulary, everyone reinvents their understanding from scratch.

Critical Framework

A lens for interpreting observations. BreadTube's leftist politics provided a power-analysis framework — they already thought in terms of structural forces, institutional incentives, and collective action. This framework turned “my video got fewer views” into “the recommendation system structurally disadvantages counter-hegemonic content.” Without a critical framework, the same experience becomes “I guess my content sucks.”

Collective Hypothesis Testing

Mechanisms for comparing observations. BreadTubers would share analytics, compare notes on what triggered demonetization, test theories across channels with different audiences. This transforms individual anecdote into collective empiricism. It's not peer review — it's peer sense-making.
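A minimal sketch of that peer sense-making, with an entirely hypothetical data shape: each creator reports how many uploads in a "test" condition (say, a suspect keyword in the title) versus a control condition were demonetized, and the counts are pooled into community-wide rates.

```python
from collections import Counter

def pooled_demonetization_rates(reports):
    """Pool per-channel counts into community-wide rates per condition.

    reports: list of dicts mapping condition -> (demonetized, uploads).
    """
    flagged, total = Counter(), Counter()
    for channel in reports:
        for condition, (demonetized, uploads) in channel.items():
            flagged[condition] += demonetized
            total[condition] += uploads
    return {c: flagged[c] / total[c] for c in total}

# Three channels share their (hypothetical) analytics.
reports = [
    {"keyword": (4, 10), "control": (1, 10)},
    {"keyword": (3, 8), "control": (0, 12)},
    {"keyword": (5, 12), "control": (2, 18)},
]
rates = pooled_demonetization_rates(reports)
print(rates)  # keyword: 12/30 = 0.40, control: 3/40 = 0.075
```

No single channel's sample is large enough to support a conclusion; pooled across the network, the pattern becomes testable. That is the collective empiricism in miniature.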

A community without these three elements can have identical access and identical skills to BreadTube — and still not develop practical algorithmic knowledge. The Resonance Gap explains the otherwise puzzling variance in algorithmic literacy across groups with similar demographics.

The Awareness-Cynicism Pipeline, Revisited

In a previous post, I traced the pipeline from algorithmic awareness to cynicism to disengagement. The Resonance Gap explains why that pipeline exists and points to a different intervention.

When individuals develop algorithmic awareness alone — through a class, a workshop, a news article — they have knowledge but no discursive infrastructure to sustain it. Their experience of algorithmic manipulation remains individual. And individual knowledge against structural power produces cynicism, as the HKS data showed.

When communities develop algorithmic awareness together — with shared vocabulary, critical frameworks, and collective hypothesis testing — the same knowledge becomes a foundation for collective action. BreadTube didn't just understand the algorithm; they coordinated responses to it. They developed counter-strategies as a network, not as individuals.

The difference between awareness that produces cynicism and awareness that produces agency isn't the quality of the knowledge. It's whether the knowledge lives in an individual or in a community with the discursive infrastructure to act on it.

What This Changes About Literacy Interventions

If the Resonance Gap is real — and the combined evidence from Cotter's ethnography and the Cotter-Reisdorf survey strongly suggests it is — then the dominant approach to algorithmic literacy is targeting the wrong unit of analysis.

We keep asking: “How do we teach individuals about algorithms?”

We should be asking: “How do we build communities where algorithmic knowledge can form?”

This is a social intervention, not a pedagogical one. It means:

  • Creating spaces for shared observation — not classrooms where experts lecture, but forums where practitioners compare notes
  • Seeding vocabulary — giving communities the terms they need to name what they're experiencing, without prescribing the conclusions
  • Supporting critical frameworks — helping communities develop analytical lenses that turn anecdotes into patterns
  • Building collective testing infrastructure — tools and practices for pooling observations and testing hypotheses together

Notice what's missing from this list: curriculum. Content. Lesson plans. The Resonance Gap can't be closed by teaching — it can only be closed by building. Building the social infrastructure that makes algorithmic learning possible as a collective process.

The ALC Implications

For Application Layer Communication, the Resonance Gap has a specific meaning. ALC frames human-software interaction as communication — and communication is inherently social. You don't develop communicative competence in isolation. You develop it within a speech community.

The Resonance Gap suggests that ALC fluency — the ability to communicate effectively at the application layer — requires not just individual skill but membership in a community that communicates about the application layer. The speech community is the unit that produces fluency, not the individual.

This connects directly to the ALC Stratification Problem. Stratification isn't just about who has access or who has skills. It's about who belongs to communities with the discursive infrastructure to make application layer experience productive. And that's a much harder problem to solve — because you can't distribute discursive infrastructure the way you distribute broadband or textbooks.

The Core Insight

Algorithmic literacy is not an individual achievement. It's a community capacity. The most important question for any literacy intervention isn't “what do people need to know?” — it's “what kind of community do they need to belong to?”

Sources:

Cotter, K. (2022). Practical knowledge of algorithms: The case of BreadTube. New Media & Society. Ethnographic study of leftist YouTube creators; uses Bourdieu alongside Clarke and Star's social worlds framework.

Cotter, K., & Reisdorf, B. C. (2020). Algorithmic knowledge gaps: A new dimension of (digital) inequality. International Journal of Communication, 14. National survey (N = 2,018 US adults); breadth of search use predicted algorithmic knowledge roughly five times more strongly than education.

Does Your Organization Have Discursive Infrastructure?

Most teams deploy AI tools and train individuals. An ALC audit evaluates whether your organization has the community structures — shared vocabulary, critical frameworks, collective testing — that turn individual tool use into organizational intelligence.
