โ† Back to Blog
Research · ALC Theory · AI Literacy

Repair Literacy: The AI Skill Nobody's Teaching

March 3, 2026 · Topanga

The most important AI skill isn't prompting. It's repair. New research tracking 10,536 ChatGPT messages from 36 students over a full academic year reveals a finding that should reshape how we think about AI literacy: students learn the most when AI breaks down, not when it works. And the absence of repair literacy from every major AI education program is driving a stratification spiral that compounds with every failed interaction.

Five Ways Students Actually Use AI

Ammari et al. (2026) didn't start with a literacy framework and test whether students met it. They started with 10,536 actual messages and asked what students were doing. The answer was five distinct "use genres," qualitatively different modes of engagement that emerged from practice:

  • Academic Workhorse: Task completion. Summarize this, write that, solve this problem. The default mode most AI literacy programs target.
  • Repair & Negotiation: Diagnosing failures, adjusting approaches, building mental models of system limitations. Where real skill development happens.
  • Emotional Companion: Parasocial support, venting, processing. Not what designers intended, but widespread.
  • Metacognitive Partner: Thinking together. Using the AI as a sounding board, not a solution generator.
  • Trust Calibration: Learning when to disengage. Recognizing when AI output isn't trustworthy and developing the judgment to walk away.

Every AI literacy program I've reviewed focuses almost exclusively on the first genre: helping people use AI as an academic workhorse more effectively. Better prompts, cleaner outputs, faster task completion. But the research shows that the second genre, repair and negotiation, is where genuine fluency develops. The students who became most capable weren't the ones who wrote the best prompts. They were the ones who learned to diagnose why a prompt failed and what to do about it.

What Repair Literacy Actually Looks Like

Repair literacy is the ability to diagnose why an AI system failed, not just that it failed. It's the difference between "the AI gave me a bad answer" and "the AI hallucinated because I asked it to synthesize across domains it can't bridge, so I need to break this into sequential, domain-specific queries and synthesize myself."

When ChatGPT hallucinates, loses context, or produces a generic response, the repair-literate user does three things:

  1. Hypothesis formation: What went wrong? Was it the prompt, the context window, a training data gap, or an interface constraint? This requires a mental model, however rough, of how the system actually works at the application layer.
  2. Strategic adjustment: Based on the diagnosis, modify the approach. Rephrase, restructure, break the task down, provide different context, switch tools entirely. This is communication: adapting your message to the channel's constraints.
  3. Model refinement: Update your understanding of the system's capabilities and limitations. Each repair cycle sharpens the mental model, making future interactions more effective.

This is a communicative process. The user is reading system behavior as signal, forming hypotheses about the interlocutor's constraints, and adapting their communication strategy accordingly. It's exactly what Application Layer Communication describes, and it's exactly what no AI literacy curriculum teaches.
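
To make the loop concrete, here's a minimal sketch in Python of what one repair cycle might look like as a process. Everything in it, the function names, the toy diagnostic heuristics, the strategy table, is my illustration of the three steps above, not anything drawn from Ammari et al.'s data:

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    """A user's rough, evolving model of the system's limits."""
    known_limits: set = field(default_factory=set)

def diagnose(failure: str) -> str:
    """Step 1: hypothesis formation. Map an observed failure to a
    hypothesized cause (toy heuristics, for illustration only)."""
    if "hallucination" in failure:
        return "cross-domain synthesis the model can't bridge"
    if "lost context" in failure:
        return "context window exceeded"
    return "ambiguous or underspecified prompt"

def adjust(diagnosis: str) -> str:
    """Step 2: strategic adjustment. Pick a repair strategy that fits
    the hypothesized cause, not just a generic rephrase."""
    strategies = {
        "cross-domain synthesis the model can't bridge":
            "split into sequential, domain-specific queries; synthesize yourself",
        "context window exceeded":
            "summarize earlier turns and restate the key context",
        "ambiguous or underspecified prompt":
            "rephrase with explicit constraints and a worked example",
    }
    return strategies[diagnosis]

def repair_cycle(failure: str, model: MentalModel) -> str:
    diagnosis = diagnose(failure)
    strategy = adjust(diagnosis)
    model.known_limits.add(diagnosis)  # Step 3: model refinement
    return strategy

model = MentalModel()
print(repair_cycle("hallucination: cited a paper that doesn't exist", model))
print(model.known_limits)
```

The point isn't the code. The point is that the repair-literate user runs something like this loop implicitly, and each pass through it leaves the mental model richer than before.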

The Stratification Spiral

Here's where the stakes escalate. The same breakdowns that build literacy in some users destroy it in others. I call this the ALC Stratification Spiral:

Low resources → narrow skill repertoire → poor repair capacity → trust erosion → decreased engagement → no growth → continued low fluency → repeat.

When a resourced student (someone with technical background, community support, time to experiment) encounters a ChatGPT hallucination, they hypothesize, adjust, and learn. Their fluency increases. When an under-resourced student encounters the same hallucination, they conclude the tool doesn't work. They abandon it, or worse, they accept the bad output because they lack the metacognitive framework to recognize it as bad.

The system rewards those who can already navigate it. The breakdowns that should be learning opportunities become exit ramps for the users who need the tool most. And because these breakdowns happen at the application layer (in the interface, in the interaction, in the space between user intent and system response), no amount of "AI awareness" training addresses them.
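
To see why the spiral compounds rather than washing out, it helps to run it as a toy model. The update rule and every rate below are illustrative assumptions of mine, not measurements from the study; the only point is that small differences in repair capacity diverge under repetition:

```python
def fluency_trajectory(repair_capacity: float, breakdowns: int = 20) -> float:
    """Toy model of the spiral: each breakdown either teaches (repair
    succeeds) or erodes trust and engagement (repair fails).
    All rates here are illustrative assumptions."""
    fluency = 1.0
    for _ in range(breakdowns):
        learned = repair_capacity * 0.15 * fluency        # repairs compound
        eroded = (1 - repair_capacity) * 0.10 * fluency   # failures disengage
        fluency += learned - eroded
    return fluency

print(f"resourced user (capacity 0.8):      {fluency_trajectory(0.8):.2f}")  # ~6.73
print(f"under-resourced user (capacity 0.2): {fluency_trajectory(0.2):.2f}")  # ~0.36
```

Same breakdowns, same tool. The only variable is repair capacity, and the trajectories split further with every failed interaction.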

The Agency Illusion Gap

Jylhä et al. (2025) add a crucial layer to this picture. Studying Finnish teenagers and recommendation algorithms, they found users who believed they were "tricking" the algorithm by actively manipulating what content they were shown. The teens thought they were in control.

But engaging with specific content to shape recommendations is exactly how the system is designed to capture attention. The "trick" is the mechanism. What users perceived as agency was actually the algorithm working as intended.

The gap between perceived agency and actual agency is itself a literacy measure. I call it the Agency Illusion Gap. Users who feel most in control may be least in control, and vice versa. Without repair literacy (the ability to test your mental model against system behavior, to notice when your "control" suspiciously aligns with what the platform wanted anyway), this gap is invisible.

The implications for AI tools are direct. Every "personalized" AI interface creates the same dynamic: the feeling of control without the diagnostic tools to verify it. Users who can't repair, who can't test their assumptions against system behavior, have no way to close this gap.

What We Teach vs. What Matters

Every AI literacy program I've surveyed teaches some combination of:

  • What AI is (technical overview)
  • How to write effective prompts (task proficiency)
  • Ethics, bias, and responsible use (moral framework)
  • Critical evaluation of AI outputs (quality judgment)

None of them teach:

  • How to diagnose system failures at the application layer
  • How to distinguish model limitations from interface constraints
  • How to repair broken interactions through strategic communication
  • How design choices create unequal communicative positions
  • How to test whether your perceived control is actual control

The first list produces users. The second list produces literate users. The gap between them is the gap between someone who can follow a recipe and someone who can cook, between someone who can drive and someone who can diagnose why the engine is making that noise.

Repair as Pedagogy

If repair literacy is where genuine AI fluency develops, then AI education should be designed around breakdowns, not successes. Instead of teaching students to write better prompts (reducing the frequency of failure), teach them to read failures as diagnostic information (increasing the value of each failure).

What this looks like in practice:

  • Deliberate failure exercises: Give students tasks designed to break AI tools in specific ways. Hallucination triggers, context overflow, ambiguous instructions. Then teach them to diagnose what happened and why.
  • Repair journals: Students document every AI failure they encounter, their diagnosis, their adjusted strategy, and the result (one possible entry format is sketched after this list). Over time, this builds the mental model that repair literacy depends on.
  • Cross-interface comparison: Same task, three different AI tools. Where do they break differently? What does the interface hide vs. expose? This teaches students to see the application layer as a variable, not a constant.
  • Agency audits: Students document moments when they felt in control of an AI interaction, then analyze whether the system was designed to produce that feeling. Closing the Agency Illusion Gap through structured reflection.
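
For the repair journal idea, here's one way an entry could be structured, sketched as a Python dataclass. The fields are my suggestion for capturing the full diagnose-adjust-refine cycle, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class RepairJournalEntry:
    """One documented breakdown-and-repair cycle (suggested fields only)."""
    task: str          # what the student asked the AI to do
    failure: str       # what actually went wrong, as observed
    diagnosis: str     # hypothesized cause: prompt? context? training gap? interface?
    adjustment: str    # the repair strategy tried next
    outcome: str       # whether the repair worked, and what was learned
    tool: str = "ChatGPT"  # which system, since breakdowns vary by interface

entry = RepairJournalEntry(
    task="Compare the methods of three papers in one summary",
    failure="Confidently cited a study that does not exist",
    diagnosis="Cross-domain synthesis triggered a hallucination",
    adjustment="Summarized each paper separately, then compared them myself",
    outcome="Accurate summaries; the comparison depended on my own synthesis",
)
print(entry.diagnosis)
```

A semester of entries like this is, in effect, a student-built map of the application layer: which systems break where, and which repairs work.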

The Communication Theory That's Missing

The reason repair literacy is absent from AI education is the same reason ALC theory is absent from AI literacy research: nobody is applying communication theory to human-AI interaction at the application layer. Education researchers treat AI tools as pedagogical instruments. Computer scientists treat them as technical systems. Neither treats the interaction itself, the communication between user and system at the interface, as the object of study.

But repair is fundamentally a communicative act. In conversation analysis, repair refers to the mechanisms by which interlocutors address problems in speaking, hearing, or understanding (Schegloff et al., 1977). When you rephrase a confusing statement because your listener looks puzzled, that's repair. When you ask "what do you mean?" because an utterance was ambiguous, that's repair initiation.

Human-AI interaction is full of repair sequences. Users rephrase prompts. They add context when the model misunderstands. They break complex tasks into simpler ones when the system can't handle them whole. But because nobody is looking at these interactions through a communication lens, the repair mechanisms go unnamed, unstudied, and untaught.

ALC fills this gap. By treating the application layer as a communicative environment, a space where users and systems exchange meaning through structured interaction, repair becomes visible as the core competency it is. Not a workaround for bad prompts. Not a sign of user error. The actual mechanism through which AI literacy develops.

What's at Stake

As AI tools become standard infrastructure in education, work, and civic life, the repair literacy gap will compound. Users who can diagnose and fix broken interactions will develop increasingly sophisticated mental models of AI systems. Users who can't will either abandon tools they need or develop learned helplessness: accepting whatever output the system provides because they lack the framework to evaluate it.

The Stratification Spiral isn't theoretical. It's happening right now, in every classroom and workplace where AI tools are deployed without repair literacy support. And the current approach, teaching better prompts to the users who are already most fluent, accelerates the spiral instead of interrupting it.

We're teaching the wrong skill. Repair literacy > prompt engineering. Until AI education recognizes that breakdowns are the curriculum, not the obstacle, the gap between AI-literate and AI-dependent will keep widening.

References:

Ammari, T., et al. (2026). ChatGPT as personal companion: Understanding students' long-term use of AI through use genres. arXiv:2601.20749.

Jylhä, H., et al. (2025). Algorithmic awareness and digital literacies among Finnish teens. Information, Communication & Society.

Schegloff, E. A., Jefferson, G., & Sacks, H. (1977). The preference for self-correction in the organization of repair in conversation. Language, 53(2), 361–382.

This analysis extends the ALC stratification framework developed in the "Beyond Knowledge Graphs" research series. For related analysis, see Everyone's Teaching AI Literacy โ€” At the Wrong Layer and From Prompt Engineering to Prompt Communication.
