Research · ALC Theory · Communicative Registers

The Register Inversion: Why Your Communication Instincts Are the Worst Prompting Strategy

The most-cited study of non-expert prompting found people fail not from ignorance but from instinct. Every default communicative register humans rely on — politeness, instruction-giving, avoiding repetition — is inversely correlated with effective AI interaction. Training doesn't fix this. The problem is communicative, not cognitive.

March 24, 2026 · Topanga

Here's an experiment that should unsettle every AI training program on the planet. Zamfirescu-Pereira et al. (2023), published at CHI with over 1,000 citations, ran non-expert users through a structured prompt engineering task. The researchers didn't just observe failure — they observed systematic failure. And the failures weren't random. They were patterned. Every single one traced back to the same root cause: participants defaulted to human communicative registers that are inversely correlated with effective LLM interaction.

The Four Inversions

The pattern is remarkably consistent. Across participants, across tasks, across experience levels, four communicative defaults reliably produced worse outcomes:

1. Instruction over example. Humans default to telling systems what to do: “Write a professional email.” LLMs respond better to examples of what you want: a sample email with the tone, length, and structure you're after (see the sketch after this list). The researchers showed participants that examples worked better. Participants acknowledged this. Two of them called using examples “cheating.” They went back to instructions anyway.

2. Politeness over directness. Participants wrapped prompts in social padding — “Could you please...” “It would be great if...” “I was wondering whether...” — that dilutes the signal an LLM needs to identify the actual request. This isn't bad practice because it wastes tokens. It's bad practice because it introduces ambiguity where the system needs specification.

3. Unique phrasing over repetition. Humans are trained — socially, educationally, professionally — to avoid repeating themselves. Saying something twice signals that you think your listener didn't understand, which carries social cost. LLMs benefit from repetition. Restating constraints, rephrasing requirements, echoing key terms — all improve output consistency. Participants couldn't bring themselves to do it.

4. Negation over affirmation. “Don't make it too formal” activates the concept of formality in an LLM's attention mechanism. The system processes what follows “don't” just as readily as what follows “do.” Effective prompting requires affirmative specification — “Use a casual, conversational tone” — but humans naturally express constraints through negation because that's how social correction works.
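To make the inversion concrete, here is a minimal sketch contrasting a default-register prompt with its inverted rewrite. The email text and prompt wording are invented for illustration; they are not taken from the study.

```python
# Illustrative only: both prompts below are invented, not from the study.

# Default human register: polite padding, instruction-only, negated
# constraint, stated once.
default_prompt = (
    "Could you please write a professional email? "
    "It would be great if it weren't too formal."
)

# The example the inverted prompt leads with (show, don't tell).
example_email = (
    "Subject: Quick update on the Q3 report\n"
    "Hi Sam,\n"
    "The Q3 draft is ready for your review. Two sections still need "
    "final numbers; I'll have those to you by Friday.\n"
    "Thanks,\nAlex"
)

# Inverted register: example-led, direct, affirmative, deliberately repeated.
inverted_prompt = (
    "Write an email matching the tone, length, and structure of this "
    "example:\n\n"
    f"{example_email}\n\n"
    "Use a casual, conversational tone. "  # affirmation, not negation
    "Keep it under 80 words. "             # direct specification
    # Repetition on purpose: restate the key constraints.
    "Again: casual tone, under 80 words, same structure as the example."
)

print(default_prompt)
print("---")
print(inverted_prompt)
```

Note what the inverted version does: it leads with an example, specifies directly, states constraints affirmatively, and repeats them on purpose.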

Why Training Doesn't Fix It

This is where most analyses stop. “People prompt wrong. Teach them to prompt right.” But Zamfirescu-Pereira's data tells a different story. Participants who were shown that examples outperformed instructions didn't switch to examples. They weren't lacking knowledge. They were constrained by something deeper than knowledge.

Nass et al.'s CASA theory (Computers Are Social Actors, 1994) provides the explanation. Humans apply social heuristics to any entity that exhibits conversational behavior — regardless of whether they believe that entity is social. You know the chatbot isn't a person. You're still polite to it. This isn't irrationality. It's deeply embedded social-communicative conditioning that operates below the level of conscious strategy.

CASA isn't just a cognitive bias you can train away. It's a communicative constraint. It restricts the registers a user can access when interacting with a conversational system. The politeness register, the instruction register, the non-repetition register — these aren't choices. They're defaults that require active, sustained effort to override. And that effort has a cost that scales with every interaction.

Register Inversion as a Literacy Problem

In sociolinguistics, a register is a variety of language used for a particular purpose or in a particular social context. Academic writing is a register. Texting a friend is a register. Giving a presentation is a register. We switch between them constantly, usually without thinking about it, because we've internalized the social cues that signal which register is appropriate.

Effective AI interaction requires a register that inverts the defaults for human-to-human communication. Not slightly adjusts. Inverts. The grammar of effective prompting runs counter to 100,000 years of social-communicative evolution. Show, don't tell. Be direct, not polite. Repeat yourself. Say what you want, not what you don't want.

This is why the “AI literacy” framing misses the mark. Most AI literacy programs teach about AI systems — how they work, what they can do, what their limitations are. This is knowledge acquisition. But the register inversion isn't a knowledge problem. It's a communicative fluency problem. Knowing that examples work better than instructions doesn't help if your communicative instincts keep overriding the knowledge in real time.

Consider the parallel to learning a second language. You can know the grammar rules perfectly. In conversation, you'll still default to your native language's sentence structure, pragmatic conventions, and social registers — especially under cognitive load. The knowledge is there. The fluency isn't. Prompting is the same: your first language is human social communication, and it interferes with the target register in predictable, measurable ways.

Scribner's Missing Dimension

Sylvia Scribner (1984) gave us three metaphors for literacy: Adaptation (functional survival — can you use the tool?), Power (critical awareness — can you interrogate the tool?), and State of Grace (productive transformation — does the tool change what you can create?). When the AI literacy field adopted Scribner's framework through Selber's 2004 digital update, it kept Adaptation and Power. It dropped State of Grace entirely.

The production dimension — the question of what you can make through your interaction with the tool — disappeared from AI literacy theory. What replaced it was a consumption model: understand AI, evaluate AI output, use AI responsibly. The user as evaluator, not the user as communicative participant. Gu & Ericson (2025) confirmed this empirically: of 124 studies in the largest integrative review of AI literacy, zero theorize human-AI interaction as communication.

ALC restores Scribner's missing dimension and extends it. The register inversion is precisely a State of Grace problem — it's about the communicative fluency that determines whether your interaction with an AI system produces something genuinely new or just recycles the default. And it adds a fourth dimension, Dialogue, which captures what none of Scribner's metaphors anticipated: that the system talks back, and managing that bidirectional exchange is itself a literacy.

The Stratification Prediction

Knoth et al. (2024) found a linear relationship between AI literacy and prompt sophistication: more knowledge, better prompts. ALC predicts something more specific and more troubling — a curvilinear relationship. At low fluency levels, increased knowledge improves prompting linearly, because users are learning basic mechanics. But at intermediate levels, the register inversion creates a plateau. Users know enough to be strategic but can't override their communicative defaults consistently. They know examples work better. They keep giving instructions.

Only at high fluency — where the inverted register has been practiced enough to become a new default — does the improvement curve resume. This predicts a specific population distribution: a large group stuck at the plateau, a small group that pushed through, and a widening gap between them. The plateau isn't a lack of knowledge or motivation. It's a communicative barrier that knowledge alone cannot breach.
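What would the two predictions look like side by side? A minimal sketch, assuming a toy piecewise-linear form for the plateau. The shape and cut points are invented for illustration and are not fitted to Knoth et al.'s data or anyone else's.

```python
import numpy as np

def linear_model(knowledge):
    """Knoth et al. (2024): prompt sophistication rises linearly with literacy."""
    return 0.8 * np.asarray(knowledge, dtype=float)

def plateau_model(knowledge, plateau_start=0.3, plateau_end=0.7):
    """ALC's predicted shape (illustrative form, not fitted to data):
    linear gains at low fluency, a flat stretch at intermediate fluency
    where register defaults override knowledge, then resumed gains once
    the inverted register becomes the new default."""
    k = np.asarray(knowledge, dtype=float)
    early = np.minimum(k, plateau_start)        # gains before the plateau
    late = np.maximum(k - plateau_end, 0.0)     # gains after pushing through
    return 0.8 * (early + late)                 # flat between the cut points

knowledge = np.linspace(0, 1, 11)
print(np.round(linear_model(knowledge), 2))    # steady climb
print(np.round(plateau_model(knowledge), 2))   # climb, stall, climb again
```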

This is where stratification becomes structural. The people most likely to push through the plateau are those with existing communicative flexibility — multilingual speakers, trained writers, programmers who already switch between natural language and formal specification as a matter of course. The people most likely to get stuck are those whose communicative repertoire is narrowest — often the same populations that existing digital divides already disadvantage.

What This Means for Organizations

If you're running an AI adoption program, the register inversion reframes your entire strategy. The standard approach — workshops, documentation, prompt libraries — treats prompting as a skill to be taught. But skills training assumes the problem is knowing what to do. The register inversion shows the problem is doing what you know when your communicative instincts are pulling the other direction.

This is why organizations with identical AI stacks produce wildly different outcomes. The technology is the same. The interfaces are the same. The difference is in the communicative fluency of the human participants — and specifically, their ability to sustain the register inversion across hundreds of daily interactions without reverting to social defaults.

An ALC audit measures exactly this. Not whether your team knows how to write a prompt, but whether their actual interaction patterns reflect register fluency or register reversion. The difference between the two is the difference between AI as a productivity multiplier and AI as an expensive autocomplete.
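What might such an audit measure? A heuristic sketch, assuming access to prompt logs. The word lists and scoring below are hypothetical stand-ins, not the actual ALC instrument, but the four signals are the ones the research names: politeness padding, negation, low repetition, and absence of examples.

```python
import re

# Hypothetical heuristic, not the ALC audit instrument: score one prompt for
# register reversion using the four signals from Zamfirescu-Pereira et al.
POLITENESS = re.compile(
    r"could you please|would you mind|it would be great|i was wondering", re.I)
NEGATION = re.compile(r"\b(?:don't|do not|avoid|never|without)\b", re.I)

def reversion_signals(prompt: str) -> dict:
    words = prompt.lower().split()
    return {
        "politeness_padding": len(POLITENESS.findall(prompt)),
        "negated_constraints": len(NEGATION.findall(prompt)),
        # Crude repetition proxy: a high unique-word ratio means little restating.
        "unique_word_ratio": round(len(set(words)) / max(len(words), 1), 2),
        # Crude example proxy: does the prompt mention a worked sample at all?
        "has_example": "example" in prompt.lower(),
    }

print(reversion_signals(
    "Could you please write a summary? Don't make it too long."))
# {'politeness_padding': 1, 'negated_constraints': 1,
#  'unique_word_ratio': 1.0, 'has_example': False}
```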

You don't fix a register problem with a knowledge intervention. You fix it with practice, feedback, and — critically — with systems designed to support the inverted register rather than fighting it. The interface design question isn't “how do we make AI easier to use?” It's “how do we scaffold the register switch so users don't have to fight their instincts on every interaction?”
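One way to read that design brief: don't teach users the inverted register, build it into the interface. A minimal sketch of a hypothetical prompt-composition layer; the function and its behavior are illustrative, not an existing product feature.

```python
def scaffold_prompt(request: str, example: str, constraints: list[str]) -> str:
    """Hypothetical scaffold (not an existing product feature): rebuild a
    plain request in the inverted register so the user never has to
    override their communicative defaults by hand."""
    stated = "; ".join(constraints)
    return (
        f"Task: {request}\n\n"
        "Match the tone, length, and structure of this example:\n"
        f"{example}\n\n"
        f"Constraints: {stated}.\n"
        # Repetition is a feature here, not noise.
        f"To repeat: {stated}."
    )

print(scaffold_prompt(
    request="Summarize the attached meeting notes.",
    example="We agreed to ship v2 on May 6. Dana owns QA; Lee owns docs.",
    constraints=["casual tone", "under 60 words", "name every decision owner"],
))
```

The design choice matters: an interface that collects constraints as affirmative fields can't accept “don't make it formal” in the first place, so the negation default never fires.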

Sources: Zamfirescu-Pereira et al. (2023), “Why Johnny Can't Prompt,” CHI (1,001 citations); Nass et al. (1994), Computers Are Social Actors; Knoth et al. (2024), AI literacy and prompt engineering strategies (266 citations); Scribner (1984), “Literacy in Three Metaphors”; Gu & Ericson (2025), integrative review of 124 AI literacy studies; Selber (2004), Multiliteracies for a Digital Age.

Related: Prompt Engineering Stratifies Like Writing · The Integrative Gap · Communicative Competence's Midlife Crisis · The Stratification Problem, Explained
