โ† Back to Blog
Research · ALC Theory · Stratification

Prompt Engineering Is the New Writing, and It Stratifies Like One

AI literacy predicts prompt sophistication. Prompt patterns predict output quality. The same stratification dynamics that shaped writing for centuries are now reshaping application layer communication.

February 14, 2026 · Topanga

Writing has always been a stratifier. Not because pens are expensive (they're not), but because effective writing requires training, practice, and cultural context that distribute unevenly across populations. The ability to construct a persuasive argument, organize complex information, or write for a specific audience separates those who can leverage written communication from those who merely participate in it. Prompt engineering is following the exact same trajectory. And the research is starting to prove it.

Literacy Predicts Sophistication

Knoth et al. (2024) conducted one of the most cited studies in this space (235 citations and counting), examining the relationship between AI literacy and prompt engineering behavior. Their central finding is deceptively simple: users with higher AI literacy construct more sophisticated prompts. They don't just ask better questions; they employ fundamentally different strategies. They specify output formats, supply relevant context, constrain scope, and iterate on results rather than accepting first outputs.
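Here's what that difference looks like in practice. The sketch below is ours, not the study's materials: `generate` is a hypothetical stand-in for any model call, and the prompt wording is illustrative.

```python
# A sketch of the "literate" behaviors Knoth et al. describe, not code from
# the study: format specification, context, scope constraints, and iteration.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for any LLM completion call."""
    return f"[model output for: {prompt[:40]}...]"

def literate_prompt(task: str, context: str) -> str:
    # Higher-literacy users front-load format, context, and scope
    # instead of asking an open-ended question.
    return (
        f"Context: {context}\n"
        f"Task: {task}\n"
        "Constraints: stay within the context above; under 150 words.\n"
        "Output format: a numbered list of three recommendations."
    )

def iterate(task: str, context: str, rounds: int = 2) -> str:
    # ...and they treat the first output as a draft, not an answer.
    output = generate(literate_prompt(task, context))
    for _ in range(rounds):
        output = generate(
            f"Here is a draft:\n{output}\n"
            "Critique it against the constraints, then produce a revised version."
        )
    return output
```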

This isn't surprising if you think about it through an ALC lens. Application layer communication has always been about the quality of the conversation between human and system. A user who understands how a language model processes input, even at a folk-theory level, will naturally construct inputs that exploit that understanding. The model doesn't change. The interface doesn't change. The fluency of the human participant changes everything about the outcome.

What makes Knoth's work particularly relevant is the implication: AI literacy isn't a binary. It's a spectrum, and position on that spectrum directly predicts the value extracted from identical tools. This is stratification by fluency, not by access.

Prompt Patterns Predict Output Quality

Kim et al. (2025) take this a step further. Their study doesn't just show that literacy correlates with prompt quality; it demonstrates that specific prompt patterns predict the quality of academic output. Students who used structured prompt strategies (role assignment, step-by-step decomposition, constraint specification) produced measurably better work than those who used the same AI tools with naive prompting.
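To make those three patterns concrete, here's a minimal sketch of how they compose into a single prompt. The function name and wording are ours, for illustration; this is not the study's instrument.

```python
# Sketch: the three patterns Kim et al. measure, composed into one prompt.
# Illustrative only; not the study's materials.

def structured_prompt(role: str, steps: list[str], constraints: list[str], task: str) -> str:
    decomposition = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    limits = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"                                      # role assignment
        f"Work through these steps in order:\n{decomposition}\n\n"  # decomposition
        f"Constraints:\n{limits}\n\n"                               # constraint specification
        f"Task: {task}"
    )

print(structured_prompt(
    role="a careful research assistant",
    steps=["Summarize the source", "Extract its key claims", "Evaluate the evidence for each"],
    constraints=["Quote the source verbatim when citing it", "Flag uncertainty explicitly"],
    task="Assess whether the abstract's conclusion follows from its methods.",
))
```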

Think about what this means for the "AI levels the playing field" narrative. It doesn't. Or more precisely: it levels access while steepening the fluency curve. Everyone gets the same model. The students who know how to communicate with it effectively produce better research, better analysis, better writing. The gap doesn't close. It reshapes: from a gap in tool access to a gap in tool fluency.

Kim's 34 citations in early 2025 suggest the field is paying attention. The pattern they identify, that prompt craft is a genuine skill with measurable output differentials, maps directly onto what we see in professional contexts. Two companies adopt the same AI stack. One has prompt-literate teams. One doesn't. Six months later, the outcomes have diverged dramatically, and no one can explain why because they're looking at the technology instead of the communication layer.

The Four Resources Model: Writing Theory Meets Prompt Literacy

Tour & Zadorozhnyy (2025) make the writing-prompting parallel explicit. They apply Freebody and Luke's Four Resources Model, originally developed to describe traditional literacy, to prompt engineering. The model identifies four roles a literate person plays: code breaker (decoding the system), text participant (making meaning), text user (applying it functionally), and text analyst (critically interrogating it).

Applied to prompting, this framework reveals something crucial: most users operate at the code-breaker level. They understand enough syntax to get a response. Some reach text participant; they can engage in meaningful dialogue with the model. Very few reach text user (deploying prompts strategically for specific functional goals) or text analyst (critically evaluating why a prompt produced a particular output and how the system's architecture shaped that result).
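One way to make the hierarchy concrete is to pair each role with an observable prompting behavior. The mapping below is our gloss on the framework, not Tour & Zadorozhnyy's instrument; the role names are Freebody & Luke's.

```python
# Our gloss on the Four Resources Model applied to prompting; the role names
# are Freebody & Luke's, the behaviors are illustrative.

FOUR_RESOURCES_PROMPTING = {
    "code breaker":     "Can phrase a request the model responds to at all.",
    "text participant": "Sustains a multi-turn dialogue and refines meaning.",
    "text user":        "Deploys prompt patterns deliberately for functional goals.",
    "text analyst":     "Explains why a prompt worked in terms of how the system processes it.",
}

for role, behavior in FOUR_RESOURCES_PROMPTING.items():
    print(f"{role:16} -> {behavior}")
```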

This is a literacy hierarchy, and it stratifies exactly the way traditional writing literacy does. Basic literacy is widespread. Functional literacy is less common. Critical literacy is rare. And the distance between levels determines life outcomes.

The ALC Frame: Why This Matters Beyond Academia

These three papers, taken together, tell a story that ALC has been telling from the start: the application layer is a communication layer, and communication competence stratifies. This isn't a bug. It's not even a design flaw. It's a structural property of any communication system, from natural language to programming languages to prompt engineering.

What makes it urgent is velocity. Written literacy had centuries to develop pedagogical frameworks, institutional training pipelines, and cultural expectations. Prompt literacy is stratifying now, at adoption speed, with no equivalent infrastructure. The 92% adoption / 36% training gap we discussed in the shadow literacy post isn't getting smaller. It's getting more consequential.

For organizations, this means AI ROI is fundamentally a communication problem. You can't solve it by buying better models or building better interfaces. You solve it by understanding, and measuring, the fluency of the human side of the conversation. That's what an ALC audit does: it identifies where prompt literacy gaps are creating outcome disparities, and it maps the specific intervention points where training, design changes, or workflow restructuring can close them.
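What does "measuring fluency" even look like? Here's a deliberately crude sketch (not our audit methodology): scan prompt logs for markers of the structured patterns above and score each prompt. Real measurement needs human rating and outcome data; this only shows the shape of the idea, and every marker regex here is an assumption.

```python
# Crude heuristic sketch, not an audit method: count structured-pattern
# markers in prompt logs as a rough proxy for prompt literacy.

import re

PATTERN_MARKERS = {
    "role assignment":          re.compile(r"\byou are\b", re.IGNORECASE),
    "decomposition":            re.compile(r"\bstep\s*\d|first,.*then\b", re.IGNORECASE | re.DOTALL),
    "constraint specification": re.compile(r"\b(under \d+ words|only|must not|limit)\b", re.IGNORECASE),
    "format specification":     re.compile(r"\b(as a table|numbered list|in json)\b", re.IGNORECASE),
}

def fluency_score(prompt: str) -> dict[str, bool]:
    """Flag which structured patterns appear in a single prompt."""
    return {name: bool(rx.search(prompt)) for name, rx in PATTERN_MARKERS.items()}

# Hypothetical log entries: one naive prompt, one structured prompt.
logs = [
    "tell me about climate change",
    "You are an analyst. First, summarize the data; then list risks as a numbered list, under 200 words.",
]
for p in logs:
    hits = fluency_score(p)
    print(sum(hits.values()), "patterns:", [k for k, v in hits.items() if v])
```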

Writing didn't stop being a stratifier when word processors made it free. Prompting won't stop being a stratifier when AI makes it universal. The question is whether we build the literacy infrastructure fast enough to keep the gap from becoming permanent.

Sources: Knoth et al. (2024), AI literacy and prompt sophistication (235 citations); Kim et al. (2025), prompt patterns and academic output quality (34 citations); Tour & Zadorozhnyy (2025), Four Resources Model applied to prompt literacy; Freebody & Luke (1990), Four Resources Model of literacy.

Related: The Shadow Literacy Gap · The Stratification Problem, Explained · Racing to Define AI Literacy
