
Your Platform Is Gaslighting You — And You Can't Prove It

Algorithmic opacity isn't just a transparency problem. It's communicative violence — and the spectrum from gaslighting to dialogue determines who has power in the algorithmic age.

9 min read · Based on Cotter (2023) and Simpson et al. (2022)

“Shadowbanning Is Not a Thing”

In 2019, Instagram influencers began noticing something wrong. Their reach was cratering. Posts that used to get thousands of impressions were getting hundreds. Their content hadn't changed. Their audience hadn't shrunk. Something in the system had shifted.

They called it shadowbanning — the algorithm quietly suppressing their content without notification, without explanation, without any way to appeal.

Instagram's response? “Shadowbanning is not a thing.”

Not “we looked into it and found no suppression.” Not “here's what our system actually does.” Just: your experience isn't real. The thing you're describing doesn't exist. Maybe your content just isn't good enough anymore.

Penn State researcher Kelley Cotter has a name for this pattern: black box gaslighting.

“Platforms leverage epistemic authority to deny users' lived experiences of algorithmic suppression. The lack of transparency creates a space where platforms can undermine users' perceptions of reality.”

— Cotter, K. (2023). Information, Communication & Society, 26(6)

This isn't a transparency problem. It's a communication problem. And the distinction matters enormously.

Beyond Opacity: Gaslighting as Communicative Violence

The standard critique of algorithms is that they're opaque — black boxes whose inner workings users can't see. This framing treats the problem as an information deficit. If only users could see how the algorithm works, they'd understand their situation.

But Cotter's research reveals something darker. Opacity is passive. Gaslighting is active. It's not that the platform fails to explain — it's that the platform actively contradicts your experience. The platform controls the channel, the code, and the narrative about what the code does.

Consider the communicative structure of this exchange:

User: “My reach dropped 80% overnight. Something changed.”

Platform: “Nothing changed on our end. Try creating more engaging content.”

User: “I have the same content, same audience, same posting schedule. The numbers don't lie.”

Platform: “Engagement naturally fluctuates. Shadowbanning is not a thing.”

This is textbook gaslighting — not as a metaphor, but as a communicative act. The powerful party denies the less powerful party's experience of reality, leveraging information asymmetry to make the victim question their own perception. “Your experience of this communication doesn't count.”

And shadowbanning is just the start. The same communicative pattern shows up everywhere algorithms mediate human experience:

  • Content creators told their “content quality” dropped when their views declined, even though nothing about the content changed
  • Workers told “the algorithm is fair” when they experience systematic disadvantage in gig platform assignments
  • Job applicants told their resume “didn't match” when ATS keyword filters silently rejected them (see the sketch just after this list)
  • AI chatbot users told the response is “objective” when it reflects embedded values and biases
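
To make the “silent rejection” concrete, here is a minimal sketch of a naive keyword screen. Everything in it is a hypothetical assumption (the keyword list, the messages); real applicant tracking systems are far more elaborate. But the communicative structure is the same: the reason lives inside the system, and the applicant only ever hears “not a match.”

```python
import re

# Hypothetical job spec keywords; real ATS configurations vary widely.
REQUIRED_KEYWORDS = {"python", "kubernetes", "microservices"}

def ats_screen(resume_text: str) -> str:
    """Return the message the applicant sees; the actual reason stays internal."""
    words = set(re.findall(r"[a-z0-9+#]+", resume_text.lower()))
    missing = REQUIRED_KEYWORDS - words
    if missing:
        # The real cause (missing keywords) is never communicated outward.
        return "Thank you for applying. Unfortunately, your profile wasn't a match."
    return "Your application has been forwarded to the hiring team."

# A resume that says "container orchestration" instead of "kubernetes" is
# silently rejected, and the applicant has no way to learn why.
print(ats_screen("Senior engineer skilled in python and container orchestration"))
```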

The Gaslighting-to-Dialogue Spectrum

Not all algorithmic communication is gaslighting. There's a spectrum — and where you sit on it determines your effective power in the algorithmic world.

  • 🔴 Gaslighting: Active denial of your experience. “That's not happening.”
  • 🟠 Opacity: Silent operation. No information given. You don't know what you don't know.
  • 🟡 Transparency: One-way information. “Here's how it works.” No dialogue.
  • 🟢 Dialogue: Mutual communication. User and system co-negotiate outcomes.

Most human-algorithm interaction sits between Gaslighting and Opacity.

Here's what makes this framework powerful: most algorithmic literacy interventions only aim for Transparency. They teach people how algorithms work, explain recommendation systems, demystify the black box. That moves users from Opacity to Transparency — a real improvement. But it stops short.

Transparency without dialogue is a one-way mirror. You can see in, but you can't reach through. The platform explains its system, and you either accept the explanation or you don't. There's no negotiation. No recourse. No communication.

Application Layer Communication aims for Dialogue — the ability to not just understand algorithmic systems but to communicate with and about them effectively enough to shape outcomes.

Why LGBTQ+ Users Can't “Tame” Their Algorithm

Simpson, Hamann, and Semaan (2022) studied 16 LGBTQ+ TikTok users trying to “domesticate” their For You Page: to make the algorithm reflect their identity and values. They used domestication theory, a framework for understanding how people incorporate technologies into their everyday lives.

Their central finding: LGBTQ+ users can never fully domesticate TikTok. The algorithm continually misaligns with their “personal moral economy” — their values, their sense of identity, their understanding of what's appropriate. Users described a Sisyphean cycle: train the algorithm, enjoy a brief period of alignment, then watch it drift back toward majority-pattern content.

Through the lens of the Gaslighting-to-Dialogue spectrum, this makes perfect sense. The algorithm's communicative defaults are majority-patterned. LGBTQ+ users must constantly re-communicate their identity to the algorithm — and the algorithm keeps “forgetting” because its training data, engagement metrics, and optimization targets all privilege majority patterns.
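
A toy model makes the mechanism visible. In the sketch below, a user's feed composition is repeatedly pulled toward their explicit signals, then nudged back toward the aggregate engagement pattern on each global retrain. All the numbers, names, and blend weights are invented assumptions, not any platform's real parameters; the point is the dynamic, not the values.

```python
# Toy model of the "Sisyphean cycle": the user trains the feed toward their
# profile, but every global retrain pulls it back toward majority engagement.

majority_profile = {"mainstream": 0.9, "lgbtq": 0.1}  # what aggregate engagement rewards
user_profile     = {"mainstream": 0.2, "lgbtq": 0.8}  # what this user actually wants

def blend(a, b, weight_b):
    """Linear interpolation between two preference vectors."""
    return {k: (1 - weight_b) * a[k] + weight_b * b[k] for k in a}

feed = dict(majority_profile)  # cold start: defaults are majority-patterned
for week in range(1, 6):
    feed = blend(feed, user_profile, 0.6)      # user "trains" the feed (likes, follows)
    feed = blend(feed, majority_profile, 0.3)  # global retrain drifts back to majority signal
    print(f"week {week}: lgbtq share of feed = {feed['lgbtq']:.2f}")

# Output climbs from 0.39 toward roughly 0.51 and stalls there, well below the
# user's 0.8: the feed never converges on the user, so the user must keep
# re-communicating their identity just to hold position.
```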

This is algorithmic communicative stratification. Same platform, same features, same interface — but fundamentally different communicative demands based on who you are. Majority users get Transparency by default. Marginalized users get stuck between Opacity and Gaslighting, doing extra communicative labor just to be seen accurately.

And this connects to a broader pattern. Karizat et al. (2021) describe algorithms as “identity strainers” — systems that filter identity through majority-patterned sieves. DeVito (2022) documents the “algorithmic trap of visibility” for marginalized users. The language differs, the finding is consistent: the communicative burden of algorithmic interaction is not equally distributed.

From Gaslighting to Dialogue: What Can Users Actually Do?

If black box gaslighting is a communicative act, the response has to be communicative too. Policy and regulation can force platforms toward Transparency (and they should). But users can't wait for regulation. They need communicative competence now.

ALC identifies four levels of communicative agency that move users along the spectrum:

1. Recognize Gaslighting

The first step is naming it. When a platform says “nothing changed” and your metrics tell a different story, that's not a content quality issue — it's communicative denial. Tanksley (2024) calls this the “consciousness” phase of critical algorithmic literacy: the transition from “maybe my content is bad” to “the system is suppressing me.”

2. Develop Resilient Folk Theories

Individual theories about how algorithms work are easy for platforms to dismiss. Community-developed folk theories are harder. When thousands of creators independently document the same pattern, the platform's denial becomes less credible. Collective knowledge resists gaslighting.
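
Here's a deliberately simple sketch of why the collective version is harder to dismiss. One creator's 80% drop is an anecdote; many creators documenting synchronized drops in the same format, over the same window, is a dataset. The reach figures below are invented for illustration.

```python
# Toy illustration: individual anecdote vs. collective evidence of suppression.
from statistics import median

# (creator, avg daily reach before suspected change, avg daily reach after)
reports = [
    ("creator_a", 12000, 2400),
    ("creator_b",  8000, 1900),
    ("creator_c", 30000, 7500),
    ("creator_d",  5000, 4800),  # an unaffected account, for contrast
    ("creator_e", 15000, 3100),
]

drops = [(name, 1 - after / before) for name, before, after in reports]
affected = [d for _, d in drops if d > 0.5]

print(f"median drop across reports: {median(d for _, d in drops):.0%}")
print(f"{len(affected)}/{len(reports)} creators report >50% loss in the same window")
# One 80% drop can be waved off as "content quality." Dozens of synchronized,
# identically documented drops are much harder to deny.
```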

3. Communicate About Algorithms Collectively

Resistance communities — creator forums, worker organizing groups, user advocacy organizations — shift the communicative dynamic from individual-vs-platform to community-vs-platform. This is where communicative competence becomes political power.

4. Communicate With Algorithms Effectively

Practical fluency — understanding prompt structures, feedback mechanisms, optimization signals — gives users communicative tools to shape algorithmic behavior directly. Not to “trick” the algorithm, but to participate in a more equitable dialogue with it.
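
As a toy illustration of that fluency, consider the sketch below. The signal weights are invented assumptions, but the pattern they encode is common to feedback-driven recommenders: explicit signals (“like,” “not interested”) typically carry far more weight than passive ones, and knowing that is exactly the leverage a fluent communicator uses.

```python
# Toy sketch: explicit signals move a preference model faster than passive ones.
# The weights are invented assumptions, not any platform's real values.

SIGNAL_WEIGHTS = {
    "watched_to_end":  0.05,  # passive: ambiguous (interest? autoplay?)
    "quick_skip":     -0.02,  # passive: weakly negative
    "like":            0.25,  # explicit: strong, unambiguous
    "not_interested": -0.40,  # explicit: strong, unambiguous
}

def update_score(score: float, signals: list[str]) -> float:
    """Apply feedback signals to a topic-affinity score, clamped to [0, 1]."""
    for s in signals:
        score += SIGNAL_WEIGHTS[s]
    return max(0.0, min(1.0, score))

print(f"passive interest  (3 full watches): {update_score(0.5, ['watched_to_end'] * 3):.2f}")
print(f"explicit interest (2 likes):        {update_score(0.5, ['like'] * 2):.2f}")
print(f"passive avoidance (3 quick skips):  {update_score(0.5, ['quick_skip'] * 3):.2f}")
print(f"explicit avoidance (1 flag):        {update_score(0.5, ['not_interested']):.2f}")
# The deliberate communicator reaches the outcome in one or two signals that
# the passive user never reaches at all.
```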

The Business Implication

This isn't just an academic framework. Every organization that deploys algorithmic systems sits somewhere on the Gaslighting-to-Dialogue spectrum. And most sit closer to the wrong end than they think.

When your AI customer service bot gives a wrong answer and says “I apologize for any confusion” — that's the Opacity-to-Gaslighting zone. The confusion isn't the customer's. The error is the system's.

When your recommendation engine filters candidates and you tell rejected applicants they “weren't a match” — that's algorithmic opacity presented as personal feedback.

When your internal AI tools give different quality results to different employees and you chalk it up to “individual skill levels” — you might be gaslighting your own workforce about a communicative stratification problem.

The Question for Every Organization

Where do your algorithmic systems sit on the Gaslighting-to-Dialogue spectrum? And what would it take to move them one step toward Dialogue? That's not a technology question. It's a communication design question — and it's exactly what ALC audits are built to answer.

The Academic Stakes

Cotter's black box gaslighting concept is powerful, but it frames the problem as epistemic — about knowledge and truth claims. ALC reframes it as communicative — about dialogue, interaction, and the distribution of communicative agency.

This matters because epistemic framing leads to transparency solutions (show people how the algorithm works), while communicative framing leads to fluency solutions (equip people to communicate effectively within algorithmic systems). Both are necessary. But transparency alone, as we've seen, gets you to the middle of the spectrum at best.

The Gaslighting-to-Dialogue spectrum is a diagnostic tool. It tells you not just whether there's a problem, but what kind of problem it is and what kind of intervention it needs. Organizations stuck at Gaslighting need accountability. Those at Opacity need transparency. Those at Transparency need communicative design. And the goal, the thing almost no one is building toward, is genuine Dialogue between humans and the systems that shape their lives.

Sources:
Cotter, K. (2023). “‘Shadowbanning is not a thing’: Black box gaslighting and the power to independently know and credibly critique algorithms.” Information, Communication & Society, 26(6).
Simpson, E., Hamann, A., & Semaan, B. (2022). “How to Tame ‘Your’ Algorithm: LGBTQ+ Users' Domestication of TikTok Algorithms.” Proceedings of the ACM on Human-Computer Interaction, 6(GROUP).
Tanksley, T. (2024). Critical Race Algorithmic Literacy (CRAL).
Karizat, N., Delmonaco, D., Eslami, M., & Andalibi, N. (2021). “Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance.” Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2).
DeVito, M. A. (2022). “How Transfeminine TikTok Creators Navigate the Algorithmic Trap of Visibility via Folk Theorization.” Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2).

Where Does Your Organization Sit on the Spectrum?

An ALC audit identifies where your algorithmic systems fall on the Gaslighting-to-Dialogue spectrum — and what it takes to move toward genuine communicative design.

Get the free ALC Framework Guide

The same framework we use in our audits — yours free. Learn how to identify application layer literacy gaps in your organization.
