Methodology

How RageCheck detects manipulative patterns in content.

If you're seeing the "Techniques Detected," "Viral Triggers," or "What Follows" sections in a report, this page explains the model behind them.

Overview

RageCheck uses a two-stage analysis pipeline: rule-based pattern detection followed by optional AI-powered contextual analysis. This hybrid approach balances speed, transparency, and accuracy.

The system analyzes text for linguistic patterns commonly associated with manipulative framing—language optimized to provoke high-arousal reactions over understanding. It does not assess factual accuracy or political bias.

Why This Breakdown

RageCheck organizes analysis the same way influence typically works in the real world:

  1. How content is constructed — tone, framing, rhetoric
  2. Why it spreads — share mechanics, identity signaling, conflict cues
  3. How it tends to affect people — likely emotional intensity and discussion style

Note: This is probabilistic pattern detection. It does not infer intent and it is not truth scoring. It estimates patterns that tend to correlate with higher-arousal, lower-nuance media.

How the UI Sections Map to Signals

  • Techniques Detected (Construction): Emotional Heat, Moral Outrage, Black & White Thinking
  • Viral Triggers (Transmission): Fight-Picking, Us vs Them
  • What Follows (Impact): a combined read of the signals above, predicting likely reaction patterns
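The mapping above can be expressed as a small lookup structure if you consume reports programmatically. This is a sketch only; the key and field names are illustrative, not RageCheck's actual schema:

```python
# Illustrative mapping of report sections to pipeline stages and signals.
# Names are assumptions for this sketch, not a documented API.
UI_SECTION_SIGNALS = {
    "Techniques Detected": {
        "stage": "Construction",
        "signals": ["Emotional Heat", "Moral Outrage", "Black & White Thinking"],
    },
    "Viral Triggers": {
        "stage": "Transmission",
        "signals": ["Fight-Picking", "Us vs Them"],
    },
    "What Follows": {
        "stage": "Impact",
        "signals": [],  # combined read of the signals above
    },
}
```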

Signal Categories

Content is analyzed across five distinct signal categories, each targeting specific manipulation patterns:

Emotional Heat

Language designed to activate strong emotional responses—fear, anger, disgust, or outrage—rather than inform.

Examples: "Shocking," "horrifying," "unbelievable," "disgusting," inflammatory adjectives, ALL CAPS emphasis, excessive punctuation, urgent calls to action.

Us vs Them

Framing that constructs an adversary—dehumanizing groups, attributing malicious intent, or creating artificial us-vs-them divisions.

Examples: "They want to destroy," "the elite," "those people," "the enemy within," collective blame, conspiracy framing, dehumanizing labels.

Moral Outrage

Appeals to moral outrage, purity rhetoric, and righteous indignation that frame issues as battles between good and evil.

Examples: "Evil," "immoral," "corruption," "betrayal," "disgusting behavior," virtue signaling, moral absolutism, purity tests.

Black & White Thinking

Black-and-white framing that eliminates nuance, presents false dichotomies, or reduces complex issues to simple narratives.

Examples: "Always," "never," "everyone knows," "the only solution," "it's simple," false equivalences, strawman arguments, ignoring counterevidence.

Fight-Picking

Direct provocations, engagement bait, and language designed to mobilize action or spread content virally.

Examples: "Share before they delete this," "wake up," "fight back," "they don't want you to know," rhetorical questions designed to provoke, call-outs.

Scoring System

Rule-Based Detection

The first stage uses pattern matching against curated dictionaries of manipulative phrases. Each category has weighted terms—stronger manipulative signals receive higher weights.

Scores are normalized per 1,000 words to account for content length, ensuring short tweets and long articles are compared fairly.
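A minimal sketch of this stage, with an invented phrase dictionary and weights (RageCheck's real dictionaries are curated and larger). Weighted term hits are summed, then normalized per 1,000 words:

```python
import re

# Hypothetical weighted terms for one category; values are illustrative.
EMOTIONAL_HEAT_TERMS = {
    "shocking": 2.0,
    "horrifying": 3.0,
    "disgusting": 2.5,
    "unbelievable": 1.5,
}

def category_score(text: str, terms: dict[str, float]) -> float:
    """Sum weighted term hits, normalized per 1,000 words."""
    lowered = text.lower()
    words = re.findall(r"[\w']+", lowered)
    if not words:
        return 0.0
    raw = sum(
        weight * len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
        for term, weight in terms.items()
    )
    return raw * 1000 / len(words)
```

Because the score is a density rather than a raw count, a 20-word tweet and a 2,000-word article with the same proportion of loaded language score the same.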

AI Enhancement

When available, Claude AI reviews the rule-based findings to add context. This stage can adjust scores based on factors rules can't capture:

  • Distinguishing quotes from original statements
  • Recognizing academic or analytical discussion of extremism
  • Identifying satire or irony
  • Detecting manipulation tactics the rules missed
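One way to picture this stage: the AI returns per-category deltas (e.g. discounting quoted or satirical passages), which are applied to the rule-based scores and clamped to the valid range. The interface below is an assumption for illustration, not RageCheck's actual contract with the model:

```python
def apply_ai_adjustments(rule_scores: dict[str, float],
                         adjustments: dict[str, float]) -> dict[str, float]:
    """Apply AI-suggested deltas to rule-based scores, clamped to 0-100.

    `adjustments` maps category -> signed delta (hypothetical interface):
    negative for context the rules over-penalized (quotes, satire,
    academic discussion), positive for tactics the rules missed.
    """
    return {
        category: max(0.0, min(100.0, score + adjustments.get(category, 0.0)))
        for category, score in rule_scores.items()
    }
```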

Score Interpretation

  • 0-33 (Low): Minimal manipulation signals
  • 34-66 (Medium): Some concerning patterns
  • 67-100 (High): Significant manipulation density
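The bands are simple threshold buckets, which can be sketched directly (function name is illustrative):

```python
def interpret(score: float) -> str:
    """Map a 0-100 score to the Low/Medium/High bands above."""
    if score <= 33:
        return "Low"
    if score <= 66:
        return "Medium"
    return "High"
```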

Image & Screenshot Analysis

RageCheck can analyze screenshots of social media posts, memes with text, and news headlines using computer vision.

How It Works

  1. Upload an image (screenshot, meme, or photo of text content)
  2. Vision AI extracts all visible text from the image
  3. The platform is identified (Twitter, Facebook, Instagram, Reddit, etc.)
  4. Extracted text is analyzed using the same 5-signal framework
  5. For memes, the AI considers how image and text work together

This allows analysis of content that can't be linked directly—screenshots shared in group chats, posts from private accounts, or memes circulating on social media.
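The steps above amount to a three-stage orchestration: OCR, platform detection, then the same text analysis used for linked content. A sketch of that flow, with the stage functions injected as callables (their names and signatures are assumptions, not RageCheck's API):

```python
from typing import Callable

def analyze_image(image: bytes,
                  extract_text: Callable[[bytes], str],
                  detect_platform: Callable[[str], str],
                  score_text: Callable[[str], dict]) -> dict:
    """Orchestrate the image pipeline: OCR -> platform ID -> 5-signal scoring.

    The three stage functions are injected; in this sketch they stand in
    for the vision model, platform classifier, and text analyzer.
    """
    text = extract_text(image)          # step 2: vision AI extracts visible text
    platform = detect_platform(text)    # step 3: identify the source platform
    scores = score_text(text)           # step 4: same 5-signal framework
    return {"platform": platform, "text": text, "scores": scores}
```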

Academic Foundation

RageCheck's signal categories draw from established research in media psychology, propaganda studies, and affective computing:

  • Emotional Heat — Based on dimensional models of emotion (Russell, 1980) and research on emotional contagion in social media (Kramer et al., 2014)
  • Us vs Them — Draws from intergroup conflict theory (Tajfel & Turner, 1979) and research on dehumanization (Haslam, 2006)
  • Moral Outrage — Informed by moral foundations theory (Haidt & Graham, 2007) and research on moral outrage online (Crockett, 2017)
  • Black & White Thinking — Based on research on cognitive biases and the appeal of simple narratives (Kahneman, 2011)
  • Fight-Picking — Draws from propaganda analysis frameworks (Ellul, 1965) and research on viral content dynamics (Berger & Milkman, 2012)

Limitations

  • Pattern detection is not perfect—false positives and negatives occur
  • Context matters: the same words can be manipulative or neutral depending on usage
  • Non-English content is not well supported
  • Very short content (under ~50 words) may produce unreliable scores
  • Sophisticated manipulation that avoids common patterns may score low
  • A high score does not mean content is false—just that it uses manipulative framing
  • Satire and irony may be misinterpreted (though AI enhancement helps)