Media is engineered to influence.
We make that visible.

The Media Integrity Institute is an independent research organization that detects persuasion techniques and addiction patterns in podcasts, news, and video content.

The techniques are invisible by design

Every day, millions of people consume hours of content built to shift what they believe, buy, and fear. The techniques doing that work have names. Researchers have been cataloging them for years: emotional manipulation, loaded language, false reasoning, selective framing, engineered addiction loops.

These aren't conspiracy theories. Peer-reviewed research published in Nature Communications, PNAS, and leading AI conferences shows that AI-generated persuasion is now twice as effective as human persuasion at deception. A single exposure can shift attitudes by 2.5 to 4 percentage points. And the people being influenced rarely notice it happening.

The cumulative effect of daily exposure across podcasts, news feeds, and social video is almost entirely unstudied. Nobody is measuring what that does to people over months and years. We think that needs to change.

Detection, not censorship

We don't fact-check. We don't rate bias. We don't tell anyone what to watch or what to avoid.

Instead, we built XrAE, a detection engine with a single purpose: identifying exactly how content is designed to persuade. It analyzes the rhetorical structure, not the truth claims. It tells you which techniques are being used, where they appear, and how intense they are.

XrAE runs on hardware we control, with no dependency on third-party AI services. It detects 32 distinct techniques across six families of influence, including eight addiction patterns that keep audiences coming back. Every detection maps to published research and cites the actual words from the source material.
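To make the shape of that output concrete, here is a minimal sketch of what a single detection record could look like. The field names, values, and schema are illustrative assumptions for this page, not XrAE's actual data model.

```python
from dataclasses import dataclass

# Hypothetical record shape: a detection names the technique, cites
# the source's actual words, rates intensity, and links the code back
# to published research. None of these field names are XrAE's own.
@dataclass
class Detection:
    code: str         # one of the 32 technique codes
    family: str       # one of the six families of influence
    quote: str        # the actual words cited from the source material
    intensity: float  # 0.0 (mild) to 1.0 (heavy)
    reference: str    # published study the code traces back to

d = Detection(
    code="loaded_language",
    family="Loaded language",
    quote="a shocking betrayal of everything we stand for",
    intensity=0.8,
    reference="SemEval-2023 Task 3 (ACL)",
)
```

The point of the structure is that every claim the engine makes is auditable: the quote ties it to the source, the reference ties it to the literature.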

Single purpose

Trained specifically for persuasion detection, not a general-purpose model pressed into a job it wasn't built for.

Independent infrastructure

Runs on hardware we own. No cloud API dependencies. No vendor lock-in. No one else sees the data.

32 detection codes

Covers emotional manipulation, faulty logic, loaded language, trust exploitation, framing, and addiction patterns.

Peer-reviewed foundation

10 published studies from PNAS, Nature, ACL, and CHI. Every detection code traces back to real research.

Tools for everyone

We believe this kind of analysis should be available to the public, not locked behind paywalls or limited to researchers. So we built two tools: one for everyone, and one for professionals who need deeper access.

OrgnIQ

Free. For everyone.

A nutrition label for your media. OrgnIQ scores podcasts and news articles on a 0-to-100 purity scale, showing you exactly what influence techniques are present and how heavily they're used. No account required. No cost.
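As a rough illustration of how a 0-to-100 purity score could be derived from per-technique intensities, here is a sketch. The formula, weights, and penalty cap are invented for this example; OrgnIQ's actual scoring method is not described here.

```python
def purity_score(intensities):
    """Map detected technique intensities (each 0.0-1.0) to a 0-100 score.

    Illustrative only: this is NOT OrgnIQ's real formula. 100 means no
    influence techniques were detected; heavier and more numerous
    techniques push the score toward 0.
    """
    if not intensities:
        return 100
    avg = sum(intensities) / len(intensities)
    # Penalize volume as well as intensity, capped at 40 points.
    count_penalty = min(len(intensities) * 5, 40)
    return max(0, round(100 * (1 - avg)) - count_penalty)
```

A clean article with no detections would score 100; a piece saturated with heavy techniques would bottom out at 0.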

Try OrgnIQ

Prizm

Professional media intelligence.

Built for litigation teams, media buyers, and researchers who need continuous monitoring, evidence-grade analysis, and full access to our detection data across 135+ actively monitored content sources.

Learn about Prizm

Consistent with peer-reviewed science

XrAE's approach aligns with what the academic community has independently documented. These are some of the relevant studies.

LLM-Generated Messages Can Persuade Humans on Policy Issues

Nature Communications, 2025

Large language model persuasion matches human persuasion effectiveness on policy issues, even when audiences know AI is involved.

When LLMs Are More Persuasive Than Incentivized Humans

arXiv (preregistered, N=1,242), 2025

AI-generated persuasion is twice as effective as human persuasion at deception, with measurable inoculation effects from repeated exposure.

ChatGPT Outperforms Crowd Workers for Text-Annotation Tasks

PNAS, 2023

LLM annotations outperform human crowd workers by 25 percentage points at 30x lower cost, validating AI-assisted content analysis at scale.

SemEval-2023 Task 3: Detecting Persuasion Techniques in News

ACL / SemEval, 2023

Establishes the canonical 23-technique taxonomy for persuasion detection in media, which XrAE extends with addiction pattern codes.

Start seeing what you've been missing

Check the podcasts and news sources you consume every day. You might be surprised by what you find.