What happened
The editorial failure in five steps
Source: Übermedien investigation, 17 February 2026. Neither false video was caught by the editorial workflow before broadcast.
✗
15 Feb 2026 · ZDF
heute journal broadcasts ICE deportation story
Anchor Dunja Hayali explicitly warns viewers that many fake ICE videos are circulating online. The broadcast then uses one.
✗
During broadcast · Viewer
Sora logo detected in broadcast video
A viewer notices the Sora AI watermark embedded in the clip — the same AI tool ZDF claimed to be warning about.
✗
Post-broadcast · ZDF
Statement: "technical reasons"
ZDF claims the AI-generated footage was meant to be labeled but the label was lost "for technical reasons." The footage had no business being in the segment at all.
✗
Post-broadcast · ZDF
Second false video discovered and silently deleted
A 2022 clip of a boy arrested for threatening a school shooting — mislabeled as an ICE arrest — was quietly removed with no explanation.
✗
Correction · ZDF
"Video aus redaktionellen Gründen nachträglich geändert" ("video subsequently changed for editorial reasons")
The correction note revealed nothing. No acknowledgment of what went wrong, why, or what was changed.
"Nicht nur der Beitrag ist völlig verkorkst und ärgerlich, auch der ganze Korrekturprozess." ("Not only is the segment completely botched and annoying, but so is the entire correction process.")
— Übermedien, 17 February 2026
The Factiator response
Five flags. Before broadcast.
Every claim and source submitted to Factiator is evaluated against the full epistemic framework. Here is what a pre-broadcast analysis would have produced.
[NK]
Unverifiable source origin
The AI-generated clip had no verifiable chain of custody. Factiator flags every source by type — [NK] means no external verification. An [NK]-tagged visual in a factual news segment triggers an automatic counterposition requirement.
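The [NK] rule described above can be sketched as a simple predicate. The `Source` type, the tag strings, and the context labels below are illustrative assumptions, not Factiator's actual API:

```python
from dataclasses import dataclass

@dataclass
class Source:
    tag: str       # e.g. "NK" = no external verification (assumed tag name)
    context: str   # e.g. "factual-news", "opinion" (assumed labels)

def requires_counterposition(src: Source) -> bool:
    # An [NK]-tagged source used in a factual news segment triggers
    # the automatic counterposition requirement.
    return src.tag == "NK" and src.context == "factual-news"

print(requires_counterposition(Source("NK", "factual-news")))  # True
```

The Sora clip, as described, would have hit exactly this branch: no chain of custody, used in a factual segment.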
[FI:?]
Unknown provenance
Who produced the Sora video and why? Origin unknown. Factiator requires [FI:?] disclosure and flags clips where provenance cannot be established — especially in politically charged contexts.
[K:0%]
Zero confidence for fabricated evidence
AI-generated footage presented as documentary evidence falls in the [K:0-9%] band: speculation or fabricated. The confidence score would have surfaced this as epistemically disqualifying before air.
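Mapping a confidence percentage to a band label can be sketched as below. Only the [K:0-9%] band is named in the text; the other thresholds and labels are invented for illustration:

```python
def confidence_band(k: int) -> str:
    """Map a confidence percentage to its epistemic band label."""
    if not 0 <= k <= 100:
        raise ValueError("confidence must be 0-100")
    if k <= 9:
        return "speculation or fabricated"  # the [K:0-9%] band from the text
    if k <= 49:
        return "weakly supported"           # assumed intermediate band
    if k <= 89:
        return "supported"                  # assumed intermediate band
    return "independently verified"         # assumed top band

print(confidence_band(0))  # speculation or fabricated
```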
[POL] + Self-contradiction
Structural irony flag
A segment explicitly warning about AI fakes while using an AI fake as evidence triggers a structural contradiction flag. This pattern — institutional bias combined with self-contradiction — is surfaced automatically.
Context mismatch
Mislabeled 2022 archive footage
The second clip (2022 school threat arrest, labeled as ICE raid) would be flagged for context mismatch — video metadata contradicts the claimed context. Date, location, and original source all contradict the label.
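The mismatch check can be sketched as a field-by-field comparison between embedded metadata and the claimed context. Field names, types, and the sample values are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClipMetadata:
    recorded: date   # date embedded in the video metadata
    location: str

@dataclass
class ClaimedContext:
    event_date: date  # date the segment claims the footage shows
    location: str

def context_mismatch(meta: ClipMetadata, claim: ClaimedContext) -> list[str]:
    """Return the metadata fields that contradict the claimed context."""
    flags = []
    if meta.recorded.year != claim.event_date.year:
        flags.append("date")
    if meta.location != claim.location:
        flags.append("location")
    return flags

# Hypothetical values for the 2022 clip relabeled as a 2026 ICE arrest:
print(context_mismatch(
    ClipMetadata(date(2022, 5, 1), "school"),
    ClaimedContext(date(2026, 2, 14), "ICE raid"),
))  # ['date', 'location']
```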
Illustrative epistemic score
Based on applying the Factiator framework to the documented facts.
Source quality: 8/100
AI-generated footage + misdated 2022 clip — both [NK]
Funding disclosure: 12/100
No provenance established for either video
Confidence accuracy: 15/100
Presented as documentary evidence without verification
Counterposition: 20/100
Framing as fact — no epistemic uncertainty shown
Bias symmetry: 10/100
Self-contradiction: warning about fakes while using one
Overall epistemic score
13/100
Epistemically unreliable
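The component scores above are consistent with a simple arithmetic mean. Whether Factiator actually averages (or weights) its components is an assumption; the sketch only shows that the table's numbers add up:

```python
# Component scores from the illustrative table above.
components = {
    "source quality": 8,
    "funding disclosure": 12,
    "confidence accuracy": 15,
    "counterposition": 20,
    "bias symmetry": 10,
}

# Unweighted mean (an assumption about how the overall score is formed).
overall = round(sum(components.values()) / len(components))
print(overall)  # 13
```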
The structural problem
This is not a production error.
It is an architecture problem.
Speed over verification
Live news optimizes for speed. Epistemic verification is second-order — until it becomes the story.
No source classification
Editorial workflows have no equivalent to [NK] or [K:xx%]. Every source is "usable" or "not usable" — no gradations.
Correction without transparency
"Video aus redaktionellen Gründen geändert" ("video changed for editorial reasons") tells audiences nothing. A Factiator score cannot be silently revised.
Self-contradiction undetectable
No editorial system flags the logical contradiction between the Hayali moderation and the segment content. Factiator does.
Factiator Corporate — pre-publication epistemic verification
Submit any article, script, or claim before publication. Get a structured epistemic report with source classifications, confidence scores, bias flags, and a public audit trail. The framework is open. The score is not editable.
✓ Deep analysis + citations
✓ Audit trail
✓ Team seats
✓ API access
✓ Dedicated support
Analysis based on publicly reported facts (Übermedien, 17 Feb 2026). Epistemic scores are illustrative — produced by applying the Factiator framework to documented facts. Factiator has no commercial relationship with ZDF.