🧠 FactGuard: A smarter way to detect Fake News

Most fake-news filters still judge writing style: punctuation, emotion, tone.
Bad actors already know this… so they simply copy the style of legitimate sources.

FactGuard flips the approach:
Instead of “does this sound fake?”, it asks “what event is being claimed, and does it make sense?”

🔍 How it works (super short)

  1. An LLM extracts the core event + a tiny commonsense rationale.
  2. A small BERT-like model cross-checks the news against the extracted event and rationale for contradictions (a minimal sketch follows this list).
  3. A distilled version (FactGuard-D) runs without the LLM, so it's cheap in production.
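
Here is a minimal Python sketch of that two-stage pipeline, assuming Hugging Face transformers for the small model. The function extract_event_and_rationale, the checkpoint name, and the input packing are all placeholders, not FactGuard's actual API; the real model uses cross-attention between the three inputs rather than simple concatenation.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; FactGuard fine-tunes its own small encoder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: 0 = real, 1 = fake

def extract_event_and_rationale(article: str) -> tuple[str, str]:
    """Stage 1 (stub): prompt an LLM for the core event and a short
    commonsense rationale. Any capable LLM works here."""
    raise NotImplementedError("call your LLM of choice")

def p_fake(article: str, event: str, rationale: str) -> float:
    """Stage 2: score the article against the event + rationale.
    Simplified: the texts are packed into one sequence pair instead
    of the paper's cross-attention design."""
    inputs = tokenizer(article, f"{event} [SEP] {rationale}",
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Usage: event, rationale = extract_event_and_rationale(article)
#        score = p_fake(article, event, rationale)
# FactGuard-D would distill this teacher signal into a model that
# skips stage 1 entirely at inference time.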

This gives you:

  • Fewer false positives on emotional but real stories
  • Stronger detection of “stylistically clean,” well-crafted fake stories
  • Better generalization across topics

🧪 Example prompt you can use right now

You are a compact fake news detector trained to reason about events, not writing style.
Given a news article, output:

- label: real/fake
- confidence: [0–1]
- short_reason: 1–2 sentences referencing the core event

Article:
"A city reports that every bus, train, and taxi became free of charge permanently starting tomorrow, but no details are provided on funding…"

Expected output

{
  "label": "fake",
  "confidence": 0.83,
  "short_reason": "A permanent citywide free-transport policy with no funding source or official confirmation is unlikely and contradicts typical municipal budgeting."
}
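
If you want to consume that output programmatically, here is a small sketch for validating the verdict before trusting it. call_llm is a hypothetical stand-in for whatever client you use; only the JSON parsing and range checks are real.

import json

def parse_verdict(raw: str) -> dict:
    """Validate the detector's JSON verdict before trusting it."""
    verdict = json.loads(raw)
    if verdict.get("label") not in {"real", "fake"}:
        raise ValueError(f"unexpected label: {verdict.get('label')}")
    if not 0.0 <= float(verdict.get("confidence", -1)) <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return verdict

# raw = call_llm(prompt)        # call_llm is hypothetical
# verdict = parse_verdict(raw)  # -> {"label": "fake", ...}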

📝 Want the full breakdown?

Event extraction, commonsense gating, cross-attention design, and distillation details are all here:

👉 https://www.instruction.tips/post/factguard-event-centric-fake-news-detection
