
Identifying Fake News and Misinformation

Duration: 1 day (6 hours)
Delivery method:
Online (Zoom ↗️ or Teams ↗️) / in-company training
Target Audience: 
This course is designed for security professionals who play a role in analysing open source information.
Cost:
Available upon application
Language:
English
Course code:
IFN-1

Introduction

In a world overflowing with information, how can you tell what’s true and what’s misleading? Fake news spreads faster than ever, influencing opinions, shaping narratives, and even impacting real-world events. But you don’t have to fall for it!

“Spotting Deception” is your essential guide to navigating the digital landscape with confidence. This course equips you with the critical thinking skills and fact-checking techniques needed to identify, analyze, and challenge misinformation—whether it appears on social media, in the news, or through viral trends.

What you will learn

  • The psychology behind why fake news spreads
  • Common techniques used to manipulate information
  • How to fact-check sources and verify credibility
  • Strategies to protect yourself and others from misinformation
  • Real-world case studies and hands-on practice

Quick (free) online quiz

An interactive quiz accompanies this course: judge each example as Real, Misleading, or Fake.

Potential course benefits: why it matters

In today’s digital world, misinformation is everywhere—on social media, in the news, and even in conversations with friends and family. Taking “Spotting Deception: A Practical Guide to Identifying Fake News” will empower you with essential skills to think critically, verify facts, and make informed decisions.

Here’s why this course is important:

  • Fake news is designed to manipulate emotions and opinions. This course teaches you how to analyze information critically, spot logical fallacies, and question misleading narratives.

  • False information can impact your health, finances, and decisions. From scams to propaganda, knowing how to detect deception helps you avoid manipulation and make smarter choices.

  • In an age of biased reporting and sensationalism, it’s crucial to understand how the media works, recognize trustworthy sources, and identify misinformation tactics.

  • Fraudulent news articles, deepfake videos, and misleading headlines are designed to trick people. This course will teach you how to identify clickbait, scams, and misinformation before they mislead you.

  • Fake news is often used to influence public opinion, elections, and policies. Learn how to fact-check political claims and make educated choices based on real information.

The “Identifying Fake News and Misinformation” course is crucial because it teaches critical thinking, sharpens media literacy, and helps individuals separate fact from fiction. In today’s digital world, these skills protect decision-making, trust, and informed participation in society.

Types of fake news

Clickbait

  • Sensational or misleading headlines designed to attract clicks.
  • Often exaggerates or distorts information.
  • Content may be loosely based on facts, but details are manipulated.

Example:
Headline: “You Won’t Believe What Scientists Found on Mars!”
➡ The article is about a rock formation, not life on Mars.

Conspiracy theories

  • Claims of secret plots by powerful groups, often without credible evidence.
  • Exploits distrust in authorities or institutions.
  • Spreads rapidly due to emotionally charged narratives.

Example:
A theory spreads online claiming a secret group controls all world governments.

Fabricated content

  • Completely made-up stories with no basis in reality.
  • Designed to deceive and mislead readers intentionally.
  • Often used for political or financial gain.

Example:
A website publishes a story saying a famous actor died — but they’re alive and well.

False context

  • Real content presented in a false or misleading context.
  • Examples include old photos shared as recent or quotes misattributed.
  • Misleads by shifting the original meaning.

Example:
A photo from a 2010 protest is shared online in 2025 claiming it shows “current riots in Paris.”

Imposter content

  • Uses the branding of legitimate sources to appear credible.
  • Mimics trusted news outlets or government agencies.
  • Aims to trick readers into believing the information is verified.

Example:
A fake website copies the logo and layout of a trusted news outlet to publish a false story.

Manipulated content

  • Real images or information altered to deceive.
  • Includes doctored photos, deepfakes, or edited videos.
  • Can create false impressions of events or people.

Example:
A photo of a celebrity is photoshopped to make it look like they attended a controversial rally.

Misleading content

  • Misuses information to frame an issue or person inaccurately.
  • Often omits key facts or uses context selectively.
  • Can be based on real events but distorts the interpretation.

Example:
A politician’s quote is cut mid-sentence to make it sound like they said the opposite of what they meant.

Propaganda

  • Information spread to influence public opinion or promote a specific agenda.
  • Often emotionally charged, repetitive, and one-sided.
  • Can be based on truth, partial truth, or outright lies.

Example:
A government-run news outlet publishes exaggerated stories about military victories to boost national pride.

Satire and parody

  • Meant to entertain or critique, not to inform.
  • May be mistaken for real news by unaware audiences.
  • Can unintentionally spread misinformation when taken seriously.

Example:
An article from The Onion jokes that “NASA Announces Cheese Found on the Moon,” and some people believe it’s real.

10 types of bias in news

Bias by omission

Definition: Leaving out key facts or perspectives, making the story one-sided.
Example: Reporting a protest but not mentioning why people are protesting.
Things to look for:

  • Are important voices or sides missing?

  • Does the story feel incomplete or one-sided?

  • Compare with other sources — are they including details this one left out?

Bias by source selection

Definition: Using only sources that support one viewpoint.
Example: Quoting only government officials on a policy issue, not experts or affected citizens.
Things to look for:

  • Who’s being quoted or cited?

  • Are all relevant sides represented?

  • Are unnamed or anonymous sources used selectively?

Bias by story selection

Definition: Emphasizing or ignoring certain stories to shape public perception.
Example: Covering crimes by one demographic but not another.
Things to look for:

  • What stories dominate the headlines?

  • What stories are consistently ignored?

  • Does the outlet cover one side’s scandals more than another’s?

Bias by placement

Definition: Giving certain stories or views more visibility than others.
Example: Positive news about a favoured politician on the front page; criticism buried deep inside.
Things to look for:

  • Where is the story placed — front page, middle, end?

  • Are opposing viewpoints equally prominent?

Bias by labeling

Definition: Using loaded or unfair labels to describe people, groups, or ideas.
Example: “Radical activists” vs. “concerned citizens.”
Things to look for:

  • Are adjectives emotionally charged or judgmental?

  • Are people labelled in ways that frame them positively or negatively?

  • Does the outlet label one side but not the other?

Bias by spin

Definition: Using subjective tone or wording that suggests opinion.
Example: “The government finally admitted its failure” vs. “The government announced the results.”
Things to look for:

  • Are emotionally loaded verbs used (e.g., “attack,” “admit,” “slam”)?

  • Is the tone neutral or persuasive?

  • Does the writer sound like they’re approving or disapproving?

Visual bias

Definition: Using specific images or visuals that sway perception.
Example: Showing an angry photo of one candidate and a smiling one of another.
Things to look for:

  • Do images flatter or embarrass certain people?

  • Are captions neutral or suggestive?

  • Are visuals chosen to evoke emotion?

Statistical bias

Definition: Using numbers selectively or misleadingly.
Example: Reporting “hundreds” attended a rally when actual numbers were thousands.
Things to look for:

  • Are numbers verified or vague (“many,” “some,” “a few”)?

  • Does the story use percentages without context?

  • Are comparisons fair and accurate?
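One quick check for the “percentages without context” tell-tale is to recompute the figure alongside its absolute base. A minimal sketch in Python (the function name and output format are illustrative, not part of the course material):

```python
def describe_change(before: float, after: float) -> str:
    """Report relative change together with the absolute numbers.

    A relative figure alone ("cases up 50%!") can sound dramatic when
    the underlying base is tiny; pairing it with the raw counts keeps
    the statistic in context.
    """
    rel = (after - before) / before * 100
    return f"{rel:+.0f}% (from {before:g} to {after:g})"

print(describe_change(2, 3))      # a "50% jump" that is really 2 -> 3 cases
print(describe_change(100, 103))  # the same wording would hide a small change
```

The same habit applies when reading: whenever a story reports only a percentage, ask what the underlying counts were.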

Bias by word choice

Definition: Using positive or negative connotations to influence readers.
Example: “Tax relief” (positive) vs. “tax giveaway” (negative).
Things to look for:

  • Are descriptive words emotionally loaded?

  • Could neutral words be used instead?

  • Is the language meant to persuade or inform?

Bias by lack of context

Definition: Presenting information without background or historical context.
Example: Reporting a country’s retaliation without mentioning the prior attack.
Things to look for:

  • Is context missing that changes how you interpret the event?

  • Are causes or history left out?

  • Does the story seem to start “in the middle” of an issue?

Blended types of misinformation

Clickbait + misleading content

Blend: Sensational headline exaggerates or distorts true details in the article.
What to look for:

  • Overly emotional or shocking titles.

  • Article doesn’t match the headline.

  • Vague claims like “Experts say…” without sources.

Example:

“New Miracle Drink Cures Cancer!” — but article only discusses a vitamin study in mice.

False context + real media

Blend: Real image/video but shown with altered or wrong context.
What to look for:

  • Photo/video without a timestamp or credible source.

  • Captions referring to a different time or place.

  • Recycled visuals from unrelated events.

Example:

An old wildfire photo shared during a new crisis claiming it’s “today in California.”

Imposter + fabricated content

Blend: Entirely fake stories pretending to come from real outlets.
What to look for:

  • Slightly misspelled domain names (e.g., bbc.co.news).

  • Mimicked logos and layouts of trusted sites.

  • No trace of the story on official channels.

Example:

A fake “CNN” webpage reports a celebrity’s death — story doesn’t exist anywhere else.
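The misspelled-domain tell-tale above can be partially automated. A minimal sketch, assuming a hand-maintained allow-list of official domains (the brand list and function name here are illustrative):

```python
# Hypothetical allow-list: brand keyword -> that brand's official domains.
OFFICIAL_DOMAINS = {
    "bbc": {"bbc.co.uk", "bbc.com"},
    "cnn": {"cnn.com", "edition.cnn.com"},
    "reuters": {"reuters.com"},
}

def is_lookalike(domain: str) -> bool:
    """Flag a domain that borrows a trusted brand name but is not on
    that brand's official list (e.g., bbc.co.news)."""
    domain = domain.lower().strip()
    for brand, official in OFFICIAL_DOMAINS.items():
        if brand in domain and domain not in official:
            return True
    return False

print(is_lookalike("bbc.co.news"))  # True: contains "bbc" but is unofficial
print(is_lookalike("bbc.co.uk"))    # False: official BBC domain
```

Real-world checkers go further — edit distance, swapped letters, and homoglyphs (Cyrillic “е” for Latin “e”) — so this substring check only catches the most naive spoofs.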

Propaganda + conspiracy theory

Blend: Organized manipulation using conspiracy narratives to shape public opinion.
What to look for:

  • Emotional language about “hidden enemies” or “cover-ups.”

  • Appeals to fear, nationalism, or group loyalty.

  • Vague claims supported only by “insider” sources.

Example:

A government-linked page spreads a theory that foreign powers caused a natural disaster.

Satire mistaken for news

Blend: Jokes or satire shared as real news by people unaware it’s humor.
What to look for:

  • Absurd or exaggerated claims that seem “too crazy to be true.”

  • Lack of credible sources.

  • Original site labeled as “satirical” or “humor.”

Example:

An article from The Onion saying “NASA to Build Moon Amusement Park” goes viral as “real.”

Manipulated content + misleading content

Blend: Edited images/videos used to twist the meaning of true events.
What to look for:

  • Cropped or spliced videos missing context.

  • Sound or quotes edited deceptively.

  • Unverified social media posts with strong bias.

Example:

A protest clip trimmed to remove police provocation, making it seem like unprovoked violence.

Fabricated content + propaganda

Blend: Fully invented “facts” designed to serve political or ideological goals.
What to look for:

  • Repetition of false claims across multiple partisan sources.

  • Emotional appeals without verifiable data.

  • “Sources” that lead nowhere or are anonymous.

Example:

A fake statistic shared by coordinated accounts claiming an election was rigged.

Misleading content + conspiracy theory

Blend: Real facts twisted to fit a false larger narrative.
What to look for:

  • Small kernels of truth wrapped in speculation.

  • “Connect the dots” diagrams or pseudoscientific claims.

  • Claims that “mainstream media won’t tell you.”

Example:

A real vaccine side-effect report used to claim all vaccines are part of a global control plan.

Triple-blended types of misinformation

Clickbait + false context + conspiracy theory

Blend description:
A shocking headline attracts attention → uses a real image/video but recontextualizes it → ties it to a baseless secret plot.
What to look for:

  • Emotional or fear-inducing titles (“You won’t believe what the government is hiding!”).

  • Real footage used with false explanations.

  • References to “hidden truths” or “what they don’t want you to know.”

Example:

A video of military trucks moving equipment is captioned: “Secret army operation to enforce new world order.”
(Real trucks, wrong context, conspiratorial framing.)

Fabricated content + imposter content + propaganda

Blend description:
Entirely false information presented as if it comes from a legitimate source to promote an agenda.
What to look for:

  • Fake “news outlets” with logos mimicking real ones.

  • Coordinated sharing from political or ideological accounts.

  • Emotional appeals about national pride, threats, or identity.

Example:

A fabricated “Reuters” story claiming a country’s leader was poisoned by rivals — shared by partisan accounts to inflame tensions.

Manipulated content + misleading content + propaganda

Blend description:
Real images or statements are edited or taken out of context to push a political message.
What to look for:

  • Cropped videos or altered subtitles.

  • Narratives that oversimplify complex issues (“proof of corruption!”).

  • Circulation by coordinated social media pages.

Example:

A speech clip cut to make it sound like a candidate endorsed violence, then widely spread by political groups.

Satire + fabricated content + conspiracy theory

Blend description:
A satirical story is misrepresented as real and then incorporated into conspiracy narratives.
What to look for:

  • Absurd claims suddenly being shared seriously.

  • Lack of credible verification.

  • Conspiracy pages treating parody as “proof.”

Example:

A satirical article claiming “NASA Confirms Hollow Earth” gets cited by flat-earth communities as evidence.

Fabricated content + clickbait + misleading statistics

Blend description:
Fake stories with sensational headlines and misleading data points.
What to look for:

  • Wild claims with no sources.

  • Statistics or charts that don’t match cited studies.

  • Repetition across low-credibility blogs or YouTube channels.

Example:

“Study Proves Coffee Causes Cancer!” — headline links to a made-up “research” page with no actual data.

Propaganda + manipulated content + conspiracy theory

Blend description:
A deliberate campaign uses edited images and conspiracy tropes to shape public opinion.
What to look for:

  • Viral memes mixing truth and falsehood.

  • References to “traitors,” “enemies,” or “secret plots.”

  • State-backed or coordinated accounts amplifying the same narrative.

Example:

Deepfaked videos showing activists “confessing” to foreign funding, used by governments to discredit opposition.

Misleading content + fabricated content + false context

Blend description:
Partly true info combined with fake elements, framed with the wrong background to mislead.
What to look for:

  • Real quotes paired with false images.

  • Conflicting dates or locations.

  • Emotional calls to “share before they delete this!”

Example:

A real protest photo from 2019, combined with a fake statement attributed to a public figure, claiming it’s from “today’s uprising.”

Types of fake images

  • Text-to-image deepfakes (GANs/diffusion): Entire scene created by AI.
    Tell-tales: too-perfect skin, jewellery/micro-text smeared, fingers/teeth anomalies, inconsistent lighting/shadows.

  • Face swap: One person’s face mapped onto another’s body.
    Tell-tales: hairline/ear mismatch, flickering edges, warped glasses/earrings.

  • Face reenactment: Real face, fake expressions/lip movements driven by another track.
    Tell-tales: mouth not matching teeth/tongue, jawline “slides,” specular highlights jump.

  • Composites/splices: People/objects pasted in from other photos.
    Tell-tales: mismatched perspective/light direction, repeated noise patterns, ragged edges.

  • Copy–move/clone stamp: Duplicating pixels to hide or multiply objects.
    Tell-tales: repeating textures/patterns (smoke, crowd patches).

  • Inpainting/object removal: Filling areas to erase items.
    Tell-tales: unnaturally smooth backgrounds, warped lines.

  • Seam carving/retouching: Resizing or slimming parts without obvious crops.
    Tell-tales: bent backgrounds, stretched tiles/grids.

  • Color/tonal falsification: Extreme grading to imply time/place (e.g., orange “wildfire” look).
    Tell-tales: skies/skin tones implausible, highlights clipped.

  • Staged scenes/props: Real photo, fabricated event.
    Tell-tales: multiple “press” angles posted first, participants looking at cameras.

  • Out-of-context real photos (miscaptioned/old): True pixels, false story.
    Tell-tales: reverse-image search finds older posts; metadata/time don’t match claim.

  • Cropping for deception: True pixels, misleading framing.
    Tell-tales: wider originals contradict narrative.
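Several tell-tales above (“metadata/time don’t match claim”) come down to comparing an image’s EXIF capture date with the story’s claimed date. A minimal sketch, assuming the EXIF `DateTimeOriginal` string has already been extracted with any EXIF reader (the function name is illustrative):

```python
from datetime import datetime

def year_matches_claim(exif_datetime: str, claimed_year: int) -> bool:
    """EXIF stores DateTimeOriginal as 'YYYY:MM:DD HH:MM:SS'.

    Metadata can be stripped or forged, so a match proves nothing on
    its own; a mismatch, however, is a strong red flag that the photo
    is being shared out of context.
    """
    taken = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    return taken.year == claimed_year

# A "current riots in 2025" caption on a photo actually taken in 2010:
print(year_matches_claim("2010:06:14 17:32:09", 2025))  # False
```

Pair this with a reverse-image search: if older copies of the picture exist, the caption’s context is almost certainly false.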

Types of blended images (fake + real)

  • AI-inserted crowd/signs/fire/smoke on a genuine photo.
    Look for: local blur/noise not matching sensor grain; added elements ignore scene lighting.

  • Real subject pasted into a real scene from a different photo.
    Look for: shadow direction/length mismatch; wrong scale or depth of field.

  • Real body with a different real (or AI) face.
    Look for: skin-tone boundaries at jaw/neck; ear shape mismatch; hair edges haloing.

  • Real scene with duplicated elements to inflate numbers (e.g., flags, soldiers).
    Look for: repeating clusters; periodic patterns under error-level analysis.

  • Heavy beautify/airbrush plus a misleading caption (e.g., “from disaster zone today”).
    Look for: plastic skin, erased pores; reverse-image search shows a different date/place.

  • Legitimate bracketed exposures merged (HDR), then objects added.
    Look for: consistent tone mapping on the scene, inconsistent on the added item.

  • Real footage with swapped frames or overlaid CG.
    Look for: motion blur inconsistent on the overlay; rolling-shutter wobble missing on the added layer.
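The “repeating clusters” tell-tale for cloned elements can be illustrated with a toy detector: cloned regions repeat identical pixel tiles far more often than natural sensor noise allows. A minimal sketch on a 2-D grid of pixel values (real forensic tools work on JPEG recompression artifacts and tolerate small differences; this toy only catches exact copies):

```python
from collections import Counter

def count_duplicate_tiles(pixels, block=2):
    """Count how many block x block tiles in a 2-D pixel grid are exact
    repeats of another tile; a high count hints at copy-move cloning."""
    rows, cols = len(pixels), len(pixels[0])
    tiles = Counter()
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            # Flatten each tile into a hashable tuple and tally it.
            tile = tuple(pixels[r + dr][c + dc]
                         for dr in range(block) for dc in range(block))
            tiles[tile] += 1
    # Every occurrence beyond the first counts as a suspicious repeat.
    return sum(n - 1 for n in tiles.values() if n > 1)

# The left 2x2 patch was "cloned" into columns 2-3 of this toy image:
image = [[1, 2, 1, 2],
         [3, 4, 3, 4],
         [5, 6, 7, 8]]
print(count_duplicate_tiles(image))  # 1 repeated tile
```

On a real photograph you would run this on downsampled luminance values with larger blocks; the principle, counting improbable exact repeats, is the same.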

"The danger is not in what is false, but in what feels true enough to believe."

In-house courses

Zoom ↗️ is the default platform for this course; it can also be delivered via Microsoft Teams ↗️ or Webex.

The above course can be modified to better fit the appetite of your organisation.

Jump to internal page: Contact us ↗️
