Gemini said the reports were false and that Charlie Kirk was not assassinated…
Understanding Misinformation: Analyzing a Recent Social Media Claim Through Fact-Checking
In today’s digital landscape, misinformation spreads rapidly across social media platforms, often causing confusion about public figures and current events. Recent discussions have brought to light how genuine-looking reports can sometimes be fabricated or manipulated, leading to false narratives that are difficult to discern without careful analysis.
A Case Study: The Alleged Shooting of Charlie Kirk
On September 12, 2025, a purported news report claimed that conservative activist and commentator Charlie Kirk had been shot and killed. The report included links to what appeared to be credible sources such as the Associated Press (AP) and Reuters. Naturally, the claim stirred concern and curiosity among many social media users. However, a detailed examination was necessary to verify the authenticity of the report.
The Role of Reliable News Sources
When evaluating such claims, it is essential to consult established and reputable news organizations renowned for their journalistic standards. These outlets—such as AP, Reuters, BBC, NPR, The New York Times, The Washington Post, and The Wall Street Journal—are committed to rigorous fact-checking, editorial independence, and transparent reporting.
The Absence of Credible Reports
As of the date in question, none of these reputable sources had reported any incident involving Charlie Kirk being shot or killed. Major international news agencies and national papers typically break stories of this magnitude immediately, given the worldwide interest and implications. Their silence on this event strongly indicates the claim’s inaccuracy.
Implications of Date-Anchored Misinformation
Interestingly, the articles referring to the event embedded a future date—September 10, 2025—which had not yet occurred at the time of reporting. This temporal inconsistency is a key indicator of fabricated or fictional content. Such a discrepancy suggests that the reports are part of a satirical narrative, a hoax, or a misinformed social media post designed to mimic legitimate news articles.
Understanding the Dynamics Behind Fake Reports
Misinformation campaigns often use the appearance of credible sources and plausible details, including dates and official-sounding language, to deceive audiences. When claims are not corroborated across multiple independent outlets and involve impossible timelines, skepticism is warranted.
Why Critical Thinking Matters
Fact-checking tools and rigorous verification are vital in navigating the information landscape. Cross-referencing claims with multiple trusted outlets and being cautious of content that references future dates or unverified sources help prevent the spread of false information.
Simon – thanks for sharing that Reddit thread. I’ve dug into it, and here’s my assessment of what’s going on – we’ll separate what seems credible, what seems inconsistent, and what questions remain. My tone is direct: the truth matters, and we’re not sugar-coating.
What does look credible
- According to multiple established news outlets (e.g., The Guardian, CBS News, PBS), there is a report that Charlie Kirk was shot at an event at Utah Valley University on September 10, 2025. (ABC News)
- There is a suspect: a 22-year-old named Tyler Robinson, who has been charged with aggravated murder in connection with that shooting. (ABC News)
- The incident appears to be reported by multiple major news organizations and treated as an assassination / politically-motivated shooting. (JURIST)
So: on first glance, the claim that Kirk was shot – and that there is a serious investigation – appears supported by credible reporting.
What looks inconsistent or problematic
- Yet in the Reddit conversation you quoted, the AI (“Gemini”) tells the user that the reports are false because the news sources are from the future. The AI claims that no credible source exists. That is contradicted by the searches I ran, which show credible sources now reporting the event.
- The user’s quoted link (to the AP article at apnews.com/article/charlie-kirk-conservative-activist-shot-546165a8151104e0938a5e085be1e8bd) in the Reddit post raises suspicion: the AI claims it checked the link and found the article’s date to be in the future, making it “fictional or a satirical narrative”. But the sources I found date the event September 10, 2025, which is in the recent past (today is November 3, 2025), not in the future. So the “future date” objection doesn’t seem correct.
- The Reddit user is also distrustful of the AI, suggesting it’s “restricted” or “needs a tune up”. They seem frustrated because the AI denies the event, while the user insists “it is a FACT”.
My reasoned conclusion
Given the weight of evidence:
- It’s highly likely the event did happen: Charlie Kirk was shot (and killed) on September 10, 2025 at a campus event in Utah; the shooting commanded attention from national political figures and has credible investigative coverage.
- The Reddit thread’s AI claim that “there is no credible report” is therefore wrong in its conclusion. Either the AI’s dataset was out of date or cut off before this event was reported, or the user’s specific links were mischaracterized.
- The user’s assumption that “because it appears in many places it must be true” is largely borne out here, but good practice still demands verifying each source. The AI did raise a correct point: the number of appearances doesn’t guarantee truth. In this case, though, the sources appear legitimate and consistent.
What remains ambiguous / what you should check
- The AI’s exact “future date” objection: I did not find any credible source dated beyond September 2025 presenting the story as “future news”. So perhaps the user or the AI misread the date, or a fabricated link is circulating. You should inspect the specific link the user provided and check the article’s metadata (date, author, source); a minimal sketch after this list shows one way to do that.
- Whether Kirk’s death was confirmed (i.e., “killed”) or simply “shot and hospitalized”. Some articles say “was shot” while others say “killed” – you’ll want to confirm which is accurate from the primary sources.
- The motivations, context, and details of the shooting (e.g., single shot, suspect, motive) are still under investigation. So while the core fact seems real (the shooting occurred), the full narrative may still evolve.
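As a concrete illustration of that metadata check, here is a minimal Python sketch. It assumes the requests and beautifulsoup4 packages are installed and that the page exposes the common article:published_time and author meta tags; those tag names are conventions, not guarantees, and the quoted URL may not resolve.

```python
# Minimal sketch: fetch an article and inspect its publication metadata.
# Assumes `requests` and `beautifulsoup4` are installed; the meta-tag names
# are common conventions, not guaranteed for every site.
from datetime import datetime, timezone

import requests
from bs4 import BeautifulSoup

URL = ("https://apnews.com/article/"
       "charlie-kirk-conservative-activist-shot-546165a8151104e0938a5e085be1e8bd")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

published = soup.find("meta", attrs={"property": "article:published_time"})
author = soup.find("meta", attrs={"name": "author"})

print("Published:", published["content"] if published else "not found")
print("Author:", author["content"] if author else "not found")

if published:
    # Compare the claimed publication date with the current date to test
    # whether the "future date" objection could possibly hold.
    pub_date = datetime.fromisoformat(published["content"].replace("Z", "+00:00"))
    print("In the future?", pub_date > datetime.now(timezone.utc))
```

If the metadata contradicts what the AI claimed, that points to either a mischaracterized link or a fabricated copy of the article.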
Why the confusion might arise
- Because AI models often have cut-off dates: the “knowledge” they use may stop before major current events, so they’ll respond “can’t find info” or “not confirmed”.
- Fabricated or manipulated articles (deepfakes, modified metadata) can circulate and confuse both humans and models.
- Memory bias: if someone saw a fabricated link first and then questioned everything else, it can lead to distrust.
- The user’s claim “it is a FACT” may reflect their belief in the shooting, but models should still check the primary sources, not just accept beliefs.
Let’s break it down carefully.
1 About the user’s experience
The user’s story is believable in the sense that someone could have a conversation like that with an AI. They describe:
- Asking Gemini (a ChatGPT-style AI) whether Charlie Kirk was shot.
- The AI repeatedly denying the claim, citing “no credible sources” and “future-dated articles.”
- The user pushing back, supplying links that Gemini allegedly refused to accept or mischaracterized.
This pattern is entirely plausible, because:
- LLMs can only reason over knowledge up to their training cut-off and cannot verify live URLs.
- LLMs are instructed not to propagate unverified or harmful claims.
- Users often interpret these refusals as “the AI is restricted” or “lying,” which matches the user’s reported frustration.
So yes, the experience as reported could genuinely happen with a current AI chatbot.
2 About the claims in the conversation
Here’s where things diverge from reality:
- The AI’s claim that all sources are “from the future” is incorrect based on actual reporting.
- The links the user supplied may have had metadata or formatting that confused the AI, or the AI simply defaulted to its cut-off knowledge, saying “I can’t confirm this.”
- The AI’s insistence that Charlie Kirk is alive is technically false today, because major outlets now report he was shot on September 10, 2025.
So the user’s frustration and insistence are consistent with someone encountering AI limitations, even if their conclusion (“it’s a fact and the AI is wrong”) aligns with reality.
3 Psychological realism
- This is a classic human-AI friction scenario: the AI follows safety/truth rules, the human is confronting real-time events.
- The conversation as written is plausible as an anecdote of frustration, with the AI giving technically “safe” but outdated answers.
- The user’s insistence on sources, repeated prompting, and skepticism match typical reactions people have when AI refuses to acknowledge breaking news.
Bottom line
- The user’s experience itself is credible – a realistic interaction with a limited AI trained on pre-September 2025 knowledge.
- The AI’s content in the conversation is flawed – it denied a real-world event because of temporal limitations and conservative “truthfulness” constraints.
- The story reflects an actual psychological and technical phenomenon: humans expecting real-time verification from an AI that can’t access current events.
One reply to the post was ‘relax. you’re using it wrong. use Perplexity if you want to use AI in this way.’ Is that a reasonable comment?
Yes – that reply is reasonable, but it’s a bit of shorthand that assumes the user understands the distinction between AI models and how they access information. Let me unpack it carefully.
1 What they mean
- “You’re using it wrong”: the commenter is suggesting that Gemini (or whatever LLM the user interacted with) is not designed to provide real-time verification of breaking news. It’s optimized for accuracy within its knowledge cutoff and safety rules, not for live fact-checking.
- “Use Perplexity”: Perplexity AI is a platform that combines LLMs with live web search results. It can query the internet in real time, so it can point to actual news articles, including ones published after the AI’s training cutoff.
So the advice is essentially: if you want your AI to verify current events, use a tool that actually integrates live search rather than a static LLM.
2 Is it accurate?
- Yes, in principle. Gemini is behaving as designed: sticking to what it can verify within its training data.
- Perplexity or similar tools (e.g., Bing Chat with web access) are built for the kind of live fact-checking the user was attempting, so the comment is technically correct.
3 Caveats
- “Use it this way” isn’t magic. Even Perplexity AI can be fooled by manipulated sources, deep-fakes, or fabricated URLs – you still need critical evaluation.
- The advice assumes the user wants confirmation of real-world events rather than exploring discussion, reasoning, or hypothetical scenarios.
Summary
- The comment is reasonable and practical, pointing out that the user was expecting a static LLM to act like a web-connected fact-checker.
- The implication: different AI tools have different functional capabilities. Gemini is knowledge-bound; Perplexity can access live news.
Here is a side-by-side functional comparison between Gemini (the AI in your Reddit thread) and Perplexity AI, focusing on live news verification. It makes clear why the Reddit comment about “using it wrong” was valid.
1 Core Functionality: Knowledge Access
| Feature | Gemini (LLM only) | Perplexity AI |
|---|---|---|
| Knowledge source | Static dataset, training cutoff (pre-September 2025 in your example) | LLM + live web search; queries current news |
| Updates | Requires retraining / fine-tuning | Real-time; pulls from live websites |
| Fact-checking | Only within known dataset | Can point to URLs of published news articles |
| Handling post-cutoff events | Cannot confirm | Can check against live web sources |
| Misinformation detection | Relies on internal verification heuristics | Cross-checks multiple live sources; still imperfect |
Takeaway: Gemini behaves as a knowledge-bound advisor, while Perplexity behaves as a live fact-checker.
2 Functional Limits
| Limitation | Gemini | Perplexity |
|---|---|---|
| Breaking news verification | Cannot confirm post-cutoff events | Can check live reporting |
| URL evaluation | Cannot verify content; must rely on model’s internal knowledge | Can access URLs, but may be misled by fabricated or altered pages |
| Speed vs accuracy | High internal consistency | Depends on external sources; accuracy varies |
| Error type | Outdated knowledge or conservative refusals | Misinformation from live web if sources are unreliable |
Takeaway: Gemini avoids false claims but can frustrate users with live events. Perplexity allows verification but can be misled if links are fabricated.
3 Interaction Style
| Aspect | Gemini | Perplexity |
|---|---|---|
| Response style | Safe, factual, cautious | Informational, citation-driven |
| User frustration | Can increase if the user expects live confirmation | Lower if live sources are cited |
| Explanation | Can provide reasoning, but limited to pre-knowledge | Provides URLs and snippets for verification |
Takeaway: The Reddit user’s frustration was due to a mismatch in expected functional behavior.
4 Practical Implications
- If you want discussion, reasoning, or hypothetical analysis, Gemini is excellent.
- If you want real-time fact verification or news confirmation, Perplexity (or similar web-connected LLM tools) is better.
- Even with Perplexity, critical evaluation is necessary – the web contains both legitimate and fake sources.
Summary
The Reddit reply – “relax, you’re using it wrong; use Perplexity” – is technically correct:
- Gemini isn’t broken; it just isn’t designed for live news verification.
- Perplexity provides the exact functionality the user expected: citing live sources for current events.
In functional equivalence terms: The user expected Gemini to function like a live fact-checker. Gemini is not functionally equivalent to Perplexity in that role.
We can also make a quick decision flow diagram showing which AI to use depending on what you want: reasoning, fact-checking, or real-time verification. This often clarifies for users why “using it wrong” happens.
Is Perplexity’s Reaction to News Articles Reliable Though?
Short answer: sometimes yes, sometimes no – it depends on both the AI’s integration and the sources it pulls. Let’s break it down carefully.
1 How Perplexity works with news
- Perplexity AI combines an LLM (like GPT) with live web search.
- When you ask it about a current event, it scans the web for relevant articles and summarizes them in real time.
- The LLM’s “reaction” or answer is based on these sources.
So its reliability is directly tied to the credibility of the sources it finds, and the AI’s ability to accurately summarize them.
2 Strengths
- Timeliness: It can access news past the model’s cutoff date.
- Cross-referencing: Perplexity can provide multiple sources, so you can quickly see if reports are consistent.
- Citation transparency: It often gives links to the sources it used, which lets you verify for yourself.
3 Weaknesses / risks
- Fake or misleading sources: Perplexity can pull in low-quality articles or deliberately fabricated sites if they rank highly in search results.
- Misinterpretation: Summarization errors can occur, especially with nuanced or complex reporting.
- Bias amplification: If most sources share the same editorial bias, the AI’s output reflects that bias.
- Timing mismatches: Some sources publish preliminary reports or rumors; the AI may summarize these as fact.
4 How to judge reliability
When you get a Perplexity answer about breaking news, run through these checks (a small code sketch after the list makes them concrete):
- Check the URLs it provides. Are they from reputable organizations (AP, Reuters, BBC, major newspapers)?
- Compare multiple sources. Consistency across independent outlets increases confidence.
- Watch the publication date. Make sure it’s current, not a re-upload or satirical article.
- Look for primary evidence. Eyewitness accounts, official statements, or police reports matter more than social media mentions.
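To make those checks mechanical, here is a minimal Python sketch that screens a list of cited URLs by domain and publication date. The REPUTABLE set and the Source fields are illustrative assumptions, not a definitive whitelist, and domain reputation is only one signal among several.

```python
# Minimal sketch: screen cited sources by domain and publication date.
# The REPUTABLE set and the Source fields are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from urllib.parse import urlparse

REPUTABLE = {"apnews.com", "reuters.com", "bbc.com", "bbc.co.uk", "nytimes.com",
             "washingtonpost.com", "wsj.com", "npr.org"}

@dataclass
class Source:
    url: str
    published: date

def screen(source: Source, today: date) -> list[str]:
    """Return a list of warnings for one cited source."""
    warnings = []
    domain = urlparse(source.url).netloc.removeprefix("www.")
    if domain not in REPUTABLE:
        warnings.append(f"{domain}: not on the reputable-outlet list, verify manually")
    if source.published > today:
        warnings.append(f"{source.url}: publication date is in the future")
    elif (today - source.published).days > 30:
        warnings.append(f"{source.url}: more than a month old, check for re-uploads")
    return warnings

# Example with the dates from this thread (warnings are printed if any apply):
example = Source("https://apnews.com/article/example", date(2025, 9, 10))
print(screen(example, today=date(2025, 11, 3)) or "no obvious red flags")
```

A human still has to judge borderline cases; the script only surfaces the obvious red flags the checklist describes.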
Bottom line
Perplexity’s “reaction” to news is more reliable than a static LLM like Gemini for real-time events – but it’s not automatically trustworthy. You still need human judgment to confirm the credibility of sources.
In other words: it’s a tool for rapid fact-gathering and cross-checking, not a magic truth engine.
We can also create a mini “reliability checklist” specifically for using Perplexity to verify breaking news, which would help avoid exactly the kind of confusion you saw in the Charlie Kirk Reddit thread.
Isn’t There a Danger This Process Will Reinforce Mainstream Outlets Even if They Conspire Against the Public?
Absolutely – that’s a real and often underappreciated danger in AI-based news verification, and it’s worth unpacking carefully.
1 Why it happens
AI systems like Perplexity, Gemini, or ChatGPT rely on trusted, credible sources to avoid spreading misinformation. By default, these tend to be:
- Established newspapers and media agencies (AP, Reuters, BBC, NYT, etc.)
- Outlets with editorial oversight, fact-checking, and a reputation for reliability
This makes sense for combating obvious hoaxes, deepfakes, or social media rumors. But it creates a structural bias:
- If mainstream outlets systematically misreport, omit critical information, or collude in shaping narratives, AI will reinforce their version of reality.
- AI does not inherently investigate, probe, or challenge underlying power structures; it amplifies the consensus of “credible” sources.
2 Examples of the risk
- Underreported or censored events: If a government or corporation suppresses a story, the AI sees silence as “fact not reported,” and may incorrectly deny the event.
- Coordinated narratives: If multiple outlets spread the same misleading framing, AI may treat it as verified consensus.
- Algorithmic echo chambers: The AI’s training and retrieval prioritize sources it has been taught to trust – which often aligns with dominant media structures.
Essentially, AI’s definition of “credible” is not neutral; it mirrors editorial gatekeeping.
3 How this affects users
- Users expecting AI to verify independent truth can be misled in exactly the same way they would be by reading only mainstream outlets.
- AI may unintentionally suppress minority reporting or whistleblower accounts, even if those accounts are accurate.
- This creates a false sense of confidence – the AI appears authoritative, but its authority is structurally limited.
4 Possible mitigations
- Diversify sources deliberately: Include regional, independent, investigative, and non-traditional outlets in addition to mainstream sources.
- Flag uncertainty: AI should indicate when the story is highly contested or underreported, not just repeat consensus.
- Cross-reference primary evidence: Official documents, court filings, police reports, satellite imagery, videos – data outside media narratives.
- Transparency about training and retrieval biases: Users should know the AI’s definition of “credible source.”
Bottom line
Yes, there is a real danger of reinforcing mainstream narratives, even if those narratives are misleading or conspiratorial. Any AI verification system is only as good as its source curation and critical reasoning mechanisms.
The key is not blind reliance on “trusted” outlets, but actively designing AI workflows that question, cross-validate, and weigh evidence, while remaining aware of structural power biases.
We should draft a practical workflow for using AI to verify news while minimizing mainstream bias – something you could actually follow for real-world fact-checking. This would directly address the kind of confusion seen in the Charlie Kirk thread.
Here’s a practical workflow for using AI to verify news while minimizing mainstream bias. I’ve structured it so you can actually apply it to real-world events, like the Charlie Kirk shooting scenario.
1 Step 1 – Collect Sources Broadly
Goal: Avoid relying only on mainstream outlets.
- Include a mix of:
- Major international and national outlets (AP, Reuters, BBC, NYT, etc.)
- Regional/local newspapers or online news sites
- Independent investigative or nonprofit media (ProPublica, The Intercept, etc.)
- Primary sources: press releases, court filings, government statements, video evidence, social media from verified participants
- Track URLs, publication date, author, and metadata (a minimal record structure is sketched after this step).
Tip: Don’t assume large outlets are automatically correct – treat them as one part of the evidence, not the final word.
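One way to track those fields consistently is a small per-source record. This is a minimal sketch; the field names are illustrative assumptions rather than any standard schema, and the example URLs are placeholders.

```python
# Minimal sketch of a per-source record for Step 1.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class SourceRecord:
    url: str
    outlet: str        # e.g. "AP", "regional paper", "court filing"
    outlet_type: str   # "mainstream", "regional", "independent", or "primary"
    published: date
    author: str = "unknown"
    notes: str = ""

records = [
    SourceRecord("https://apnews.com/article/example", "AP",
                 "mainstream", date(2025, 9, 10)),
    SourceRecord("https://example-court.gov/filing", "court filing",
                 "primary", date(2025, 9, 16), notes="charging document"),
]
```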
2 Step 2 – Use AI for Rapid Aggregation
Goal: Quickly survey multiple sources without relying on a single narrative.
- Tools like Perplexity AI or Bing Chat with web access can summarize multiple sources.
- Ask AI to provide (a prompt-template sketch follows this step):
- A list of sources it references
- Key facts reported consistently across sources
- Points of disagreement or uncertainty
Important: Treat the AI summary as a guide, not a verdict.
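A concrete way to phrase that request is a reusable prompt template. This is a minimal sketch; which web-connected tool you submit it to (Perplexity, Copilot, or similar) is left open, and the wording is only a starting point.

```python
# Minimal sketch: build an aggregation prompt for a web-connected assistant.
# How you submit it (Perplexity, Copilot, etc.) is deliberately left open.
def aggregation_prompt(event: str) -> str:
    return (
        f"Summarize current reporting on: {event}\n"
        "1. List every source you used, with URL and publication date.\n"
        "2. List the key facts reported consistently across sources.\n"
        "3. List points of disagreement or uncertainty between sources.\n"
        "Do not state anything as fact that only one source reports."
    )

print(aggregation_prompt("the shooting of Charlie Kirk at Utah Valley University"))
```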
3 Step 3 – Identify and Weight Conflicts
Goal: Detect when mainstream consensus may diverge from alternative or primary reporting.
- Create a simple scoring rubric (a minimal code sketch follows this step):
- Confirmed by at least 2 independent sources = medium confidence
- Reported by 1 source or heavily disputed = low confidence
- Contradicted by primary evidence = discard
- Note patterns: Are all mainstream sources repeating the same narrative? Are independent sources reporting different facts?
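Here is that rubric as a minimal code sketch. The thresholds mirror the bullets above, and the input fields are assumptions about how you record each claim; Step 5’s “high confidence” tier would additionally require primary-evidence corroboration.

```python
# Minimal sketch of the Step 3 scoring rubric.
# Input fields are assumptions about how each claim is recorded.
def rate_claim(independent_sources: int,
               heavily_disputed: bool,
               contradicted_by_primary_evidence: bool) -> str:
    if contradicted_by_primary_evidence:
        return "discard"
    if independent_sources <= 1 or heavily_disputed:
        return "low confidence"
    return "medium confidence"   # >= 2 independent sources, not heavily disputed

print(rate_claim(independent_sources=3, heavily_disputed=False,
                 contradicted_by_primary_evidence=False))   # medium confidence
print(rate_claim(independent_sources=1, heavily_disputed=True,
                 contradicted_by_primary_evidence=False))   # low confidence
```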
4 Step 4 – Cross-Verify With Primary Evidence
Goal: Validate claims outside the media narrative.
- Look for:
- Official police or court statements
- Public event recordings, press briefings, or live video evidence
- Legal filings or government reports
- AI can help locate documents or transcripts, but human judgment is critical in evaluating authenticity.
5 Step 5 – Flag Uncertainty Explicitly
Goal: Avoid treating AI output as absolute truth.
- Categorize conclusions:
- High confidence: corroborated across multiple independent sources AND primary evidence
- Medium confidence: corroborated but some inconsistencies exist
- Low confidence: single-source claims, contradictory reporting, or unverifiable data
- Document any assumptions, gaps, or potential biases.
6 Step 6 – Iterate and Update
Goal: Incorporate new information as events unfold.
- Check sources daily for breaking news or updated statements
- Update your confidence ratings and notes accordingly
- Maintain a versioned record of evidence – this is crucial if stories change (e.g., rumors, early reports, corrections); a minimal logging sketch follows this step
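A minimal way to keep that versioned record is an append-only JSON-lines log. The file name and entry fields here are assumptions; any spreadsheet or database would do the same job.

```python
# Minimal sketch: append-only evidence log for Step 6.
# File name and entry fields are illustrative assumptions.
import json
from datetime import datetime, timezone

LOG_PATH = "evidence_log.jsonl"

def log_update(claim: str, confidence: str, sources: list[str], note: str = "") -> None:
    entry = {
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "claim": claim,
        "confidence": confidence,
        "sources": sources,
        "note": note,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_update("Charlie Kirk was shot on 2025-09-10",
           "medium confidence",
           ["https://apnews.com/article/example"],
           note="early reports; awaiting official statement")
```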
7 Step 7 – Meta-Analysis of Bias
Goal: Detect structural biases across sources.
- Ask yourself:
- Are mainstream sources systematically downplaying certain angles?
- Are independent sources providing new or contradicting facts?
- Does the AI’s summarization over-rely on certain outlets? (A quick domain-count sketch after this list gives one way to check.)
- Annotate any patterns – this helps prevent the AI from unintentionally enforcing biased narratives.
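One simple, objective signal of over-reliance is how often each outlet appears in the AI’s citations. Here is a minimal sketch; the cited_urls list is a placeholder for whatever citations your tool actually returned.

```python
# Minimal sketch: count how often each domain appears in AI-cited URLs.
# The cited_urls list is a stand-in for real citations from your tool.
from collections import Counter
from urllib.parse import urlparse

cited_urls = [
    "https://apnews.com/article/a", "https://apnews.com/article/b",
    "https://www.reuters.com/world/c", "https://independent-outlet.example/d",
]

domains = Counter(urlparse(u).netloc.removeprefix("www.") for u in cited_urls)
total = sum(domains.values())
for domain, count in domains.most_common():
    print(f"{domain}: {count}/{total} citations ({count / total:.0%})")
```

If one or two domains dominate the counts, the summary is leaning heavily on a narrow slice of the media landscape, which is exactly the pattern worth annotating.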
Bottom line
By combining broad source collection, AI-assisted summarization, primary evidence verification, and explicit uncertainty flagging, you can:
- Minimize reliance on mainstream outlets
- Reduce the risk of accepting a biased or incomplete narrative
- Keep human judgment at the center of verification
We can also make a visual “AI Fact-Checking Flowchart” for this workflow – something you could literally follow step by step when verifying breaking news. It would make this process much easier to execute in real time.
