LmCast :: Stay tuned in

I tried to prove I'm not AI. My aunt wasn't convinced

Recorded: March 26, 2026, 4:02 a.m.


By Thomas Germain

(Credit: Serenity Strull/ Madeleine Jett/ BBC)

I asked experts if I'm real. Bad news. Even my aunt wasn't sure if I was a deepfake. AI is so convincing that a sitting prime minister struggled to prove he's alive. You might be next.

I called up my aunt Eleanor a few days ago and asked her to help with an experiment. "It's for an article," I said. I explained that I was going to call her back, and she'd be talking either to the real me or to an AI deepfake. Could someone who's known me my whole life tell the difference?

At first, my aunt wasn't buying that any AI was involved. "Well, it sounds like you," she said.
"I think a real person uses a lot more inflection than I would expect an AI-generated voice to use." That might be true, I told her, but AI is getting pretty advanced. There was a long pause. "I was like 90% sure," she said, hesitating. "But that sounded more artificial."When we talk about deepfakes, the typical concern is about you getting tricked. Rightly so. AI fakery has been used to scam people out of large sums of money, spread misinformation and even attempt to sway elections. But what if the shoe was on the other foot? What if someone accuses you of being a deepfake? How do you prove you're real?That's a question Israeli prime minister Benjamin Netanyahu had to ask himself this month. He posted a video where a trick of the light made it look like he might have a glitchy sixth finger on his right hand, once a clear giveaway of AI deepfakes. The internet exploded with rumours that Netanyahu had died in a missile strike and Israel was covering it up. Days later, the prime minister posted a follow up video from a coffee shop, where a smiling Netanyahu held his hands up to demonstrate he had the ordinary number of fingers.This, experts tell me, is the first time the leader of a major world power has openly tried to prove they're not AI – and it failed, miserably.As you read this, a large number of people are still convinced Netanyahu is dead (and they'll tell you I'm part of the conspiracy for saying otherwise). But his proof-of-life videos made some very basic mistakes. Could I do any better? Is it still possible to prove you're not a robot? I called the experts to find out, and I'll give you a preview: things aren't good.He's not dead, folksThis actually happened to me in the wild. A few weeks ago, I wrote an article about an underused Google privacy setting. I got so worked up that I shared a link to the setting in my family group chat and urged everyone to click on it. But my mom was immediately suspicious. Good call, this was weird behaviour. 
I think I had too much coffee.

"How do I know this is really Tom and not some weird scammer?" she wrote. "Say something a scammer couldn't say." I had to think, but eventually I landed on a nickname my parents used when I was a kid. She was satisfied, but this is a lot more challenging when you're dealing with people who don't know you. Let's say you're Benjamin Netanyahu, for example, and the audience is the whole world.

So many believed Netanyahu's hand had an AI-generated sixth finger (left) that he posted a second clip to prove he's alive (Credit: Israeli Government Press Office)

Every expert I asked said Netanyahu's videos are unambiguously real. Jeremy Carrasco, co-founder of Riddance, an independent publication focused on AI-generated media, didn't take long to reach that conclusion. "In short, they're all real, and they are all just showing normal things that happen in videos," Carrasco says. The supposed sixth finger, for example, is light reflecting off Netanyahu's palm, he says. It looks weird if you hit pause at the right moment, but that's all it is.

"Six fingers is not an AI thing anymore," Carrasco says. The best AI tools stopped adding extra fingers years ago, and a model capable of producing everything else in the video wouldn't make that mistake. Other signs rule out deepfakes, too. At one point, Netanyahu bumps the microphone, producing a sound that interrupts the audio of his voice. Carrasco says this sort of continuity is incredibly difficult for AI tools to pull off. (Watch my colleagues on BBC Verify break down the false AI claims about Netanyahu in this video.)

Netanyahu's follow-up coffee shop video is real too, says Hany Farid, a digital forensics professor at the University of California, Berkeley and co-founder of GetReal Security, which works to mitigate the threat of AI deepfakes. His team ran voice analysis, frame-by-frame face detection, careful inspection of light and shadows and more.
"There's no evidence that this is AI-generated," Farid says.That wasn't enough. Netanyahu even posted a third video, but sceptics' minds were made up. But now let's talk about you and me. If Netanyahu can't prove he's real, can anyone?'It's over'As we worked through my interview questions I stopped and asked Farid if there was anything I could do, right now, to prove to him that I wasn't an AI.His answer was simple: No."There are things I could do to probe the system and make it less likely," Farid says. "If you were a full-blown agentic AI, I wouldn't hear you typing. And I can see a shadow in the background that's pretty physically consistent as you move, and a reflection in your glasses." There were other signs, too like the way I kept looking down as I took notes, something a deepfake wouldn't bother with. "But at the end of the day, you're in New York. I'm in Berkeley, California," he says. "We're on a video call. The reality is that you could be faking this."Without taking additional steps before or after our call, Farid says there's nothing I could do to make him 100% certain I was the real Tom Germain. "No," he says. "It's over."Keeping TabsThomas Germain is a senior technology journalist at the BBC. He writes the column Keeping Tabs and co-hosts the podcast The Interface. His work uncovers the hidden systems that run your digital life, and how you can live better inside them.Woolley was just as hard to convince. "I could call the BBC and ask someone to double check that that you called me", but that would take too long to figure out while we're on the phone, he says. "For the average person, and even for people who are savvy to technological manipulation, it is very difficult to verify that someone is real," says Woolley. For he knows, I'm just another robot.The solution the world's leading experts have landed on is one your grandparents could have come up with: codewords. 
You, your family, business partners and anyone else you communicate with about important subjects need to come up with a secret phrase, known to no-one else, that you can use in an emergency to verify each other's identities. Think of it like a low-tech form of the multi-factor authentication we all use to log in online.

"My wife and I have a codeword that we use if we ever get an unusual call," Farid says. "We haven't needed to use it yet, but sometimes I ask just to test her, to make sure we don't forget it."

This isn't a hypothetical concern. Deepfake scams, which use AI to convince victims they're talking to someone else, have become a key method for criminals. According to the American Association of Retired Persons (AARP), AI-enabled scams rose 20-fold between 2023 and 2025. Victims range from everyday people to big businesses. The British engineering firm Arup reportedly lost $25m (£18.7m) when attackers used a deepfaked version of the company's chief financial officer to trick an employee.

And the problem is only growing.

Bizarro land

"In the first days of Ukraine, you saw a few deepfakes, super clumsy, not particularly convincing," he says, referring to the conflict that has followed Russia's full-scale invasion of Ukraine in 2022. "Fast forward to the early days of Gaza, there's lots of fake content. It was better, pretty good. By the time we get to Venezuela? Bizarro land. I saw way more fake content than I did real content. And Iran took it to a whole new level."

In Netanyahu's case, it didn't help that his team used a fancy camera and filmed with a narrow depth of field, meaning a nice sharp foreground and a soft, blurry background – which is exactly how AI videos tend to look, Carrasco says. But by the time Netanyahu posted his coffee shop clip, our world was already saturated with fake content, raising suspicions that are hard to overcome under any circumstances.
Netanyahu's fingers, hallucinated or otherwise, are just the latest wave in an AI flood that's been rising for years.

This chaos has a name: researchers call it the "liar's dividend". It's expensive to prove something is real, but casting doubt is free. "People, including people in positions of power, can argue that genuine content – genuine evidence of them doing something – is fake," says Samuel Woolley, chair of disinformation studies at the University of Pittsburgh in the US.

Politicians and others can use the spectre of AI as a shield, crying deepfake to dismiss real evidence. But it's a double-edged sword, and that same atmosphere of distrust is coming back to bite them. "The very politicians that pushed for this lack of moderation are now, in many ways, paying the consequences," Woolley says.

More like this:
• The number one sign you're watching an AI video
• I hacked ChatGPT and Google in 20 minutes
• People are selling your home address online. This privacy tool will help

Back on the phone with my aunt Eleanor, reality was bending. It turns out she'd heard the codeword advice, and she actually had one for her kids and her husband, but I wasn't in the loop. "I've read a lot of stories like that, where they talk about voices being cloned from YouTube videos," she said. "That concerns me. It's terrifying."

She read me some jokes she found on Facebook to test whether my reaction felt authentic. I laughed, which helped a bit, but she couldn't be totally confident. We started talking about the sweater she's planning to knit me, but when I said I was thinking I might prefer black over the gold colour we'd talked about, this seemed like another red flag. "That sounds more robotic. I expected you to say that you wanted another gold sweater." Later, I told her the truth: I wasn't using AI, it was the real me. But on the call, the whole thing seemed to distress her a little bit.

"I can't be sure," she said as we got off the phone.
"But I love you, kid."--For more technology news and insights, sign up to our Tech Decoded newsletter, while The Essential List delivers a handpicked selection of features and insights to your inbox twice a week.For more science, technology, environment and health stories from the BBC, follow us on Facebook and Instagram.Keeping TabsTechnologyArtificial intelligencePsychologyThomas GermainFeaturesWatchThe Finnish shipyard making the world's toughest icebreakersTech Now heads to Finland to meet the engineers designing icebreaker ships navigating through Arctic sea ice.Tech NowLego’s new smart brickTech Now experiences Lego's new Smart Brick, designed to bring physical play to next level.Tech NowFixing fashion's erratic sizing problemTech Now meets a startup trying to fix one of the fashion industry's biggest blind spots, inconsistent sizing.Tech NowThe tactile tech giving deaf runners a fair startA gold‑medalist has developed a vibrating starting block to give deaf athletes clearer, fairer race starts.TechXploreThese futuristic screens help you navigate TokyoIn Tokyo, BBC TechXplore tests live translation and AI-powered displays that makes the city more navigable.TechXploreThe wearable tech that lets spectators feel the matchAt Tokyo's Deaflympics, deaf Judo fans aren't just watching the matches, they're feeling them, thanks to Hapbeat.TechXploreMeet MOFO: will.i.am's rapping AI toyBBC Tech Now takes us inside CES 2026 to meet musician will.i.am and his AI toy, MOFO.Tech NowThe gadgets set to change your daily health and wellnessTech Now test out new gadgets disrupting the health industry at CES 2026 in Las Vegas.Tech NowWhat's it like to meet your own avatar?Musician KT Tunstall meets her avatar as Tech Now explores music’s virtual future.Tech NowHow early filmmakers invented the internet’s funniest trendDiscover how quirky clips paved the way for viral humour, proving randomness never goes out of style.TechnologyIs this how AI might eliminate humanity?A new 
research paper predicts AI autonomy by 2027 could lead to human extinction within a decade.Tech NowThe best-case scenario for AI in schoolsAmid fears about the use of AI in classrooms, American educator Sal Khan lays out an optimistic future.Artificial IntelligenceExplaining how a touchscreen works with a sausageBritish mathematician Hannah Fry digs into the science of touchscreens.ScienceWhat it takes to write like Agatha ChristieWe explore how technology is reviving the renowned fiction writer's legacy.Tech NowWhy statistics fail to cure flying fearsWhy do flying fears persist despite falling accident rates? Learn tips to conquer your anxiety.TechnologyCan smart phones get smarterBBC Click attend Mobile World Congress to test the latest tech products and trends.TechnologyCan technology help reduce Parkinson’s symptoms?BBC Click visits a Madrid hospital to see patients treated with an ultrasound for tremors.TechnologyThe Lion King: How Mufasa was brought to lifeBBC Click speaks to the visual effects team behind the latest Disney blockbuster.TechnologyHow the TikTok ban affected US influencersBBC Click meets TikTok creator Peggy Xu who gained millions of views sharing milk videos.TechnologyIs this the world's first AI powered hotel?BBC Click's Paul Carter visits the world's first fully AI-powered hotel in Las Vegas.TechnologyMore from the BBC4 hrs agoParents should monitor children '24/7' on Roblox, says developerRoblox said safety was a top priority and it had advanced safeguards in place to keep users safe.4 hrs ago4 hrs agoOctopus boss: We've seen a 50% rise in solar panel sales since start of Iran warThe UK giant is optimistic but chief executive Greg Jackson tells the BBC he is making contingency plans.4 hrs ago5 hrs agoNurses help AI system learn how to understand DoricNurses at Inverurie Hospital are using an AI tool to take notes while helping it understand Doric.5 hrs ago5 hrs agoMeta and YouTube found liable in landmark social media addiction trialA woman 
has been awarded $6m in a verdict that could have implications for hundreds of other cases in the US.5 hrs ago11 hrs agoFirst Lady Melania Trump arrives with humanoid robot at tech summitThe robot made an appearance with First Lady Melania Trump at the White House. Trump is hosting a summit on AI, education, and protecting kids in digital spaces.11 hrs agoBritish Broadcasting CorporationHomeNewsSportBusinessTechnologyHealthCultureArtsTravelEarthAudioVideoLiveDocumentariesWeatherBBC ShopBritBoxBBC in other languagesThe BBC is in multiple languagesRead the BBC In your own languageOduu Afaan OromootiinAmharic ዜና በአማርኛArabic عربيAzeri AZƏRBAYCANBangla বাংলাBurmese မြန်မာChinese 中文网Dari دریFrench AFRIQUEHausa HAUSAHindi हिन्दीGaelic NAIDHEACHDANGujarati ગુજરાતીમાં સમાચારIgbo AKỤKỌ N’IGBOIndonesian INDONESIAJapanese 日本語Kinyarwanda GAHUZAKirundi KIRUNDIKorean 한국어Kyrgyz КыргызMarathi मराठीNepali नेपालीNoticias para hispanoparlantesPashto پښتوPersian فارسیPidginPolish PO POLSKUPortuguese BRASILPunjabi ਪੰਜਾਬੀ ਖ਼ਬਰਾਂRussian НА РУССКОМSerbian NA SRPSKOMSinhala සිංහලSomali SOMALISwahili HABARI KWA KISWAHILITamil தமிழில் செய்திகள்Telugu తెలుగు వార్తలుThai ข่าวภาษาไทยTigrinya ዜና ብትግርኛTurkish TÜRKÇEUkrainian УКРАЇНСЬКAUrdu اردوUzbek O'ZBEKVietnamese TIẾNG VIỆTWelsh NEWYDDIONYoruba ÌRÒYÌN NÍ YORÙBÁFollow BBC on:Terms of UseSubscription TermsAbout the BBCPrivacy PolicyCookiesAccessibility HelpContact the BBCAdvertise with usDo not share or sell my infoBBC.com Help & FAQsContent IndexSet Preferred SourceCopyright 2026 BBC. All rights reserved. The BBC is not responsible for the content of external sites. Read about our approach to external linking.

Summary

The rapid advancement of artificial intelligence, particularly in the creation of deepfakes, presents a novel and unsettling challenge to our understanding of reality and trust. As Thomas Germain details, this technological leap has forced even prominent figures like Benjamin Netanyahu to publicly demonstrate their authenticity, a first for the leader of a major world power. The core issue isn't simply deception; it's the erosion of confidence in visual and auditory information, which lets anyone dismiss genuine evidence as fake – a dynamic researchers call the "liar's dividend". The episode highlights how convincingly AI can mimic human characteristics, like vocal inflection and subtle physical movement, making it increasingly difficult for the average person, and even for experts, to discern what is genuine and what is fabricated.

Germain's investigation into Netanyahu's attempts to prove he is real yielded several key observations. First, choices made by Netanyahu's team, such as filming with a shallow depth of field, inadvertently made the footage resemble typical AI-generated video. Second, the sheer volume of manufactured content – exemplified by the proliferation of false claims surrounding the conflict in Gaza – has created an environment of pervasive distrust, amplifying suspicion of any single piece of media. AI's ability to convincingly mimic human behaviour is compounded by the fact that audiences are now primed to expect fakery, a climate these technologies further exploit.

However, the situation isn't entirely hopeless. Experts like Jeremy Carrasco, co-founder of Riddance, note that the best AI tools have largely moved past obvious tells such as extra fingers, but subtler giveaways remain: continuity details, like a bumped microphone audibly interrupting a speaker's voice, are still very difficult for AI to fake, even if they are increasingly hard for the untrained eye to spot. At the same time, Netanyahu's case shows the limits of rebuttal: even after multiple authentic follow-up videos and expert forensic analysis, many sceptics' minds stayed made up.

Ultimately, the challenges posed by deepfakes extend beyond individual instances of manipulation. As Samuel Woolley, chair of disinformation studies at the University of Pittsburgh, argues, the situation creates a dangerous dynamic in which people in positions of power can leverage AI-fuelled uncertainty to dismiss genuine evidence. The practical safeguard recommended by the experts Germain consulted is strikingly old-fashioned: shared codewords, secret phrases known only to trusted individuals, used to confirm identities in an emergency. It amounts to multi-factor authentication applied to interpersonal communication, a critical safeguard in a world saturated with synthetic media. The struggle to prove one's existence, as experienced by Netanyahu and explored in Germain's reporting, points to a larger battle over the foundations of trust in an increasingly digital age, one where the line between reality and fabrication blurs with each technological advance.