UAE: Deepfakes now so real ‘you can’t tell Zoom from fake videos,’ expert warns

    Deepfakes can now be so real that ‘you can’t tell Zoom from fake video,’ a tech expert highlighted recently on the sidelines of the Dubai Business Forum–USA Edition.

    The rising threat of deepfakes that can be generated in minutes dominated conversations around AI safety during Dubai Chambers’ visit to IBM Innovation Studio in New York City.

    Ben Colman, CEO and Co-founder of Reality Defender, which creates tools to spot AI-generated media, believes Dubai’s push to become a global tech hub also puts the city at the heart of important conversations about synthetic media — and the real risks it brings for businesses, governments, and everyday people.

    Last month, UAE Economy Minister Abdullah Bin Touq Al Marri also condemned a series of deepfake videos that falsely showed him promoting investment schemes, calling them completely fabricated and dangerous. Speaking at GITEX 2025 in Dubai, he had urged the public to rely solely on official channels for accurate information.

    Cybercrime doesn’t need computer science skills

    With Dubai accelerating its digital economy and preparing for widespread AI adoption, Colman's message in New York City served as both a warning and a call to action.

    The expert emphasised that one of the biggest shifts in cyber risk today is the ‘accessibility’ of AI tools that enable criminals to launch sophisticated attacks with almost no technical background.
    "To create a ransomware attack, you don't have to be a computer science expert. Now you can be my eighty-year-old mother or my eight-year-old son. They are not experts in AI or computer science," said Colman. "You go online to a range of platforms, which are free for the most part, and create fraud or dangers for others. These can create dramatic, existential harm."

    He illustrated how simple it has become to fabricate persuasive manipulated videos, playing a clip in which he appears to say: "I wanted to say that I love Reality Defender and suggest you adopt their technology." He explained how easily any public figure could be mimicked this way.

    What makes this more alarming, he said, is how little data is needed. In a world where everyone is online, a clip just a few seconds long can cause real damage.

    "So, what is interesting and also very scary is that today, for a video, a single piece-to-camera image is all you need to create a perfect deepfake."

    ‘AI will never be able to replicate what you know’

    A year ago, many could still spot inconsistencies. But Colman said the technology has now surpassed what most people can detect. "A year ago, you could tell the difference, but right now the technology is so good that in our own studies, a deepfake is more real than a screenshot from Zoom… which means Zoom looks fake and the deepfake looks real," he said. "In a deepfake, anyone can be anybody… very easy, very dangerous. So, now it's never trust and always verify."

    That’s why personal authentication — and simple human questions — may become crucial safeguards. “AI will be able to replicate you but it will never be able to replicate what you know. If it’s my family and kids I’ll ask questions like ‘What did you have for dinner last night?’”

    While some celebrities are already adopting permission-based AI to extend their presence, he warned that the same technology can be weaponised. “In good ways actors like Will Smith and others are using permission-driven AI to propagate themselves in commercials. That’s a good use case,” he said. “Challenges?…you can also use Will Smith and make him do or say absolutely anything.”

    To demonstrate the threat to the Dubai audience, Colman revealed that even the video being shown onstage was fabricated. "Hello Dubai Chambers Delegation. Deepfakes and AI-manipulated videos pose serious threats to individuals, organisations and governments because of how easy it is for bad actors to create realistic and fake personifications and read your files. That… this video (pointing to the video) is a deepfake of me."

    He then broke down the playbook used by attackers. "We are all online, so it's about a few Google searches, checking your social media, LinkedIn, collecting information… typically a face, a few seconds of audio, and if you don't have the audio… there are hundreds of platforms for real-time voice donation, and you just pick an audio clip that sounds reasonably similar," he said. Everything, from tone to dialect, can be copied.

    Deepfake videos can be created in minutes

    Bad actors then decide their goal, Colman explained. "What are you trying to achieve? Are you trying to damage someone's reputation? Are you trying to get the earliest announcements of a company? Perhaps a country leader speaking at the United Nations. How do you make it topical or viral and create a sense of connectedness? Then you create the deepfake, which takes seconds, and then you deploy it. This whole process can be done in minutes, and that's the scary part."

    The real crisis, he said, is that even experts can’t reliably tell what’s real and what’s fabricated. “So why are deepfakes problematic? It’s because individuals, law enforcement, and corporations cannot tell the difference between authentic media and deepfake,” he said. “Everything can claim to be my name, my voice, my face, my device, my connection, my IP (Internet Protocol) address… it can say yes — but it’s AI. With deepfakes, it will never be 100 per cent. It will be 99 per cent.”

    This creates fertile ground for highly convincing social-engineering attacks. “With the right amount of confidence and perceived authority, junior employees will often trust a senior figure,” he said. “If a call appears to come from a CEO or CFO and sounds reasonable, you’re not going to say, ‘Hey CEO, prove it’s really you — give me a two-factor code before you log into your email.’ No one does that.”

    Technology that detects deepfakes

    Colman warned that even more advanced capabilities are imminent. "Deepfakes will become easier and cheaper (in future). Right now, most of the deepfakes are using real-time audio. It's not so much real-time video, not because the tech is not available, but it's still too expensive to make it free," he said. "In six months, real-time deepfake video will be as cheap and as free as real-time audio."

    Reality Defender, he added, has developed tools built to keep pace with this rapid evolution. “We can detect real-time audio, video, images, and text. You don’t know whether Ben is saying something truthful or untruthful. But we do know if your name, face, or voice is AI or not. It can now (also) understand different (Arabic) dialects — UAE, Saudi, Egyptian.”
