Kristen Gyles | Is AI now too real to be safe?

Published: Friday | January 23, 2026 | 12:10 AM
This March 2025 image from the website of artificial intelligence company Xoltar shows a demonstration of one of its avatars for conducting video calls with patients.

Tupac Shakur, or ‘2Pac’, was an American rapper who died in 1996. Yet, I recently saw a video of 2Pac at the poolside of a luxury property, talking about music hits he was almost ready to release and that fans could expect in 2026. If it weren’t for the watermark indicating that the video was AI-generated, I probably would have wondered whether 2Pac had been resurrected.

This is where AI is now. It resurrects the dead.

It is becoming increasingly difficult to distinguish between the products of artificial intelligence and things and people that exist in real life. AI can be used to mimic a person’s appearance, voice and mannerisms in a way that looks real to the untrained eye. People can therefore produce pictures and videos of themselves and others saying things they did not say and doing things they did not do.

The rapid growth in the creation of this type of deepfake content is deeply disturbing. When anyone can produce a video of you engaging in whatever activity they choose, the risk of reputational damage becomes impossible to ignore. Furthermore, what recourse do I have if someone produces video ‘evidence’ placing me at the scene of a crime?

No longer do people need to take wedding photos or search for the best pictures of their deceased family members to include in funeral programmes. AI can take care of all that. AI can also produce professional headshots, graduation pictures and other portraits that can be used for a variety of purposes. There is now a much lower demand for actual photography.

IMPLICATIONS

The implications here are far-reaching. Recently, members of the government flagged as a scam an AI-generated video being circulated of the Minister of Labour and Social Security, who appeared on a talk show promoting an investment opportunity of sorts requiring a minimum investment of J$40,000 that could supposedly generate an income of J$160,000 per week. The video was totally fabricated. While I could tell that it was entirely fake, I can see how a not-so-tech-savvy person could have been fooled.

Think back to October of last year, when Hurricane Melissa made its passage across the western end of the island. Unsurprisingly, several AI-generated pictures and videos emerged, supposedly showing the hurricane in full motion or the damage it had caused.

One video, for example, depicted raging flood waters at what appeared to be the back of a hotel, with small sharks swimming around in the pool. Another AI-generated video seemed to be from the vantage point of someone in an airplane looking down at the eye of the hurricane. The effect of these manufactured videos was confusion, panic and hysteria.

Many social media influencers and content creators make AI-generated content because it generates genuine interest, whether through fear or excitement, which translates into online engagement that can then be monetised. One way to create an interesting video is to fabricate an event altogether and have AI depict it. Scammers also use AI to spread misinformation that can mislead the public into sending them money or routing funds to their criminal cronies.

DIFFERENTIATE

The question is, with AI-generated content becoming harder and harder to detect, how can we differentiate real from fake?

One dead giveaway that a video is AI-generated, and therefore cannot be trusted as a reliable information source, is the presence of a label or watermark from an AI video generator. One popular AI video generation tool is ‘Sora’, a text-to-video model developed by OpenAI. It allows users to create video content by simply uploading an image and giving the app prompts or instructions that determine the activities depicted in the video. Sora leaves a watermark of its logo on videos created on its platform, which helps users distinguish those videos from real footage.

AI is becoming much more refined and much better at mimicking real-life images and movements. For the time being, however, it is not always good at replicating fine details. A good place to start when trying to determine whether an image or video is AI-generated is to look at the fingers or toes of the person(s) in the picture. AI images and videos often depict individuals with distorted fingers, teeth and/or ears, and with unnatural movements, such as odd blinking patterns, that hint that the video is not real. Another hint that a video might be AI-generated is lip movement that is out of sync with the audio.

Furthermore, some common sense must be applied when consuming social media content nowadays. If the scenario depicted is not a likely one, it may very well not have happened. While this is not a foolproof indicator, it is something to consider.

AI takes entertainment to a new level in that it makes social media more interesting and, perhaps, more engaging, but it also turns the social media space into a cesspool of misinformation. Not everyone can distinguish between what is real and what is AI-generated. Many people watch AI-generated videos thinking they are real and make actual decisions based on false narratives arising from AI-generated content.

Of course, this opens up another conversation about trust. If you have to check every video you watch to verify that it is real, could you potentially end up dismissing real video footage thinking it might be AI? AI has evolved to a point where it looks and feels so real that it just doesn’t seem safe any more.

Kristen Gyles is a free-thinking public affairs opinionator. Send feedback to kristengyles@gmail.com and columns@gleanerjm.com.