By Editorial Staff
In an age where information flows ceaselessly through our screens, the line between truth and fiction has become increasingly blurred. Advances in technology, particularly the rise of AI-generated images, deepfakes, and fabricated news stories, have created a new reality in which distinguishing what is real from what is manufactured is more challenging than ever. This is compounded by growing distrust in traditional media in many places, including the United States: once-reliable outlets are now viewed with skepticism, as a future Life In Humanity article will expound.
In this perfect storm of manipulation and misinformation, the very foundation of our information ecosystem is under threat. With digital deception becoming ever more sophisticated, how can we, as consumers of news, navigate this maze of manipulated images, videos, and stories without falling prey to the falsehoods that flood our world? The answer lies in a return to critical thinking and media literacy, tools essential for safeguarding truth in an era dominated by digital illusion. This article is built around three main parts:
- Are people generally aware of this threat?
- Disastrous effects of digital deception
- Ways to deal with the threat
Are people generally aware of this threat?

The ease with which information can be manipulated and spread, particularly through social media, has created a new landscape in which distinguishing truth from falsehood can be extremely challenging. Detecting fake news is often impossible without thorough analysis and reliable fact-checking. Deepfakes, AI-generated images, and fabricated news stories are becoming increasingly sophisticated, often making it difficult or impossible for the public to discern what is real and what is not.
For instance, not all images that we see are real. While many images may appear authentic, advances in technology, especially in AI and digital manipulation tools, have made it increasingly difficult to distinguish genuine images from fabricated ones. Images can be altered, composited, or even generated entirely by AI, yet appear to correspond to real events or situations.
This is particularly true with the rise of deepfakes, which employ AI to create highly realistic videos or images of people doing or saying things that they never did. Similarly, photo manipulation and AI-generated art have blurred the line between reality and fabrication, making it extremely difficult for the average viewer to tell the difference.
To be frank, Life In Humanity has not managed to determine whether people are generally aware of the threat. While awareness of the danger posed by manipulated images and misinformation has been growing, Life In Humanity is convinced that it is not yet universally understood. Many people therefore likely remain unaware of how easily digital content can be altered or fabricated. The rapid advancement of AI tools, such as deepfakes and image generation, has outpaced the general public’s understanding of their potential risks. Key factors contributing to this lack of awareness include:
- Technological complexity – many people are not familiar with how digital manipulation works or with the tools behind it. AI and deepfake technology can seem like distant or niche concepts to those who do not work in the technology or media industries.
- Media saturation – the sheer volume of images, videos, and news shared daily can overwhelm people. In such a fast-paced environment, it is easy to accept content without critically evaluating its authenticity.
- Trust in sources – people often trust platforms or individuals that they follow on social media, assuming that content shared by these sources is always trustworthy, even if it’s not verified.
- Rapid spread of misinformation – once a manipulated image or piece of fake news is shared, it propagates quickly. By the time a correction is issued, the false information may already have reached a far wider audience than the correction ever will, as the toy sketch after this list illustrates.
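To make that last point concrete, here is a deliberately crude back-of-the-envelope sketch in Python. It models both the falsehood and its correction as simple branching cascades in which each share reaches a fixed number of new people per step, with the correction starting a few steps late. The parameters are invented purely for illustration; real diffusion on social networks is far messier.

```python
# Toy model of the head start a falsehood gets over its correction.
# Both spread as branching cascades: each share reaches `fanout` new
# people per step, but the correction starts `delay` steps later.
# All numbers are made up for illustration only.

def cumulative_reach(steps: int, fanout: int) -> int:
    # 1 + fanout + fanout^2 + ... : total people reached after `steps` rounds
    return sum(fanout ** s for s in range(steps + 1))

def compare(total_steps: int = 8, fanout: int = 3, delay: int = 2) -> None:
    false_reach = cumulative_reach(total_steps, fanout)
    correction_reach = cumulative_reach(total_steps - delay, fanout)
    print(f"falsehood reach:  {false_reach}")
    print(f"correction reach: {correction_reach}")
    print(f"ratio: {false_reach / correction_reach:.1f}x")

if __name__ == "__main__":
    compare()  # with these toy numbers the falsehood reaches ~9x more people
```

Even in this simplistic model, a two-step head start leaves the correction reaching only a small fraction of the falsehood’s audience, which is the dynamic the bullet above describes.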

Charles Russell Speechlys is an international law firm established on 1 January 1891 by Charles Russell, son of Lord Russell of Killowen (an MP who later served as Attorney-General under Prime Minister Gladstone and as Lord Chief Justice of England and Wales). The firm focuses on private capital, at the intersection of personal, family, and business matters. It is headquartered in London, with offices across the UK, Europe, Asia, and the Middle East.
In its 30 April 2024 article headlined “Digital Deception: The Rise of Deepfakes”, the firm affirms the point already made: “Deepfakes are manipulated audio, video or images that use artificial intelligence (AI) to create highly realistic content that can be difficult to distinguish from reality.
While this technology certainly has the potential for positive applications, the misuse of deepfakes present new and complex challenges for both individuals and businesses alike.”
Disastrous effects of digital deception
Deepfake technology is becoming a formidable tool in the spread of misinformation, creating unprecedented challenges for individuals and organizations alike. Its potential to manipulate perceptions and damage reputations has caught the attention of experts across industries.
In its June 13, 2024 article, the University of North Carolina at Greensboro states, “Over the past two decades, we’ve all developed a keen eye for spotting doctored images. ‘Is it real or photoshopped?’ has become a common question. But with the rise of generative AI, our skepticism needs an upgrade. We can describe any image in text and watch an AI bring it to life. But we now have to ask, ‘Is this photoshopped or AI-generated?’”
Generative AI dazzles with its limitless potential, transforming imagination into reality with breathtaking ease, but lurking beneath its brilliance is a shadow of risk and deception. “Generative AI is like a magic wand for creating content—images, audio, video—if you can name it, it can create it. Text-to-image models can turn your written descriptions into stunning visuals. But this technology is a double-edged sword. While it sparks creativity, it also opens doors to new forms of deception.
Deepfakes are AI-generated audio or video clips that make it look like someone is saying or doing something they never did. Sure, it’s fun to see Elvis Presley rap or a cartoon character sing pop hits. But deepfakes have a dark side. They can be used maliciously and can cause real harm.”

Charles Russell Speechlys observes, “Businesses need to be aware of the potential of deepfakes to spread misinformation about a particular topic, industry, or person, or particular entity. Deepfake technology can be used to create convincing videos of CEOs and other public figures saying or doing things that haven’t actually occurred, inflicting both serious financial and reputational damage.”
The sinister power of deepfake technology has escalated, turning digital deception into a high-stakes weapon for financial fraud. Deepfake technology has blurred the line between reality and illusion, as already explained, and a recent case in Hong Kong has stunned the corporate world. Charles Russell Speechlys explains, “As we have seen in the recent case in Hong Kong, deepfakes are increasingly being used to commit financial crimes by impersonating individuals within a company in order to obtain sensitive information.
An employee in a multinational firm’s Hong Kong office was duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations. Believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars (about $25.6 million) to the fraudsters.”
This case is confirmed by CNN’s February 4, 2024 article titled “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’.” The article reads, “A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.” “The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.” According to CNN, senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK, “(In the) multi-person video conference, it turns out that everyone [he saw] was fake.”
Chan said that the worker had grown suspicious after receiving a message purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it spoke of a secret transaction that needed to be executed. Nevertheless, Chan said, the worker dismissed his early doubts after the video call because “other people in attendance had looked and sounded just like colleagues he recognized”. The police officer further said that, believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars (about $25.6 million).
CNN points out “The case is one of several recent episodes in which fraudsters are believed to have used deepfake technology to modify publicly available video and other footage to cheat people out of money. Hong Kong police did not reveal the name or details of the company or the worker. Authorities across the world are growing increasingly concerned at the sophistication of deepfake technology and the nefarious uses it can be put to.
At the end of January, pornographic, AI-generated images of the American pop star Taylor Swift spread across social media, underscoring the damaging potential posed by artificial intelligence technology. The photos – which show the singer in sexually suggestive and explicit positions – were viewed tens of millions of times before being removed from social platforms.”
The University of North Carolina at Greensboro also provides two additional real-world examples of havoc caused by deepfake technology. “High School Scandal: an athletic director in Maryland used AI to create an audio clip of the principal making racist comments. The faked clip was shared with school faculty and caused a major upheaval.
Political Sabotage: a deepfake audio robocall of Joe Biden was sent to New Hampshire voters, telling them not to vote in the primary election. This malicious use of AI aimed to disrupt the democratic process. These examples highlight the potential for deepfakes to wreak havoc, from tarnishing personal reputations to undermining elections and causing massive financial losses.”
Ways to deal with the threat

There are efforts underway to address this issue, with media literacy programs, fact-checking initiatives, and tools designed to identify deepfakes and false images becoming more common. Still, for true widespread awareness to take hold, individuals must be actively educated about the potential for digital deception and encouraged to question and verify the information that they encounter online.
In this age of misinformation, media literacy is more crucial than ever. It is important to question the source of an image, check for inconsistencies, and cross-reference it with reliable information before assuming something is true. As technology advances, relying on visuals alone as a representation of reality becomes increasingly untenable. Encouraging critical thinking and verifying sources before accepting information as truth is essential.
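As one concrete illustration of what “questioning the source of an image” can look like in practice, here is a minimal sketch in Python, assuming the Pillow library is installed and using a hypothetical file name. It simply prints an image’s EXIF metadata. Missing metadata proves nothing (many platforms strip it), and present metadata can be forged, so treat this as a prompt for further checking rather than a verdict.

```python
# Minimal sketch of one basic verification step: inspecting an image's
# EXIF metadata with Pillow (pip install Pillow). Metadata is a weak
# signal -- it can be stripped or forged -- so this only flags things
# worth a closer look; it cannot prove an image real or fake.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> None:
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print(f"{path}: no EXIF metadata (stripped, or never embedded)")
            return
        for tag_id, value in exif.items():
            tag = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
            print(f"{tag}: {value}")

if __name__ == "__main__":
    inspect_exif("suspect_photo.jpg")  # hypothetical file name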
The University of North Carolina at Greensboro furnishes valuable insights. “Some deepfake videos will be clearly marked as fictional, often with labels on the perimeter of the video. However, many deepfakes aren’t labeled, so it’s important to know the signs.
- Unnatural human features: AI sometimes struggles with realistic human figures. Look for oddities like too many fingers, unusually wide teeth, or arms disappearing briefly during the video.
- Audio-video synchronization: often, deepfake videos have poor lip-syncing. If a person’s lips don’t match the voice, it could be a deepfake.
- Labels and disclaimers: always check the edges or corners of the video for any disclaimers indicating that the video is not real.
- Far-fetched content: if something seems too incredible to be true, it probably is. Validate extraordinary claims with other reliable sources.
- Inconsistencies in the video: look for glitches or odd movements that seem unnatural. Deepfakes can sometimes have visual inconsistencies that give them away.”
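None of these signs lends itself to a truly reliable automated check, but as a toy illustration of the “inconsistencies in the video” idea, the following sketch, assuming Python with the opencv-python and numpy packages and a hypothetical file name, measures frame-to-frame pixel change and flags abrupt jumps. It is emphatically not a deepfake detector; real detection relies on trained models, and this merely surfaces frames worth a closer manual look.

```python
# Toy illustration of the "visual inconsistencies" sign: measure the
# mean absolute pixel difference between consecutive frames and flag
# abrupt jumps. Assumes opencv-python and numpy are installed. This is
# NOT a deepfake detector -- it only points a human at suspect frames.
import cv2
import numpy as np

def flag_abrupt_frames(video_path: str, threshold: float = 30.0) -> list[int]:
    cap = cv2.VideoCapture(video_path)
    flagged, prev_gray, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or unreadable file)
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute difference between consecutive frames.
            diff = float(np.mean(cv2.absdiff(gray, prev_gray)))
            if diff > threshold:
                flagged.append(index)
        prev_gray, index = gray, index + 1
    cap.release()
    return flagged

if __name__ == "__main__":
    frames = flag_abrupt_frames("suspect_clip.mp4")  # hypothetical file name
    print(f"Frames with abrupt changes, worth a manual look: {frames}")
```

The threshold here is arbitrary; scene cuts in ordinary footage will also trip it, which is exactly why such heuristics supplement, rather than replace, the human judgment the list above calls for.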