Deepfakes are digitally manipulated recordings of individuals that make them appear to say things they never actually said, with seamlessly synthesised voices and facial features.
These creations are deliberately designed to mislead and deceive, posing a targeted threat to truth. Deepfakes use Artificial Intelligence (AI) to produce convincing audio, video, and image forgeries, in contrast to cheapfakes, which involve manual or simple digital manipulation of existing recordings.
With AI, media can be created or altered automatically: faces, speech, and gestures can be changed, and entire speeches can be fabricated and attributed to individuals who never delivered them.
The realistic appearance of these creations makes it increasingly difficult to judge what is authentic and reliable. Deepfakes pose a serious threat to organisations, as their lifelike quality can lead to damaging outcomes. Audio and visual deepfakes have been used for fraudulent activities, financial theft from organisations, personal harassment, and the dissemination of misinformation.
Various methods, such as non-algorithmic techniques, automated tools, and cryptographic or blockchain-based solutions, have been developed to identify and combat deepfakes.
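To illustrate the cryptographic approach in its simplest form, here is a minimal sketch in Python, assuming the original publisher releases a SHA-256 checksum alongside a genuine recording; the file name and checksum value below are hypothetical, and real provenance systems (digital signatures, blockchain registries) are considerably more elaborate.

    import hashlib

    def sha256_of_file(path: str) -> str:
        # Read the file in chunks and compute its SHA-256 digest.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical checksum assumed to have been published by the
    # original source alongside the genuine recording.
    PUBLISHED_HASH = "placeholder-checksum-from-publisher"

    if sha256_of_file("statement_video.mp4") == PUBLISHED_HASH:
        print("Checksum matches: the file is identical to the original.")
    else:
        print("Checksum mismatch: the file was altered or is not the original.")

Even a scheme this simple captures the principle behind cryptographic verification: any alteration to a recording, however small, changes its digest and exposes the tampering. What it cannot do is tell a viewer whether the original recording itself was genuine, which is why such checks are combined with the detection tools mentioned above.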
At the same time, financial institutions face a major challenge: the personal engagement techniques used to prevent fraud, such as phone calls and video calls, are now being exploited by criminals to commit fraud through deepfakes.
Deepfake-assisted financial crime is on the rise, including the impersonation of account holders to make unauthorised withdrawals and the manipulation of transactions through voice scams, showing how quickly fraudsters’ tactics are evolving.
Additionally, bad actors leverage deepfakes to exploit the fast-paced, far-reaching nature of social media platforms in order to undermine public trust and interfere with democratic processes. This is facilitated by automated bots, software agents created to imitate real users on social media.
Furthermore, micro-targeting is employed: individuals’ online information is used strategically to tailor misinformation and distribute it to specific, often small, audience groups. These tactics have increased the scale, efficiency, and precision of disinformation campaigns, which threaten elections and can distort public perception.
It is therefore important for individuals to be aware of the signs of deepfakes, as these fabrications may be used for malicious purposes such as harassment, intimidation, and the spread of misinformation.
To avoid falling victim to deepfakes, individuals should scrutinise emails and other communications carefully, question suspicious audio and video content, report any suspicious activity, and verify the authenticity of information by questioning it and cross-referencing it with reputable sources.
At the national and regulatory level, strategies to combat deepfakes involve developing laws to prevent the harmful use of AI and deepfake technology, and establishing frameworks to minimise the risks these technologies pose. They also depend on collaboration among governments, technology companies, and civil society, including multi-stakeholder partnerships that tackle synthetic disinformation while balancing the fight against misinformation with the protection of freedom of expression.
*Cornelia Shipindo is Manager: Cyber Security at the Communications Regulatory Authority of Namibia (CRAN)