As technology continues to shift paradigms, the line between genuine interaction and digital deception is becoming increasingly difficult to draw. Today’s cybercriminals are leveraging generative artificial intelligence (AI) to create more intricate and harder-to-detect threats. This new wave of AI-powered cybercrime represents a formidable challenge for organisations across the globe.
Generative AI, a technology known for producing lifelike text, images, and even voice imitations, is now being used to execute more convincing and elaborate cyberattacks. What used to be simple email scams and basic malware has evolved into highly realistic phishing attempts and ransomware campaigns. Deepfake technology, which can fabricate videos and audio clips that appear genuine, is particularly alarming, as it allows attackers to impersonate real individuals with unprecedented accuracy. This capability, coupled with the availability of harmful AI tools on the dark web, has armed cybercriminals with the means to carry out highly effective and destructive attacks.
While AI offers numerous benefits for businesses, including gains in efficiency and productivity, it also expands the scope of potential cyber threats. In regions like Scotland, where companies are increasingly adopting AI-driven tools, the risk of cyberattacks has grown considerably. A report from the World Economic Forum, in collaboration with Accenture, highlights that over half of business leaders believe cybercriminals will outpace defenders within the next two years. The rise in ransomware incidents, up 76% since late 2022, underlines the severity of the threat. One notable incident involved a finance executive in Hong Kong who lost $25 million after being deceived by a deepfake video call that appeared to come from his company’s CFO.
Despite the dangers posed by generative AI, it also provides opportunities to bolster cybersecurity defences. By integrating AI into their security protocols, organisations can detect and respond to threats more swiftly. AI-driven algorithms can automatically analyse code, offering insights that help predict and mitigate future cyberattacks. Incorporating deepfake detection technologies into communication platforms and monitoring systems can likewise help organisations guard against these advanced forms of deception.
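As an illustration only, the sketch below shows how an organisation might wire a deepfake detector into a video-call pipeline: each frame is scored for authenticity and a human reviewer is alerted if too many frames look suspect. The scoring function, the 0.5 threshold, and the 20% escalation ratio are all assumptions for the sake of the example, not references to any specific product.

```python
# Minimal sketch: screening video-call frames with a hypothetical deepfake detector.
# `score_frame` is assumed to return a float where higher means "more likely genuine".

from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class FrameVerdict:
    frame_index: int
    authenticity_score: float
    suspicious: bool


def screen_call_frames(
    frames: Iterable[bytes],
    score_frame: Callable[[bytes], float],
    threshold: float = 0.5,   # illustrative cut-off, not a recommended value
) -> List[FrameVerdict]:
    """Score each frame and flag those that fall below the authenticity threshold."""
    verdicts = []
    for i, frame in enumerate(frames):
        score = score_frame(frame)
        verdicts.append(FrameVerdict(i, score, suspicious=score < threshold))
    return verdicts


def should_escalate(verdicts: List[FrameVerdict], ratio: float = 0.2) -> bool:
    """Escalate to a human reviewer if the share of suspicious frames is too high."""
    if not verdicts:
        return False
    flagged = sum(v.suspicious for v in verdicts)
    return flagged / len(verdicts) >= ratio
```

In practice the detection model would come from a vendor or research library; the point of the sketch is simply that detection results need a clear escalation path into the organisation’s existing monitoring workflow.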
As companies continue to embrace AI technologies, they must prioritise security alongside innovation. Conducting thorough risk assessments before implementing new technologies is crucial to ensure they do not inadvertently increase vulnerabilities. Additionally, organisations should focus on consolidating their technological resources, opting for trusted tools that offer robust protection. Establishing clear policies and procedures to integrate AI security measures into governance frameworks is essential, especially when considering regulations like the EU AI Act. Regular training for employees on cybersecurity practices is also vital to address potential weaknesses and ensure that security protocols are consistently followed.
The rapid evolution of generative AI is reshaping cybersecurity, requiring defenders to continuously adapt to stay ahead of increasingly sophisticated cybercriminals. For businesses, particularly those in Scotland and beyond, the role of cybersecurity professionals is becoming ever more critical. These experts must develop new skills and strategies to defend against AI-driven threats. As we move forward in this digital age, the importance of cybersecurity education across all sectors cannot be overstated: it is essential to safeguarding our economic future and maintaining stability in a world where AI is increasingly taking the wheel.
The African Union Commission (AUC) is the executive and administrative body of the African Union (AU), functioning as its secretariat. It plays a crucial role in coordinating AU operations and communicating with foreign partners, much like the European Commission does within the European Union.
The chairperson of the AUC, Moussa Faki Mahamat, typically arranges formal meetings with global leaders through a “note verbale,” the diplomatic note the AU leadership regularly uses to schedule meetings with representatives of other nations or international organizations.
These routine arrangements, however, have been disrupted by AI-enabled cybercrime. Criminals successfully impersonated Mahamat and conducted meetings under his guise. The impersonation, which went so far as to mimic Faki’s voice, alarmed leaders in Europe as well as the AUC itself.
About the Impersonation Attack
The cybercriminals also spoofed email addresses, posing as the AUC’s deputy chief of staff in order to set up phone conversations between “Faki” and foreign leaders. They even joined video meetings with several European leaders, using deepfake video editing to pass as Faki.
After discovering the deception, the AUC reported the incidents and confirmed that it communicates with foreign governments through legitimate diplomatic channels, usually via their embassies in Addis Ababa, home of the AU headquarters.
The AUC has categorized these fraudulent emails as “phishing,” suggesting that the threat actors may have attempted to acquire digital identities in order to gain illicit access to critical data.
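To make the impersonation pattern concrete, the following sketch shows one simple check an email gateway might apply: flag messages whose display name matches a protected executive but whose sending domain is not on the organisation’s trusted list. The names, domains, and addresses are illustrative assumptions, not the AU’s actual configuration.

```python
# Minimal sketch: flagging possible executive-impersonation emails by comparing
# the display name against protected identities and the sender's domain against
# an allow-list. All names and domains below are placeholders.

from email.utils import parseaddr

PROTECTED_NAMES = {"moussa faki mahamat"}      # identities likely to be spoofed
TRUSTED_DOMAINS = {"example-union.org"}        # placeholder for official domains


def looks_like_impersonation(from_header: str) -> bool:
    """Return True if the display name matches a protected identity
    but the sending domain is not on the trusted list."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    name_matches = display_name.strip().lower() in PROTECTED_NAMES
    return name_matches and domain not in TRUSTED_DOMAINS


# Example: the chairperson's name used with an untrusted free-mail domain
print(looks_like_impersonation(
    "Moussa Faki Mahamat <chairperson@freemail.example>"))   # True
```

A check like this is only a first line of defence; it would normally sit alongside standard email authentication (SPF, DKIM, DMARC) and out-of-band verification of any meeting request.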
While Africa’s digital economy has had a positive impact on the continent’s overall economy, with its value estimated to reach USD 180 billion by 2025, rapid digitalization has also contributed to an increase in cyber threats. According to estimates published on the Investment Monitor website, cybercrime alone could cost the continent up to USD 4 billion annually.
While the AUC has expressed regret over the deepfake impersonation of Moussa Faki Mahamat, the organization did not provide further details about the investigation or the identity of the criminals, nor did it mention any future plans to improve its resilience against deepfake attacks.
The incident further highlights the need for more robust cybersecurity measures and careful monitoring of communication channels by governments and international organizations.