
Tech Expert Warns AI Could Surpass Humans in Cyber Attacks by 2030

 

Jacob Steinhardt, an assistant professor at the University of California, Berkeley, shared insights at a recent event in Toronto, Canada, hosted by the Global Risk Institute. During his keynote, Steinhardt, an expert in electrical engineering, computer science, and statistics, discussed the advancing capabilities of artificial intelligence in cybersecurity.

Steinhardt predicts that by the end of this decade, AI could surpass human abilities in executing cyber attacks. He believes that AI systems will eventually develop "superhuman" skills in coding and finding vulnerabilities within software.

Vulnerabilities, or weak spots in software and hardware, are what cybercriminals exploit to gain unauthorized access to systems. Once inside, attackers can launch ransomware attacks, locking users out or encrypting sensitive data and demanding a ransom for its release.

Traditionally, identifying these vulnerabilities requires painstakingly reviewing lines of code — a task that most humans find tedious. Steinhardt points out that AI, unlike humans, does not tire, making it particularly suited to the repetitive process of exploit discovery, which it could perform with remarkable accuracy.

Steinhardt’s talk comes amid rising cybercrime concerns. A 2023 report by EY Canada indicated that 80% of surveyed Canadian businesses experienced at least 25 cybersecurity incidents within the year. While AI holds promise as a defensive tool, Steinhardt warns that it could also be exploited for malicious purposes.

One example he cited is the misuse of AI in creating "deep fakes"— digitally manipulated images, videos, or audio used for deception. These fakes have been used to scam individuals and businesses by impersonating trusted figures, leading to costly fraud incidents, including a recent case involving a British company tricked into sending $25 million to fraudsters.

In closing, Steinhardt reflected on AI’s potential risks and rewards, calling himself a "worried optimist." He estimated a 10% chance that AI could lead to human extinction, balanced by a 50% chance it could drive substantial economic growth and "radical prosperity."

The talk wrapped up the Hinton Lectures in Toronto, a two-evening series inaugurated by AI pioneer Geoffrey Hinton, who introduced Steinhardt as the ideal speaker for the event.

Why AI-Driven Cybercrime Could Be Your Business's Biggest Risk


 


As technology keeps shifting paradigms, the line between genuine interactions and digital deception is becoming increasingly difficult to draw. Today’s cybercriminals are leveraging generative artificial intelligence (AI) to create more intricate, harder-to-detect threats. This new wave of AI-powered cybercrime represents a formidable challenge for organisations across the globe.

Generative AI, a technology known for producing lifelike text, images, and even voice imitations, is now being used to execute more convincing and elaborate cyberattacks. What used to be simple email scams and basic malware have developed into highly realistic phishing attempts and ransomware campaigns. Deepfake technology, which can fabricate videos and audio clips that appear genuine, is particularly alarming, as it allows attackers to impersonate real individuals with unprecedented accuracy. This capability, coupled with the availability of harmful AI tools on the dark web, has armed cybercriminals with the means to carry out highly effective and destructive attacks.

While AI offers numerous benefits for businesses, including efficiency and productivity, it also expands the scope of potential cyber threats. In regions like Scotland, where companies are increasingly adopting AI-driven tools, the risk of cyberattacks has grown considerably. A report from the World Economic Forum, in collaboration with Accenture, highlights that over half of business leaders believe cybercriminals will outpace defenders within the next two years. The rise in ransomware incidents—up 76% since late 2022— underlines the severity of the threat. One notable incident involved a finance executive in Hong Kong who lost $25 million after being deceived by a deep fake video call that appeared to be from his CFO.

Despite the dangers posed by generative AI, it also provides opportunities to bolster cybersecurity defences. By integrating AI into their security protocols, organisations can improve their ability to detect and respond to threats more swiftly. AI-driven algorithms can be utilised to automatically analyse code, offering insights that help predict and mitigate future cyberattacks. Moreover, incorporating deepfake detection technologies into communication platforms and monitoring systems can help organisations safeguard against these advanced forms of deception.

As companies continue to embrace AI technologies, they must prioritise security alongside innovation. Conducting thorough risk assessments before implementing new technologies is crucial to ensure they do not inadvertently increase vulnerabilities. Additionally, organisations should focus on consolidating their technological resources, opting for trusted tools that offer robust protection. Establishing clear policies and procedures to integrate AI security measures into governance frameworks is essential, especially when considering regulations like the EU AI Act. Regular training for employees on cybersecurity practices is also vital to address potential weaknesses and ensure that security protocols are consistently followed.

The rapid evolution of generative AI is reshaping the course of cybersecurity, requiring defenders to continuously adapt to stay ahead of increasingly sophisticated cybercriminals. For businesses, particularly those in Scotland and beyond, the role of cybersecurity professionals is becoming ever more critical. These experts must develop new skills and strategies to defend against AI-driven threats. As we move forward in this digital age, the importance of cybersecurity education across all sectors cannot be overstated; it is essential to safeguarding our economic future and maintaining stability in a world where AI is taking the steering wheel.


Jack Dorsey Warns: The Blurring Line Between Real and Fake

 

Tech billionaire Jack Dorsey, best known as the co-founder of Twitter (now X), has issued a stark warning about the future. He predicts that in the next five to ten years, it will become increasingly difficult for people to distinguish between reality and fabrication. "Don't trust; verify," he advised.

Dorsey emphasized the need for personal verification and experience in an era dominated by advanced image creation, deep fakes, and manipulated videos. "You have to experience it yourself. And you have to learn yourself. This is going to be so critical as we enter this time in the next five years or 10 years because of the way that images are created, deep fakes, and videos; you will not, you will literally not know what is real and what is fake," he stated.

He warned that the overwhelming production of artificial content will make it feel like living in a simulation. "It will be almost impossible to tell. It will feel like you're in a simulation. Because everything will look manufactured, everything will look produced. It's very important that you shift your mindset or attempt to shift your mindset to verify the things that you feel you need through your experience and your intuition," he added.

Dorsey also highlighted a concerning trend where devices are replacing functions traditionally performed by the human brain. "Devices in your bags and your pockets are taking over functions traditionally performed by the human brain, and because all these are on your phone now, you're not building those connections in your brain anymore," he warned.

The video of Dorsey's comments was posted on X, prompting a response from Elon Musk, the current owner of the social media site. Musk questioned, “How do we know we aren’t already there?"

In May 2024, Dorsey made headlines for resigning from the board of Bluesky, a social networking service he helped fund and popularize. This decision followed his regret over selling Twitter to Musk.

Dorsey also significantly reduced his list of followed accounts on X to just three: Elon Musk, Edward Snowden, and Stella Assange, wife of WikiLeaks publisher Julian Assange. This move was seen as a sign of improving relations between Dorsey and Musk. Previously, Dorsey had expressed disappointment over Musk’s takeover and drastic changes to Twitter, posting on Bluesky in 2023 that “it all went south" after the acquisition.

What are Deepfakes and How to Spot Them

 

As modern computers have become better at simulating reality, artificial intelligence (AI)-generated fraudulent videos that can easily deceive the average viewer have become commonplace.

For example, modern cinema relies heavily on computer-generated sets, scenery, people, and even visual effects. These digital locations and props have replaced physical ones, and the scenes are almost indistinguishable from reality. Deepfakes, one of the most recent trends in computer imagery, are created by programming AI to make one person look like another in a recorded video. 

What is a deepfake? 

Deepfakes resemble digital magic tricks. They use computers to create fraudulent videos or audio that appear and sound authentic. It's like filming a movie, but with real people doing things they've never done before. 

Deepfake technology relies on the interplay of two fundamental algorithms: a generator and a discriminator. These algorithms work against each other within a framework called a generative adversarial network (GAN), which uses deep learning to create and refine fake content.

Generator algorithm: The generator's principal function is to create the initial fake digital content, such as audio, photos, or videos. Its goal is to replicate the target person's appearance, voice, or expressions as closely as possible. 

Discriminator algorithm: The discriminator then examines the generator's content to determine if it appears genuine or fake. The feedback loop between the generator and discriminator is repeated several times, resulting in a continual cycle of improvement. 
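The generator/discriminator feedback loop above can be sketched in miniature. The toy below shrinks a GAN to one dimension: "real" data are samples from a normal distribution centred at 4, the generator is just a learnable mean, and the discriminator is a logistic regression on the scalar sample. This is only an illustration of the adversarial training cycle under those simplifying assumptions, not a real deepfake model.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 0.0          # generator parameter: mean of the fake distribution
w, b = 0.0, 0.0   # discriminator parameters (logistic regression)
lr = 0.05

def discriminate(x):
    """Discriminator's estimate of P(sample is real)."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

for step in range(2000):
    real = rng.normal(4.0, 1.0, 32)   # genuine samples
    fake = rng.normal(mu, 1.0, 32)    # generator's current imitations

    # Discriminator step: gradient ascent on the log-likelihood,
    # pushing D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = discriminate(real), discriminate(fake)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: shift mu so the discriminator rates fakes as real
    # (gradient of log D(fake) with respect to mu).
    d_fake = discriminate(rng.normal(mu, 1.0, 32))
    mu += lr * np.mean(1 - d_fake) * w

print(round(mu, 2))  # mu should have drifted toward the real mean of 4
```

After enough rounds of this cycle, the generator's output distribution becomes hard for the discriminator to tell apart from the real one, which is exactly the "continual cycle of improvement" described above.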

Why do deepfakes cause concerns? 

Misinformation and disinformation: Deepfakes can be used to make convincing videos or audio recordings of people saying or doing things they never did. This creates a significant risk of spreading misleading information, damaging reputations, and influencing public opinion.

Privacy invasion: Deepfake technology can violate innocent people's privacy by manipulating their images or voices for malicious purposes, resulting in harassment, blackmail, or even exploitation. 

Crime and fraud: Criminals can employ deepfake technology to imitate others in fraudulent operations, making it challenging for authorities to detect and prosecute those responsible. 

Cybersecurity: As deepfake technology progresses, it may become more difficult to detect and prevent cyberattacks based on modified video or audio recordings. 

How to detect deepfakes 

Though recent advances in generative Artificial Intelligence (AI) have increased the quality of deepfakes, we can still identify telltale signals that differentiate a fake video from an original.

- Pay close attention to the opening seconds of the video. For example, many viewers overlooked the fact that the face at the start of the viral Rashmika Mandanna video was still Zara Patel's; the deepfake swap was not activated until the person boarded the lift.

- Pay close attention to the person's facial expression throughout the video. During speech or movement, deepfakes often show irregular, unnatural changes in expression. 

- Look for lip synchronisation issues. Deepfake videos often contain minor audio/visual sync glitches. Watch a viral video several times before deciding whether it is a deepfake. 
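The "irregular variations" tip above can be automated in a crude way: measure how much each frame differs from the previous one and flag statistical outliers, which can correspond to the sudden expression glitches deepfakes sometimes show. The sketch below assumes frames are already cropped grayscale arrays and uses synthetic data with an injected glitch; real detectors use far more sophisticated features.

```python
import numpy as np

rng = np.random.default_rng(1)

def flag_irregular_frames(frames, z_threshold=3.0):
    """Return indices of frames whose change from the previous frame is a
    statistical outlier -- a crude proxy for sudden deepfake glitches.
    `frames` is a sequence of 2-D grayscale arrays."""
    diffs = np.array([np.abs(a - b).mean() for a, b in zip(frames, frames[1:])])
    z = (diffs - diffs.mean()) / diffs.std()
    # diffs[i] measures the jump into frame i+1, so flag frame i+1.
    return [i + 1 for i, score in enumerate(z) if score > z_threshold]

# Synthetic "video": mild noise, with an abrupt glitch injected at frame 60.
frames = [rng.normal(0.0, 0.01, (32, 32)) for _ in range(100)]
frames[60] = frames[59] + 5.0   # sudden large change, as a face swap might show

print(flag_irregular_frames(frames))   # frames 60 and 61 stand out
```

A single glitched frame is flagged twice (the jump into it and the jump out of it), which is itself a useful signature when reviewing footage by hand.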

Beyond individual vigilance, government agencies and tech companies should collaborate on cross-platform detection tools to curb the spread of deepfake videos.

Identity Hijack: The Next Generation of Identity Theft

 

Synthetic representations of people's likenesses, or "deepfake" technology, are not new. Picture the 2020 finale of "The Mandalorian," in which Mark Hamill appeared as a digitally de-aged, youthful Luke Skywalker. Similarly, artificial intelligence is not a novel concept. 

However, ChatGPT's launch at the end of 2022 made AI technology widely available at a low cost, which in turn sparked a competition to develop more potent models among almost all of the mega-cap tech companies (as well as a number of startups). 

For months, experts have been warning about the risks and active threats posed by the current expansion of AI, including rising socioeconomic inequality, economic upheaval, algorithmic discrimination, misinformation, political instability, and a new era of fraud. 

Over the last year, there have been numerous reports of AI-generated deepfake fraud in a variety of forms, including attempts to extort money from unsuspecting consumers, mock artists, and embarrass celebrities at scale. 

According to Hong Kong police, scammers using AI-generated deepfake technology stole nearly $25 million from a multinational firm in Hong Kong last week.

A finance employee at the company moved $25 million into specific bank accounts after speaking with several senior managers, including the company's chief financial officer, via video conference call. Apart from the worker, no one on the call was genuine. 

Despite his initial suspicions, the people on the line appeared and sounded like coworkers he recognised.

"Scammers found publicly available video and audio of the impersonation targets on YouTube, then used deepfake technology to emulate their voices... to lure the victim into following their instructions," acting Senior Superintendent Baron Chan told reporters. 

Lou Steinberg, a deepfake AI expert and the founder of cyber research firm CTM Insights, believes that as AI grows stronger, the situation will worsen. 

"In 2024, AI will run for President, the Senate, the House and the Governor of several states. Not as a named candidate, but by pretending to be a real candidate," Steinberg stated. "We've gone from worrying about politicians lying to us to scammers lying about what politicians said... and backing up their lies with AI-generated fake 'proof.'" 

"It's 'identity hijacking,' the next generation of identity theft, in which your digital likeness is recreated and fraudulently misused," he added. 

The best defence against static deepfake images, he said, is to embed micro-fingerprint technology into camera apps, which would allow social media platforms to recognise when an image is genuine and when it has been tampered with. 
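The verify-at-source principle behind that idea can be illustrated with a much simpler stand-in: sign the image bytes at capture time with a key, so any later edit breaks verification. The sketch below uses an HMAC and an illustrative device key; this is an assumption-laden simplification, since Steinberg's micro-fingerprints would tag regions of the image and likely use public-key schemes rather than a shared secret.

```python
import hashlib
import hmac

# Illustrative only: in a real scheme this key would be provisioned per
# device and never shipped in source code.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def fingerprint(image_bytes: bytes) -> str:
    """Tag computed by the camera app at capture time."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def is_untampered(image_bytes: bytes, claimed_tag: str) -> bool:
    """Platform-side check: recompute and compare in constant time."""
    return hmac.compare_digest(fingerprint(image_bytes), claimed_tag)

original = b"\x89PNG...raw image bytes..."
tag = fingerprint(original)

print(is_untampered(original, tag))          # True: unmodified image verifies
print(is_untampered(original + b"x", tag))   # False: any edit breaks the check
```

The point of the sketch is the workflow, not the cryptography: a platform that can check such tags can distinguish camera-original images from manipulated ones without needing to detect the manipulation itself.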

When it comes to interactive deepfakes (phone calls and videos), Steinberg believes the simple solution is to create a code word that can be employed between family members and friends. 

Companies, like the Hong Kong firm, should adopt rules for handling nonstandard payment requests that require codewords or confirmation via a separate channel, according to Steinberg. A video call cannot be trusted on its own; the executives involved should be called back separately and immediately.
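The two safeguards Steinberg suggests amount to a simple approval rule: release a nonstandard payment only if a pre-shared codeword checks out and the request has been confirmed over a second channel. The sketch below is a hypothetical illustration of that rule, with an invented codeword and function names; a real system would wrap this in proper workflow and audit logging.

```python
import hmac

# Agreed in person and never sent over chat or email (illustrative value).
SHARED_CODEWORD = b"blue-heron-42"

def approve_payment(codeword_given: bytes, confirmed_out_of_band: bool) -> bool:
    """Release funds only if BOTH safeguards pass: the codeword matches
    (compared in constant time) and a call-back on a separate channel
    confirmed the request."""
    codeword_ok = hmac.compare_digest(codeword_given, SHARED_CODEWORD)
    return codeword_ok and confirmed_out_of_band

print(approve_payment(b"blue-heron-42", True))    # True: both checks pass
print(approve_payment(b"blue-heron-42", False))   # False: video call alone fails
print(approve_payment(b"wrong-word", True))       # False: codeword mismatch
```

Requiring both factors means a deepfaked video call, however convincing, cannot authorize a transfer by itself.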

Impersonation Attack: Cybercriminals Impersonate AUC Head Using AI


Online fraudsters, in another shocking case, have used AI technology to pose as Moussa Faki Mahamat, the chairman of the African Union Commission. This bold cybercrime exposed gaps in the African Union (AU) leadership's communication channels, as imposters successfully mimicked Faki's voice, held video conferences with European leaders, and even set up meetings under false pretences.

About the African Union Commission and its Leadership

The African Union Commission (AUC) is an executive and administrative body, functioning as the secretariat of the African Union (AU). It plays a crucial role in coordinating AU operations and communicating with foreign partners, much like the European Commission does inside the European Union. 

The chairperson of the AUC, Moussa Faki Mahamat, often holds formal meetings with global leaders arranged through a “note verbale.” The AU leadership regularly schedules meetings with representatives of other nations and international organizations using these diplomatic notes.

These routine meetings, however, have now been disrupted by AI-enabled cybercrime. The cybercriminals successfully impersonated Mahamat, conducting meetings under his guise. The imitation, which went so far as mimicking Faki's voice, alarmed leaders in Europe and at the AUC.

About the Impersonation Attack

The cybercriminals also spoofed email addresses, posing as the AUC’s deputy chief of staff in order to set up phone conversations between “Faki” and foreign leaders. They even attended several meetings with European leaders, using deepfake video to pass as Faki.

After discovering the scheme, the AUC reported the incidents and confirmed that it communicates with foreign governments through legitimate diplomatic channels, usually their embassies in Addis Ababa, home of the AU headquarters.

The AUC has categorized these fraudulent emails as “phishing,” suggesting that the threat actors may have attempted to acquire digital identities for illicit access to critical data. 

Digitalization and Cybersecurity Challenges in Africa

While Africa’s digital economy has had a positive impact on the continent’s overall economy, with an estimated value of USD 180 billion by 2025, rapid digitalization has also brought an increase in cyber threats. According to estimates posted on the Investment Monitor website, cybercrime alone might cost the continent up to USD 4 billion annually.

While the AUC has expressed regret over the deepfake impersonation of Moussa Faki Mahamat, the organization has not provided further details of the investigation or the identity of the criminals, nor has it mentioned any plans to strengthen its defences against deepfake attacks.

The incident further highlights the need for more robust cybersecurity measures and careful monitoring of communication channels at governments and international organizations.