Hackers are turning to generative artificial intelligence chatbots, such as ChatGPT, to make their operations appear more convincing to native English speakers, a National Security Agency official said Tuesday. The agency has been investigating cyberattacks and propaganda campaigns that use such tools.
Speaking at the International Conference on Cyber Security at Fordham University in New York, NSA Cybersecurity Director Rob Joyce said the spy agency has observed hackers, cybercriminals and foreign intelligence agencies alike using chatbots to make their writing appear more natural, as if it came from native English speakers.
Cybercriminals of all skill levels are using artificial intelligence to enhance their abilities, though security experts note that AI is also helping to hunt them down. At the conference, Joyce said that Chinese hackers are using artificial intelligence to get past firewalls when infiltrating networks.
Joyce warned that hackers are using artificial intelligence to improve the quality of their English when conducting phishing scams, as well as to get technical advice when attacking or infiltrating a network.
Joyce did not mention specific cyberattacks involving the use of artificial intelligence or attribute particular activity to a state or government; his remarks focused on preventing and rooting out threats aimed at critical infrastructure and defence systems within the U.S.
Joyce pointed to recent attacks on U.S. critical infrastructure by China-backed hackers as an example of how AI technology was surfacing malicious activity and giving U.S. intelligence an edge over criminal activity. Those attacks were thought to have been made in preparation for a potential Chinese invasion of Taiwan.
Rather than using traditional malware that could be detected, Joyce said, China's state-backed hackers are exploiting vulnerabilities and implementation flaws that allow them to gain access to a network and appear legitimate and authorized to be there. The comment comes at a time when generative artificial intelligence tools are increasingly being used in cyberattacks and espionage campaigns to produce convincing computer-generated text and images.
As part of its ongoing efforts to establish new standards for AI safety and security, the Biden administration released an executive order in October aimed at strengthening protections against abuses of and errors in the technology. The Federal Trade Commission has also recently warned of the dangers associated with artificial intelligence, including ChatGPT, which has been used "to boost fraud and scams." Joyce said artificial intelligence is a powerful tool that can make an incompetent actor competent, but it will also make those who already use it more effective and more dangerous.
In 2023, the US government came under increased attack from groups linked to China and Iran that target infrastructure sites vital to energy and water production in the US. The China-backed Volt Typhoon group, for example, hacks into networks covertly and then uses their built-in network administration tools to launch attacks from within.
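This "living off the land" pattern, in which attackers run the tools a network already trusts, is often hunted by checking who is running built-in admin utilities. A highly simplified sketch of that idea is below; the tool names, log format and account names are illustrative assumptions, not details from Joyce's remarks or any real detection product.

```python
# Illustrative sketch: scan a process-creation log for built-in admin tools
# ("living off the land" binaries) being run by accounts that do not
# normally use them. Tool list and log schema are assumptions for the demo.
LOLBINS = {"netsh", "wmic", "ntdsutil", "certutil", "powershell"}

def suspicious_events(events, admin_accounts):
    """Return events where a built-in admin tool was run by a non-admin account."""
    return [e for e in events
            if e["process"] in LOLBINS and e["account"] not in admin_accounts]

log = [
    {"account": "ops-admin", "process": "netsh"},   # expected: admin running admin tool
    {"account": "svc-web", "process": "ntdsutil"},  # unusual: web service account
    {"account": "jdoe", "process": "outlook"},      # ordinary user activity
]
print(suspicious_events(log, admin_accounts={"ops-admin"}))
```

Real detections are far richer (parent process, timing, command-line arguments), but the principle is the same: legitimate tools become suspicious when the wrong account runs them.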
Although Joyce did not provide specific examples of recent cyberattacks involving artificial intelligence, he said, "They are hacking into places like electric grids, transportation pipelines, and courts, trying to get in so they can cause social disruption and panic at the time and place they choose."
Groups with strong Chinese links have been gaining access to networks by abusing implementation flaws - bugs arising from poorly implemented software updates - and then passing themselves off as legitimate users of the system.
However, their activity and traffic within the network often go beyond what is expected, producing unusual network behaviour. Joyce explained that machine learning, artificial intelligence and big data combine to help surface and expose those behaviours, which matters because on critical infrastructure networks these accounts don't behave like normal business users, and that gives defenders the advantage.
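The anomaly-detection approach Joyce describes can be sketched, in a highly simplified form, as baselining each account's normal activity and flagging sharp deviations. The account names, event counts and threshold below are illustrative assumptions, not NSA tooling.

```python
# Toy anomaly detector: compare each account's activity today against its
# own historical baseline and flag large z-score deviations.
from statistics import mean, stdev

def flag_anomalies(history, current, z_threshold=3.0):
    """history: account -> list of past daily event counts (baseline).
    current: account -> today's event count.
    Returns accounts whose today count deviates by >= z_threshold sigmas."""
    flagged = []
    for account, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            sigma = 1.0  # constant baseline: avoid division by zero
        z = (current.get(account, 0) - mu) / sigma
        if abs(z) >= z_threshold:
            flagged.append((account, round(z, 1)))
    return flagged

# A service account suddenly generating far more events than usual stands out,
# while a busy-but-normal admin account does not.
baseline = {"svc-backup": [10, 12, 11, 9, 10, 11, 12],
            "ops-admin": [50, 55, 48, 52, 49, 51, 53]}
today = {"svc-backup": 240, "ops-admin": 50}
print(flag_anomalies(baseline, today))
```

Production systems model many more signals (logon hours, hosts touched, commands run), but the core idea is the same: compromised-but-"legitimate" accounts betray themselves by behaving unlike their own history.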