A.I. could turn devices into weapons, a report warns

As advanced artificial intelligence is incorporated into a growing range of devices, the risk of hackers using these technologies to launch deadly malicious attacks is rising sharply, a report warns.

The report, titled "The Malicious Use of Artificial Intelligence," was published by 26 UK and US experts and researchers to warn against the security threats posed by the misuse of AI.

“Because cybersecurity today is largely labor-constrained, it is ripe with opportunities for automation using AI. Increased use of AI for cyber defense, however, may introduce new risks,” the report warns.

The report predicts an expansion of cyber threats as AI capabilities become more powerful and widespread. For example, self-driving cars could be tricked into misinterpreting road signs through small, carefully crafted alterations that are barely noticeable to humans, which could cause fatal road accidents.
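To illustrate the kind of weakness the report describes, here is a minimal sketch of the "fast gradient sign method" (FGSM), a well-known adversarial-example technique. The PyTorch model, the assumption of pixel values in [0, 1], and the epsilon value are illustrative choices, not details taken from the report.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast gradient sign method: nudge each pixel slightly in the
    direction that increases the classifier's loss, yielding an image
    that looks unchanged to a human but can flip the model's output."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel by +/- epsilon along the sign of the loss gradient,
    # then clamp back to the valid [0, 1] pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Against an undefended image classifier, a perturbation this small can be enough to change a "stop sign" prediction into something else entirely, which is exactly the class of failure the report's recommendations, such as red teaming and formal verification, are meant to address.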

“The use of AI to automate tasks involved in carrying out cyber attacks will alleviate the existing trade-off between the scale and efficacy of attacks,” the report said. As a result, the researchers believe the threat from labor-intensive cyber attacks such as spear phishing will grow. They also expect new attacks that exploit human vulnerabilities, for example by using speech synthesis to impersonate a trusted voice.

Malicious actors have natural incentives to experiment with using AI to attack the typically insecure systems of others, the report said. While the publicly disclosed use of AI for offensive purposes has so far been limited to experiments by “white hat” researchers, the pace of progress in AI suggests that cyber attacks exploiting machine learning capabilities are likely soon.

“Indeed, some popular accounts of AI and cybersecurity include claims based on circumstantial evidence that AI is already being used for offense by sophisticated and motivated adversaries. Expert opinion seems to agree that if this hasn’t happened yet, it will soon,” the report said.

The report highlights the need to:

  • Explore and potentially implement red teaming, formal verification, the responsible disclosure of AI vulnerabilities, security tools, and secure hardware.
  • Re-imagine norms and institutions around the openness of research, starting with pre-publication risk assessment in technical areas of special concern, central access licensing models, sharing regimes that favor safety and security, and other lessons from other dual-use technologies.
  • Promote a culture of responsibility through standards and norms.
  • Develop technological and policy solutions that could help build a safer future with AI.