Cloud Europe is a Tier IV carrier-neutral data center based in Rome's Tecnopolo Tiburtino. According to the company's website, it specializes in data center architecture and management, focusing on security and service continuity. The company creates, hosts, and operates modular infrastructure for data centers in both the private and public sectors.
1. Cloud Europe: On June 29, 2024, RansomHub claimed responsibility for infiltrating the servers of Cloud Europe, a prominent Tier IV certified data center in Rome. The attackers allegedly encrypted the servers and exfiltrated 70 terabytes of data. Among the stolen information were 541.41 gigabytes of sensitive data, including client records, financial documents, and proprietary software.
2. Mangimi Fusco: The same day, RansomHub targeted Mangimi Fusco, an animal food manufacturer. The group claimed to have stolen 490 gigabytes of confidential data, including client files, budget details, and payroll information. However, as of now, Mangimi Fusco’s website shows no signs of the reported attack, leaving room for skepticism.
3. Francesco Parisi: RansomHouse, another hacking collective, breached the website of Francesco Parisi, a group specializing in freight forwarding and shipping services. The attack occurred on May 29, 2024, and resulted in the theft of 150 gigabytes of company data. Francesco Parisi has acknowledged the breach and is working to restore normalcy while enhancing its cybersecurity defenses.
These attacks raise critical questions about the state of cybersecurity readiness among Italian businesses:
Vulnerabilities: Despite advancements in security protocols, organizations remain vulnerable to sophisticated attacks. The ability of threat actors to infiltrate well-established data centers and corporate websites highlights the need for continuous vigilance.
Data Privacy: The stolen data contains sensitive information that could be exploited for financial gain or used maliciously. Companies must prioritize data privacy and invest in robust encryption, access controls, and incident response plans.
Business Continuity: When ransomware strikes, business operations grind to a halt. Cloud Europe’s experience serves as a stark reminder that even data centers, designed to ensure continuity, are not immune. Organizations must have contingency plans to minimize disruptions.
To safeguard against ransomware and other cyber threats, companies should layer the controls outlined above: robust encryption of sensitive data, strict access controls, well-rehearsed incident response plans, and contingency planning for business continuity.
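As a purely illustrative example of the encryption piece, the sketch below uses Python's third-party cryptography library to protect a single record at rest. The file contents and key handling are assumptions made for the demonstration, not a description of how any of the companies mentioned above actually operate.

```python
# Illustrative sketch only: symmetric encryption of a sensitive record at rest,
# using the third-party "cryptography" package (pip install cryptography).
# The record and key handling are hypothetical; in practice the key would live
# in a dedicated secrets manager or HSM, never alongside the data.
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Return the authenticated ciphertext for one client record."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the original record; raises InvalidToken if it was tampered with."""
    return Fernet(key).decrypt(ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()  # in reality: load from a secrets manager
    record = b"client: Example SpA; contract value: 1,200,000 EUR"  # dummy data
    token = encrypt_record(record, key)
    assert decrypt_record(token, key) == record
    print("record encrypted and verified")
```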
Trento was the first local administration in Italy to be sanctioned by the GPDP watchdog for using data from AI tools. The city was fined 50,000 euros ($54,225) and has also been urged to delete the data gathered in the two European Union-sponsored projects.
The privacy watchdog, known as one of the most proactive authorities in the EU at assessing AI platforms' compliance with the bloc's data protection rules, temporarily banned ChatGPT, the well-known chatbot, in Italy. In 2021, the authority also found that a facial recognition system tested by the Italian Interior Ministry did not comply with privacy laws.
The rapid advance of AI across many industries has raised concerns about personal data security and privacy rights.
Following a thorough investigation of the Trento projects, the GPDP said in a statement that it had found “multiple violations of privacy regulations,” while also recognizing that the municipality had acted in good faith.
It also found that the data collected in the projects had not been sufficiently anonymized and had been unlawfully shared with third-party entities.
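To make the anonymization point concrete, the generic sketch below shows why a common shortcut, hashing identifiers, only pseudonymizes records: anyone holding the salt (or able to enumerate likely inputs) can still link them back to individuals. This illustrates the general concept and is not a description of the Trento projects' actual data pipeline.

```python
# Generic illustration: salted hashing pseudonymizes an identifier, it does not
# anonymize it. Whoever holds the salt (or can guess likely inputs) can still
# link records back to a person, which is why regulators distinguish
# pseudonymized data from truly anonymized data.
import hashlib
import hmac

SECRET_SALT = b"replace-with-a-secret-from-a-vault"  # hypothetical value

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to an opaque token (records stay linkable)."""
    return hmac.new(SECRET_SALT, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymize("mario.rossi@example.com"))  # same input -> same token every time
```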
“The decision by the regulator highlights how the current legislation is totally insufficient to regulate the use of AI to analyse large amounts of data and improve city security,” the municipality said in a statement.
Moreover, Italy's government, led by Prime Minister Giorgia Meloni, has pledged to make the AI revolution a highlight of its presidency of the Group of Seven (G7) major democracies.
In December, European Union lawmakers and governments reached a provisional agreement to regulate ChatGPT and other AI systems, bringing the technology one step closer to formal rules. A major point of contention concerns the application of AI to biometric surveillance.
In addition, researchers Federico Valentini and Alessandro Strino of Italian cybersecurity firm Cleafy have reported an ongoing financial fraud campaign, active since at least 2019, that leverages a new web-inject toolkit called drIBAN. The main goal of drIBAN fraud operations is to infect Windows workstations inside corporate environments and alter legitimate bank transfers performed by the victims, redirecting the money to an illegitimate bank account.
These accounts are either controlled by the threat actors or their affiliates, who are then tasked with laundering the stolen funds. The fraudulent transactions are often realized by means of a technique called Automated Transfer System (ATS) that's capable of bypassing anti-fraud systems put in place by banks and initiating unauthorized wire transfers from a victim's own computer.
The operators behind drIBAN have become more adept at avoiding detection and developing effective social engineering strategies, in addition to establishing a foothold for long periods in corporate bank networks. Furthermore, there are indications that the activity cluster overlaps with a 2018 campaign mounted by an actor tracked by Proofpoint as TA554 targeting users in Canada, Italy, and the U.K.
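Because ATS-style manipulation runs on the victim's own machine, checks performed in the browser cannot be trusted on their own. The hedged sketch below illustrates one generic server-side countermeasure, holding transfers whose beneficiary IBAN was never confirmed over a second channel; all names, data structures, and IBANs are hypothetical, and this is neither drIBAN itself nor any bank's actual anti-fraud logic.

```python
# Hypothetical server-side check: a web-injected transfer that swaps the
# beneficiary IBAN on the victim's workstation will not match anything the
# customer previously confirmed out-of-band (e.g. via a mobile push prompt).
from dataclasses import dataclass

@dataclass
class TransferRequest:
    account_id: str
    beneficiary_iban: str
    amount_eur: float

# Hypothetical store of beneficiaries each customer confirmed on a second device.
CONFIRMED_BENEFICIARIES: dict[str, set[str]] = {
    "acct-001": {"IT60X0542811101000000123456"},  # dummy IBAN
}

def requires_step_up(req: TransferRequest) -> bool:
    """Return True when the transfer should be held for out-of-band confirmation."""
    known = CONFIRMED_BENEFICIARIES.get(req.account_id, set())
    return req.beneficiary_iban not in known

# A transfer whose IBAN was silently swapped on the endpoint gets flagged.
req = TransferRequest("acct-001", "DE89370400440532013000", 25_000.0)
print(requires_step_up(req))  # True -> hold the transfer and re-confirm on a second channel
```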
Organisations need to be aware of these threats and take immediate action to protect their systems from cyberattacks. Italy's National Cybersecurity Agency (ACN) has reported that dozens of Italian organisations have likely been affected by the global ransomware attack, and many more have been warned to take action to avoid being locked out of their systems.
After Italy on Friday became the first Western country to block the advanced chatbot ChatGPT over a lack of transparency in its data use, Europe is wondering who will follow. Several neighboring countries have already expressed interest in the decision.
“In the space of a few days, specialists from all over the world and a country, Italy, are trying to slow down the meteoric progression of this technology, which is as prodigious as it is worrying,” writes the French daily Le Parisien.
Many cities in France have already begun their own research “to assess the changes brought about by ChatGPT and the consequences of its use in the context of local action,” reports Ouest-France.
“The city of Montpellier wants to ban ChatGPT for municipal staff as a precaution,” the paper reports. “The ChatGPT software should be banned within municipal teams, considering that its use could be detrimental.”
According to the BBC, the Irish data protection commission is following up with the Italian regulator to understand the basis for its action and "will coordinate with all E.U. (European Union) data protection authorities" in relation to the ban.
The Information Commissioner's Office, the United Kingdom's independent data regulator, also told the BBC that it would "support" AI developments while also "challenging non-compliance" with data protection laws.
ChatGPT is already restricted in several countries, including China, Iran, North Korea, and Russia. The E.U. is in the process of preparing the Artificial Intelligence Act, legislation “to define which AIs are likely to have societal consequences,” explains Le Parisien. “This future law should in particular make it possible to fight against the racist or misogynistic biases of generative artificial intelligence algorithms and software (such as ChatGPT).”
The Artificial Intelligence Act also proposes appointing one regulator in charge of artificial intelligence in each country.
The Italian situation
The Italian data protection authority explained that it was banning and investigating ChatGPT over privacy concerns about the model, which was developed by U.S. start-up OpenAI and is backed by billions of dollars in investment from Microsoft.
The decision "with immediate effect" announced by the Italian National Authority for the Protection of Personal data was taken because “the ChatGPT robot is not respecting the legislation on personal data and does not have a system to verify the age of minor users,” Le Point reported.
“The move by the agency, which is independent from the government, made Italy the first Western country to take action against a chatbot powered by artificial intelligence,” wrote Reuters.
The Italian data protection authority stated that it would not only block OpenAI's chatbot, but would also investigate whether it complied with the EU's General Data Protection Regulation.
It goes on to say that the new technology "exposes minors to completely inappropriate answers in comparison to their level of development and awareness."
According to the press release from the Italian Authority, on March 20, ChatGPT "suffered a loss of data ('data breach') concerning user conversations and information relating to the payment of subscribers to the paid service."
It also mentions the "lack of a legal basis justifying the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the platform's operation."
ChatGPT was released to the public in November and was quickly adopted by millions of users who were impressed by its ability to answer difficult questions clearly, mimic writing styles, write sonnets and papers, and even pass exams. ChatGPT can also be used without any technical knowledge to write computer code.
“Since its release last year, ChatGPT has set off a tech craze, prompting rivals to launch similar products and companies to integrate it or similar technologies into their apps and products,” writes Reuters.
"On Friday, OpenAI, which disabled ChatGPT for users in Italy in response to the agency's request, said it is actively working to reduce the use of personal data in training its AI systems like ChatGPT."
According to Euronews, the Italian watchdog has now asked OpenAI to "communicate within 20 days the measures undertaken" to remedy the situation, or face a fine of €20 million ($21.7 million) or up to 4% of annual worldwide turnover.
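As a rough illustration of how such a cap is computed, the snippet below takes the higher of the two ceilings for a few invented turnover figures; the numbers are made up for the example and say nothing about OpenAI's actual finances.

```python
# Illustrative arithmetic only: a GDPR-style cap of EUR 20 million or 4% of
# annual worldwide turnover, whichever is higher. Turnover figures are invented.
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

for turnover in (100e6, 500e6, 2e9):  # hypothetical turnovers in EUR
    print(f"turnover {turnover / 1e6:,.0f}M EUR -> cap {max_fine_eur(turnover) / 1e6:,.0f}M EUR")
```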
The announcement comes after Europol, the European police agency, warned on Monday that criminals were ready to use AI chatbots like ChatGPT to commit fraud and other cybercrimes. The rapidly evolving capabilities of chatbots are likely to be quickly exploited by those with malicious intent, from phishing and misinformation to malware, Europol warned in a report.