
Cyber Theft Hits Providence School District Data

On Friday, Providence Public School officials were close to finalizing a credit-monitoring agreement for the district's teachers and staff following the recent ransomware attack on the district's network. Then, over the weekend, data stolen from the Providence Public School District (PPSD) appeared on a publicly accessible website alongside a video preview.

A cybercriminal group called Medusa appears to control the ransom page where the 201 gigabytes of data allegedly stolen from the district in September were previewed; although described as a dark web leak site, the page can be reached through any ordinary internet browser. The district hired an undisclosed "vendor with expertise in cyber-security" to conduct an ongoing analysis and audit of the network alongside its third-party IT provider.

The cyberattack was reported to the FBI, the Department of Homeland Security, and the Rhode Island State Police. The district, which has been tight-lipped about the matter, had not previously disclosed the nature of the security breach. On Sept. 11, IT staff were instructed to shut down the entire network after abnormal activity was detected, and officials declined to provide further details. For more than a week, teachers and students have been unable to access online curriculum, email, or computers.

As the district works on forensics to determine the cause of the breach, a credit monitoring agreement has been finalized with a vendor not yet identified, and a letter explaining how staff can access these services will be distributed to employees "very soon," district spokesperson Jay G. Wégimont said in a letter to staff. According to Attorney General's office spokesperson Brian Hodge, the office has still not been officially notified of the data breach and is awaiting formal notification from the district.

Upon confirming a breach of personal information, any municipal or government agency must notify the Attorney General's office, credit reporting agencies, and affected individuals within 30 days of the incident. In a Sept. 25 letter from Superintendent Javier Montañez to the Providence School Board, PPSD first used the term "unauthorized access" to describe the incident, though the word "breach" appeared in the public statement the Providence School Board issued on Sept. 18.

It is "encouraging" that the Providence school district is informing potentially affected employees and finalizing the credit monitoring contract as quickly as possible, mayoral spokesperson Anthony Vega said in an email to Rhode Island Current on Tuesday. A spokesperson for the Providence City Council said in an email that the council would not comment. The governor's office did not respond to requests for comment.

Rogel has not responded to repeated requests for comment from Rhode Island Current. There appeared to be a discrepancy between the school board president's use of the term "breach" and the district's official language, which avoided stating the exact nature of the problem. The PPSD community was informed on Sept. 12 that the district's network had experienced "irregular activity," which ultimately led IT staff to cut off internet access to offices and schools across the district.

Internet access remains largely unavailable across Providence schools, aside from a fleet of WiFi hotspots deployed to provide connectivity in the absence of the main network. A letter PPSD sent to residents on Sept. 16 said a forensic analysis was still underway and that no evidence had been found that PPSD data had been compromised.

However, Medusa appeared to claim credit for the "irregular activity" on Monday, posting to its publicly accessible ransom blog 41 watermarked, sometimes partially obscured screenshots previewing the contents of the 201 gigabytes of data the hackers claim to have stolen. The screenshots included identifying information, such as serial numbers of employee cell phones and parent contact information, that helped verify the contents of the data.

Medusa ransomware is a dangerous strain of malware that works quietly in the background after penetrating a system, accumulating exploitable data. Once it has harvested enough, the malware encrypts files to lock users out. Victims then receive ransom notes demanding payment in exchange for the release of their files. There is also a growing trend of "double extortion," in which hackers not only steal files but threaten to sell or publicly release the data if they are not paid.

The ransom page indicates that, for a payment of $1 million, PPSD can retrieve or delete its data; paying $100,000 adds a day to the countdown timer. Based on the hackers' timer, the payment deadline falls on the morning of Sept. 25. Deloitte, meanwhile, released a report on Monday showing that state-level IT officials and security officers are often uncertain about the budgets their states allocate to cybersecurity.

"The attack surface is increasing as state leaders become more reliant on information when it comes to operating government itself as the use of information is becoming more central," Srini Subramanian, a principal at Deloitte & Touche LLP, told States Newsroom in an interview. Chief information security officers (CISOs) face mounting challenges in ensuring that state IT infrastructure withstands ever-increasing cyber threats. That difficulty was reflected in the survey results: almost half of all respondents did not know their state's cybersecurity budget. Around 40% of state IT officers reported needing more money to comply with regulations or meet other legal requirements.

Those findings echo a 2023 report from Moody's Ratings, which scores and analyzes municipal bonds. Robust cybersecurity practices can reduce an organization's exposure to threats, but initiatives that are difficult to implement and pull resources away from core business functions may pose a credit challenge, according to Gregory Sobel, a Moody's analyst and assistant vice president.

The Moody's study also found that 92% of local governments carry cyber insurance, roughly double the share of five years ago. That popularity has come with higher premiums: one South Carolina county saw its annual premium jump from $70,000 in 2021 to $210,000 in 2022. Beyond the higher costs, insurers now impose stricter risk-management requirements, such as better firewalls, consistent data backups, and multi-factor authentication, that must be met before a policy will pay out.

In an email exchange with Rhode Island Current, Douglas W. Hubbard, CEO of the consulting firm Hubbard Decision Research and author of "How to Measure Anything in Cybersecurity Risk," said schools should make use of the low-cost, free, or shared resources available to them to manage cyber risk more effectively.

Ethics and Tech: Data Privacy Concerns Around Generative AI


The tech industry is embracing generative AI, but the conversation around data privacy has become increasingly important. The recent "State of Ethics and Trust in Technology" report by Deloitte highlights the pressing ethical considerations that accompany the rapid adoption of these technologies. According to the report, 30% of organizations have adjusted new AI projects, and 25% have modified existing ones, in response to the AI Act.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse.
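One common step toward the anonymization the paragraph describes is pseudonymization: replacing direct identifiers with salted hashes so records remain linkable for analysis without exposing raw personal data. The sketch below is a minimal illustration, not a compliance mechanism; the field names and salt are invented, and under GDPR pseudonymized data is still personal data and needs further safeguards.

```python
import hashlib

# Hypothetical identifier fields; a real schema would differ.
PII_FIELDS = {"name", "email"}

def pseudonymize(record, salt="example-salt"):
    """Return a copy of `record` with PII fields replaced by salted SHA-256 hashes.

    Hashing is deterministic for a given salt, so the same person maps to the
    same token across records, preserving linkability without raw identifiers.
    """
    clean = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            clean[key] = digest[:16]  # truncated token for readability
        else:
            clean[key] = value  # non-PII fields pass through unchanged
    return clean

record = {"name": "Jane Doe", "email": "jane@example.com", "grade": "A"}
safe = pseudonymize(record)
print(safe["grade"])                    # → A
print(safe["name"] != record["name"])   # → True
```

Note the design trade-off: a fixed salt keeps tokens stable for joins, but anyone holding the salt can re-identify records by hashing candidate names, which is why the salt itself must be access-controlled.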

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.
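The continuous monitoring mentioned above can start with something as simple as comparing favorable-outcome rates across groups, a metric often called the demographic parity gap. The sketch below uses invented data and an illustrative threshold; it is one basic check among many, not a complete fairness audit.

```python
# Minimal fairness check: the gap between the highest and lowest
# favorable-outcome rates across groups (demographic parity difference).

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Max minus min selection rate across all groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical decisions: 1 = favorable (e.g., resume advanced), 0 = not.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% selected
}

gap = parity_gap(outcomes)
print(f"demographic parity gap: {gap:.3f}")  # → demographic parity gap: 0.375
if gap > 0.1:  # illustrative review threshold, not a legal standard
    print("flag for review")
```

Running such a check on every model update, rather than once at deployment, is what turns it into the ongoing monitoring the report calls for; a large gap is a prompt for investigation, not proof of discrimination by itself.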

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.

Ethical Considerations and Trust

80% of respondents are required to complete mandatory technology ethics training, a 7% increase since 2022. Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI. Organizations can address these concerns in several ways:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.