
How Security Teams Can Turn AI Into a Practical Advantage

 



Artificial intelligence is now built into many cybersecurity tools, yet its presence is often hidden. Systems that sort alerts, scan emails, highlight unusual activity, or prioritise vulnerabilities rely on machine learning beneath the surface. These features make work faster, but they rarely explain how their decisions are formed. This creates a challenge for security teams that must rely on the output while still bearing responsibility for the outcome.

Automated systems can recognise patterns, group events, and summarise information, but they cannot understand an organisation’s mission, risk appetite, or ethical guidelines. A model may present a result that is statistically correct yet disconnected from real operational context. This gap between automated reasoning and practical decision-making is why human oversight remains essential.

To manage this, many teams are starting to build or refine small AI-assisted workflows of their own. These lightweight tools do not replace commercial products. Instead, they give analysts a clearer view of how data is processed, what is considered risky, and why certain results appear. Custom workflows also allow professionals to decide what information the system should learn from and how its recommendations should be interpreted. This restores a degree of control in environments where AI often operates silently.

AI can also help remove friction in routine tasks. Analysts often lose time translating a simple question into complex SQL statements, regular expressions, or detailed log queries. AI-based utilities can convert plain language instructions into the correct technical commands, extract relevant logs, and organise the results. When repetitive translation work is reduced, investigators can focus on evaluating evidence and drawing meaningful conclusions.
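
As a rough illustration of that pattern, here is a minimal Python sketch: a plain-language question is handed to a model, the suggested filter is shown to the analyst for review, and only an approved pattern is run against the logs. The `ask_model` helper is a hypothetical stand-in for whatever LLM client a team actually uses, so treat this as a sketch rather than a finished tool.

```python
# Minimal sketch: turning a plain-language question into a reviewable log filter.
# `ask_model` is a hypothetical stand-in for whatever LLM client the team uses.
import re
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM call; wire this to your model of choice."""
    raise NotImplementedError("replace with a real model client")

def build_log_filter(question: str) -> re.Pattern:
    prompt = (
        "Return ONLY a Python regular expression (no prose) that matches log "
        f"lines answering this question: {question}"
    )
    suggestion = ask_model(prompt).strip()
    print(f"Model suggested pattern: {suggestion!r}")   # analyst reviews before use
    if input("Use this pattern? [y/N] ").lower() != "y":
        raise SystemExit("Pattern rejected by analyst")
    return re.compile(suggestion)                        # raises if the regex is invalid

def search_logs(path: str, pattern: re.Pattern) -> list[str]:
    return [line for line in Path(path).read_text().splitlines() if pattern.search(line)]

# Example: hits = search_logs("auth.log", build_log_filter("failed SSH logins from new IPs"))
```

The explicit confirmation step mirrors the point above: the model handles the translation work, but the analyst still reads and approves what actually runs.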

However, using AI responsibly requires a basic level of technical fluency. Many AI-driven tools rely on Python for integration, automation, and data handling. What once felt intimidating is now more accessible because models can draft most of the code when given a clear instruction. Professionals still need enough understanding to read, adjust, and verify what the model generates. They also need awareness of how AI interprets instructions and where its logic might fail, especially when dealing with vague or incomplete information.

A practical starting point involves a few structured steps. Teams can begin by reviewing their existing tools to see where AI is already active and what decisions it is influencing. Treating AI outputs as suggestions rather than final answers helps reinforce accountability. Choosing one recurring task each week and experimenting with partial automation builds confidence and reduces workload over time. Developing a basic understanding of machine learning concepts makes it easier to anticipate errors and keep automated behaviours aligned with organisational priorities. Finally, engaging with professional communities exposes teams to shared tools, workflows, and insights that accelerate safe adoption.

As AI becomes more common, the goal is not to replace human expertise but to support it. Automated tools can process large datasets and reduce repetitive work, but they cannot interpret context, weigh consequences, or understand the nuance behind security decisions. Cybersecurity remains a field where judgment, experience, and critical thinking matter. When organisations use AI with intention and oversight, it becomes a powerful companion that strengthens investigative speed without compromising professional responsibility.



Global Executives Rank Misinformation, Cyber Insecurity and AI Risks as Top Threats: WEF Survey 2025

 

Business leaders across major global economies are increasingly concerned about the rapid rise of misinformation, cyber threats and the potential negative impacts of artificial intelligence, according to new findings from the World Economic Forum (WEF).

The WEF Executive Opinion Survey 2025, based on responses from 11,000 executives in 116 countries, asked participants to identify the top five risks most likely to affect their nations over the next two years from a list of 34 possible threats.

While economic issues such as inflation and downturns, along with societal challenges like polarization and inadequate public services, remained dominant, technology-driven risks stood out prominently in this year’s results.

Within G20 nations, concerns over AI were especially visible. “Adverse outcomes of AI technologies” emerged as the leading risk in Germany and the fourth most significant in the US. Australian executives similarly flagged “adverse outcomes of frontier technologies,” including quantum innovations, as a top threat.

Misinformation and disinformation ranked as the third-largest concern for executives in the US, UK and Canada. Meanwhile, in India, cyber insecurity—including threats to critical infrastructure—was identified as the number one risk.

Regionally, mis/disinformation ranked second in North America, third in Europe and fourth in East Asia. Cyber insecurity was the third-highest risk in Central Asia, while concerns around harmful AI outcomes placed fourth in South-east Asia.

AI’s influence is clearly woven through most of the technological risks highlighted. The technology is enabling more sophisticated disinformation efforts, including realistic deepfake audio and video. At the same time, AI is heightening cyber risks by empowering threat actors with advanced capabilities in social engineering, reconnaissance, vulnerability analysis and exploit development, according to the UK’s National Cyber Security Centre (NCSC).

The NCSC’s recent threat outlook cautions that AI will “almost certainly” make several stages of cyber intrusion “more effective and efficient” in the next few years.

The survey’s references to “adverse outcomes” of AI also include potential misuse of agentic or generative AI tools and the manipulation of AI models for disruptive, espionage-related or malicious purposes.

A study released in September found that 26% of US and UK organizations experienced a data poisoning attack in the past year, underscoring the growing risks.

“With the rise of AI, the proliferation of misinformation and disinformation is enabling bad actors to operate more broadly,” said Andrew George, president of Marsh Specialty. “As such, the challenges posed by the rapid adoption of AI and associated cyber threats now top boardroom agendas.”

London Councils Hit by Cyberattacks Disrupting Public Services and Raising Security Concerns

 

Multiple local authorities across London have been hit by cyber incidents affecting operations and public services, according to reports emerging overnight. The attacks have disrupted essential council functions, including communication systems and digital access, prompting heightened concern among officials and cybersecurity experts. 

Initial reporting from the BBC confirmed that several councils experienced operational setbacks due to the attack. Hackney Council elevated its cybersecurity alert level to the highest classification, while Westminster City Council acknowledged challenges with public contact systems. The Royal Borough of Kensington and Chelsea also confirmed an active investigation into the breach. Internal messages seen by the Local Democracy Reporting Service reportedly advised employees to follow emergency cybersecurity protocols and noted that at least one affected council temporarily shut down its networks to prevent further compromise. 

In a public statement, Kensington and Chelsea Council confirmed the incident and stated that it was working alongside cybersecurity consultants and the U.K. National Cyber Security Centre to secure systems and restore functionality. The council also confirmed that it shares certain IT infrastructure with Westminster City Council, and both organisations are coordinating their response. However, Hackney Council later clarified that it was not impacted by this specific incident, describing reports linking it to the breach as inaccurate. 

Hackney stated that its systems remain operational and emphasised that staff have been reminded of ongoing data protection responsibilities. Mayor of London Sadiq Khan commented that cybercriminals are increasingly targeting public-sector systems and stressed the importance of improving resilience across government infrastructure. Security specialists have also issued warnings following the incident. Dray Agha, senior director of security operations at Huntress, described the attack as a stark example of the risks associated with shared government IT frameworks. Agha argued that while shared digital systems may be cost-efficient, they can significantly increase exposure if an attacker gains access to one connected organisation. 

Rebecca Moody, head of data research at Comparitech, said the disruption aligns with common indicators of ransomware activity, noting both operational outages and possible data exposure. She added that government bodies remain among the most frequent targets of cyber extortion, with global data showing 174 confirmed attacks on government institutions so far in 2025, affecting more than 780,000 records and averaging ransom demands of roughly $2.5 million. Ian Nicholson, head of incident response at Pentest People, warned that the consequences extend beyond system outages. 

Councils hold highly sensitive and regulated personal information, he noted, and cyber incidents affecting the public sector can directly impact citizen-facing services, particularly those tied to social care and emergency support. As investigations continue, affected authorities have stated that their primary focus remains on safeguarding resident data, restoring services, and preventing further disruption.

The New Content Provenance Report Will Address GenAI Misinformation


The GenAI problem 

Today's information environment includes a wide range of communication. Social media platforms have enabled instant reposting and commenting. These platforms are useful for both content consumers and creators, but they bring their own challenges.

The rapid adoption of Generative AI has led to a significant increase in misleading content online. These chatbots have a tendency to generate false information with no factual backing. 

What is AI slop?

The internet is filled with AI slop: junk-like content made with minimal human input. There is currently no mechanism to limit the mass production of harmful or misleading content that can impair human cognition and critical thinking. This calls for a robust mechanism to address the new challenges that the current system is failing to tackle. 

The content provenance report 

To help restore the integrity of digital information, the Canadian Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have launched a new report on public content provenance. Provenance means "place of origin." To build stronger trust with external audiences, businesses and organisations must improve the way they manage the source of their information.

The NCSC's chief technology officer said that the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed.” 

What is next for Content Integrity?

The industry is implementing some measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA), an effort backed by generative AI developers and tech giants including Meta, Google, OpenAI, and Microsoft. 

Currently, there is a pressing need for interoperable standards across media types such as images, video, and text documents. Although content provenance technologies exist, the field is still at a nascent stage. 

What is needed?

The core technology relies on trusted timestamps and cryptographically signed metadata to prove that content has not been tampered with. But obstacles remain in developing these secure technologies, including how and when they are applied.
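
As a rough sketch of the underlying idea, the following Python example binds a content hash, a timestamp, and an edit history into a signed record, using a shared HMAC key purely for illustration. Real provenance schemes such as C2PA rely on certificate-based signatures and embedded manifests rather than a shared secret, so treat this only as a demonstration of how tampering becomes detectable.

```python
# Minimal, illustrative sketch of a signed provenance record, assuming a shared
# HMAC key (real schemes use certificate-based signatures and embedded manifests).
import hashlib, hmac, json, time

SIGNING_KEY = b"replace-with-a-real-key"   # assumption: key management is out of scope

def make_record(content: bytes, creator: str, edits: list[str]) -> dict:
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "timestamp": int(time.time()),
        "edits": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(content: bytes, record: dict) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

# rec = make_record(b"original article text", "newsroom@example.org", ["initial publication"])
# verify_record(b"original article text", rec)   # True
# verify_record(b"edited article text", rec)     # False: content no longer matches the record
```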

Today's technology also places the burden on the end user to interpret the provenance data. 

A provenance system must allow a user to see who or what made the content, when it was made, and what edits or changes were applied. Threat actors have started using GenAI media to make scams believable, and it has become difficult to differentiate between what is fake and what is real. That is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others navigate this gray area with more clarity.


Balancing Rapid Innovation and Risk in the New Era of SaaS Security


 

The accelerating pace of technological innovation is leaving a growing number of organizations unwittingly exposed to serious security risks as they expand their reliance on SaaS platforms and experiment with emerging agent-based AI in an effort to thrive in the age of digital disruption. Businesses are increasingly embracing cloud-based services to deliver enterprise software to their employees at breakneck speed. 

With this shift toward cloud-delivered services, organizations adopt new features rapidly, often without pausing to implement, or even evaluate, the basic safeguards needed to protect sensitive corporate information. This unchecked acceleration of SaaS adoption is creating a widening security gap and has renewed the urgent need for action from the information security community and from those responsible for managing SaaS ecosystems. 

Although frameworks such as the NIST Cybersecurity Framework (CSF) have guided InfoSec professionals for many years, many SaaS teams are only now beginning to apply its core functions (Govern, Identify, Protect, Detect, Respond, and Recover), particularly as CSF 2.0 emphasizes identity as a cornerstone of cyber defense to a degree unmatched by previous versions. 

Against this backdrop, Silverfort's identity-security approach is one of many emerging to help organizations meet these evolving standards, allowing them to extend MFA to vulnerable systems, monitor lateral movement in real time, and enforce adaptive controls more precisely. All of these developments point to a critical moment for enterprises, which must balance relentless innovation with uncompromising security in an increasingly SaaS-first, AI-driven world. 

Enterprise SaaS architecture is evolving into expansive, distributed ecosystems built on multitenant infrastructure, microservices, and an ever-expanding web of open APIs. Keeping up with the sheer scale and fluidity of modern operations is becoming increasingly difficult for traditional security models. 

This increasing complexity has led enterprises to focus more on intelligent, autonomous security measures, using behavioral analytics, anomaly detection, and AI-driven monitoring to identify threats well before they become active. 

Unlike conventional signature-based tools, advanced systems can detect subtle deviations from normal user behavior in real time and neutralize risks that would otherwise remain undetected. Innovators in the SaaS security space, such as HashRoot, are leading the way by integrating AI into the core of SaaS security workflows. 

HashRoot's AI Transformation Services combine predictive analytics with intelligent misconfiguration detection to modernize aging infrastructure, strengthen security postures, and build proactive defenses that can keep pace with the evolving threat landscape of 2025 and the unpredictable threats ahead. 

Over the past two years, the rapid adoption of artificial intelligence within enterprise software has dramatically transformed the SaaS landscape. According to new research, 99.7 percent of businesses rely on applications with built-in AI capabilities, reflecting how the technology boosts efficiency and speeds up decision-making. 

AI-enhanced SaaS tools are now common in the workplace, and these systems are woven into nearly every aspect of the work process. However, as organizations grapple with this sweeping integration of AI, a whole new set of risks emerges. 

Among the most pressing is the loss of control over sensitive information and intellectual property: because AI models often consume such data, organizations face complex questions about confidentiality, governance, and long-term competitive exposure. 

Meanwhile, the threat landscape is shifting as malicious actors deploy sophisticated impersonator applications that mimic legitimate SaaS platforms to trick users into granting access to confidential corporate data. The challenge is compounded because AI-related vulnerabilities have traditionally been identified and handled manually, an approach that consumes significant resources and slows the response to fast-evolving threats. 

With the growing reliance on cloud-based, AI-driven software as a service, there has never been a greater need for automated, intelligent security mechanisms. It is also becoming clear to CISOs and IT teams that disciplined SaaS configuration management is a critical priority, one that aligns closely with the CSF's Protect function under Platform Security. Organizations have learned in recent years that they cannot rely solely on cloud vendors for secure operation. 

A significant share of cloud-related incidents can be traced back to preventable misconfigurations. Modern risk governance increasingly relies on establishing clear configuration baselines and ensuring visibility across multiple platforms. While centralized tools can simplify oversight, no single solution covers the full spectrum of configuration challenges; effective protection combines multi-SaaS management systems, native platform controls, and the judgment of skilled security professionals working within a defense-in-depth model. 

SaaS security is never static, so continuous monitoring is indispensable for protecting against persistent threats such as unauthorized changes, accidental modifications, and gradual drift from security baselines. Agentic AI is beginning to play a transformative role here. 

It can detect configuration drift at scale, correct excessive permissions, and maintain secure settings at a pace that humans alone cannot match. Even so, configuration and identity controls are not all it takes to secure an organization. Many organizations continue to rely on what is referred to as an “M&M security model”: a hardened outer shell with a soft, vulnerable center.
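
To make the drift-detection idea concrete, here is a minimal Python sketch that compares a tenant's exported settings against an agreed baseline and reports any deviation. The baseline values and the `fetch_current_settings` helper are illustrative assumptions; real platforms expose settings through their own admin APIs.

```python
# Minimal drift-detection sketch, assuming each SaaS tenant's settings can be
# exported as a flat dict. `fetch_current_settings` is a hypothetical placeholder.
BASELINE = {
    "mfa_required": True,
    "session_timeout_minutes": 30,
    "external_sharing": "disabled",
}

def fetch_current_settings(tenant: str) -> dict:
    """Hypothetical placeholder: pull live settings from the tenant's admin API."""
    raise NotImplementedError

def detect_drift(tenant: str) -> list[str]:
    current = fetch_current_settings(tenant)
    findings = []
    for key, expected in BASELINE.items():
        actual = current.get(key)
        if actual != expected:
            findings.append(f"{tenant}: {key} is {actual!r}, baseline expects {expected!r}")
    return findings

# for tenant in ("crm", "hr-suite", "wiki"):
#     for finding in detect_drift(tenant):
#         print(finding)   # route findings to ticketing or automated remediation
```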

Once a valid user credential or API key is compromised, an attacker may pass straight through perimeter defenses and reach sensitive data largely unchallenged. A strong SaaS data governance model, built on identifying, protecting, and recovering critical information, is essential to overcoming these challenges. This effort relies on accurate data classification, which ensures that high-value assets receive protection from unauthorised access, field-level encryption, and adequate safeguards when they are copied into lower-security environments. 

Automated data masking now plays a critical role in preventing production data from leaking into these environments, where security controls are often weak and third parties frequently have access. When personal information is used in testing, the same level of oversight is required as for production data in order to comply with evolving privacy regulations, and this evaluation must be repeated periodically as policies and administrative practices change. 
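
A simple illustration of that masking step, assuming records arrive as dictionaries and that classification has already flagged which fields are sensitive:

```python
# Minimal masking sketch for copying records into a lower-security test environment.
# The field names and masking rule are illustrative assumptions; real classification
# should decide which fields count as sensitive.
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone", "national_id"}

def mask_value(field: str, value: str) -> str:
    # Deterministic pseudonym so joins across tables still line up in test data.
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:10]
    return f"masked-{field}-{digest}"

def mask_record(record: dict) -> dict:
    return {
        field: mask_value(field, str(value)) if field in SENSITIVE_FIELDS else value
        for field, value in record.items()
    }

# mask_record({"name": "A. Analyst", "email": "a.analyst@example.org", "plan": "pro"})
# -> name and email are replaced with pseudonyms, "plan" is copied through unchanged
```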

Within SaaS ecosystems, it is equally important to ensure that data remains both accurate and available. The NIST CSF emphasizes a backup strategy that preserves data, allows precise recovery, and maintains uninterrupted operation; in SaaS, however, the service provider is responsible only for the reliability of the underlying infrastructure, leaving customers to safeguard the data itself. 

Unlike traditional enterprise IT, which often relies on broad rollbacks to previous system states, modern SaaS environments require the ability to recover only the affected data with minimal disruption. This kind of granular resilience is crucial for maintaining continuity, especially because agentic AI systems must have accurate, up-to-date information to function effectively and securely. 
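
The following toy sketch illustrates the granular-recovery idea: only records known to be affected are rolled back to their backed-up values, while everything else is left untouched. The dictionary-based data model is an assumption made for brevity, not how any particular backup product works.

```python
# Illustrative sketch of granular recovery: restore only the records an incident
# touched, instead of rolling the whole dataset back to an earlier state.
def granular_restore(current: dict, backup: dict, affected_ids: set) -> dict:
    restored = dict(current)
    for record_id in affected_ids:
        if record_id in backup:
            restored[record_id] = backup[record_id]   # overwrite only damaged records
        else:
            restored.pop(record_id, None)             # record did not exist at backup time
    return restored

# current = {"a": "tampered", "b": "ok", "c": "injected by attacker"}
# backup  = {"a": "original", "b": "ok"}
# granular_restore(current, backup, {"a", "c"})
# -> {"a": "original", "b": "ok"}   # "b" untouched, "c" removed, "a" restored
```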

Together, these measures demonstrate that safeguarding SaaS environments has evolved into a challenging multidimensional task - one that requires continuous coordination between technology teams, information security leaders, and risk committees in order to ensure that innovation can take place in a secure and scalable manner. 

Organizations increasingly rely on cloud applications to conduct business, which makes SaaS risk management a significant challenge for security vendors hoping to meet enterprise demands. Businesses now need more than simple discovery tools that merely identify which applications are in use. 

There is a growing expectation that platforms will classify SaaS tools accurately, assess their security postures, and account for the rapidly growing presence of AI assistants and large language model-based applications, which can now operate independently across corporate environments. This shift has created a need for enriched SaaS intelligence: an advanced level of insight that allows vendors to provide services that go beyond basic visibility. 

Incorporating detailed application classification, function-level profiling, dynamic risk scoring, and the detection of shadow SaaS and unmanaged AI-driven services gives security providers a more comprehensive and relevant platform, one that enables a more accurate assessment of an organization's risks. 
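
As a toy example of dynamic risk scoring, the sketch below combines a handful of classification signals into a single comparable score. The attributes and weights are illustrative assumptions, not any vendor's actual model; the point is simply that classification output can be folded into a ranking.

```python
# Toy risk-scoring sketch: the attributes and weights are illustrative assumptions.
WEIGHTS = {
    "handles_pii": 3,
    "ai_enabled": 2,
    "unmanaged": 4,          # discovered outside IT procurement (shadow SaaS)
    "no_sso": 2,
    "broad_oauth_scopes": 3,
}

def risk_score(app: dict) -> int:
    return sum(weight for attr, weight in WEIGHTS.items() if app.get(attr))

apps = [
    {"name": "crm", "handles_pii": True, "ai_enabled": True, "no_sso": False},
    {"name": "note-taker", "ai_enabled": True, "unmanaged": True, "broad_oauth_scopes": True},
]
# for app in sorted(apps, key=risk_score, reverse=True):
#     print(app["name"], risk_score(app))   # the unmanaged AI note-taker outranks the managed CRM
```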

Vendors that integrate enriched SaaS application insights into their architectures will gain a competitive edge as they begin to address the next generation of SaaS and AI-related risks, and businesses that adopt those insights can close persistent blind spots. 

In an increasingly AI-enabled world, it is the ability of platforms to anticipate emerging vulnerabilities, rather than merely respond to them, that will determine which remain trusted partners in safeguarding enterprise ecosystems. 

A company's path forward will ultimately be shaped by its ability to embrace security as a strategic enabler rather than a roadblock to innovation. By weaving continuous monitoring, identity-centric controls, enriched SaaS intelligence, and AI-driven automation into their operational fabric, enterprises can modernize at speed without compromising trust or resilience. 

Companies that invest now in strengthening governance, enforcing data discipline, and demanding greater transparency from vendors will have the greatest opportunity to take full advantage of SaaS and agentic AI while navigating the risks of an increasingly volatile digital future.

FBI Warns of Cybercriminals Impersonating IC3 to Steal Personal Data

 

The FBI has issued a public service announcement warning that cybercriminals are impersonating the FBI’s Internet Crime Complaint Center (IC3) and even cloning its website to steal victims’ personal and financial data. Attackers are exploiting public trust in federal law enforcement by creating fake IC3-branded domains and lookalike reporting portals, then driving victims to these sites via phishing emails, messages, and search engine manipulation so people think they are filing a legitimate cybercrime report. 

The alert, referenced as PSA I-091925, describes threat actors spoofing the official IC3 website and related communications with the goal of harvesting names, home addresses, phone numbers, email addresses, and banking details under the pretext of gathering evidence for an investigation or helping recover lost funds. The FBI stresses that visiting these fake sites or responding to unsolicited “IC3” outreach could lead not only to identity theft and financial fraud but also to further compromise through follow-on scams using the stolen data.

Security experts situate this campaign within a broader surge in impersonation attacks, noting that law enforcement, government agencies, and major brands have all been targets of cloned sites and spoofed communications, often enhanced by AI to appear more convincing. They highlight that scammers may blend IC3 impersonation with other fraud patterns, such as bogus refund or recovery services, “phantom hacker” style tech-support narratives, or messages claiming to fix account compromises, all framed as official FBI assistance. 

The FBI has issued guidelines to safeguard Americans from the phishing campaign. The real IC3 does not charge fees, will never ask for payment or direct victims to third-party companies to recover funds, and does not operate any official presence on social media. Genuine IC3 reporting should be done only through the official ic3.gov domain, accessed by typing the URL directly into the browser or using trusted bookmarks, rather than by clicking links in unsolicited messages or search ads. 
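
For teams that want to automate part of that check, a minimal Python sketch of verifying a link against the legitimate ic3.gov domain might look like the following. It is only an illustration of the "check the domain before clicking" advice, not an FBI-provided tool.

```python
# Minimal sketch of a link check against the legitimate IC3 domain. Lookalike
# domains such as ic3-gov.com or ic3.gov.example.net fail the check.
from urllib.parse import urlparse

OFFICIAL_DOMAIN = "ic3.gov"

def is_official_ic3_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    return host == OFFICIAL_DOMAIN or host.endswith("." + OFFICIAL_DOMAIN)

# is_official_ic3_link("https://www.ic3.gov/Home/FileComplaint")  -> True
# is_official_ic3_link("https://ic3.gov.report-fraud.example")    -> False
# is_official_ic3_link("https://ic3-gov.com")                     -> False
```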

Additionally, to mitigate risk, the FBI recommends treating any unexpected communication claiming to be from the FBI or IC3 with skepticism, independently verifying contact details through official channels, and avoiding sharing sensitive information or making payments under pressure. The announcement closes by urging individuals and organizations to train staff to recognize impersonation scams, double-check domains and email addresses, and promptly report suspected fake FBI or IC3 activity using confirmed, legitimate FBI contact points.
