Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label Technology. Show all posts

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

Security Breach Reveals "Catwatchful" Spyware is Snooping on Users

A security bug in a stealthy Android spyware operation, “Catwatchful,” has exposed full user databases affecting its 62,000 customers and also its app admin. The vulnerability was found by cybersecurity expert Eric Daigle reported about the spyware app’s full database of email IDs and plaintext passwords used by Catwatchful customers to access stolen data from the devices of their victims. 

Most of the victims were based in India, Argentina, Peru, Mexico, Colombia, Bolivia, and Ecuador. A few records date back to 2018. The leaked database also revealed the identity of the Catwatchful admin called Omar Soca Char.

The Catwatchful database also revealed the identity of the spyware operation’s administrator, Omar Soca Charcov, a developer based in Uruguay.

About Catwatchful

Catwatchful is a spyware that pretends to be a child monitoring app, claiming to be “invisible and can not be detected,” while it uploads the victim’s data to a dashboard accessible to the person who planted the app. The stolen data includes real-time location data, victims’ photos, and messages.  The app can also track live ambient audio from the device’s mic and access the phone camera (both front and rear).

Catwatchful and similar apps are banned on app stores, and depend on being downloaded and deployed by someone having physical access to a victim’s phone. These apps are famous as “stalkerware” or “spouseware” as they are capable of unauthorized and illegal non-consensual surveillance of romantic partners and spouses. 

Rise of spyware apps

The Catwatchful incident is the fifth and latest in this year’s growing list of stalkerware scams that have been breached, hacked, or had their data exposed. 

How was the spyware found?

Daigle has previously discovered stalkerware exploits. Catwatchful uses a custom-made API, which the planted app uses to communicate to send data back to Catwatchful servers. The stalkerware also uses Google Firebase to host and store stolen data. 

According to Techradar, the “data was stored on Google Firebase, sent via a custom API that was unauthenticated, resulting in open access to user and victim data. The report also confirms that, although hosting had initially been suspended by HostGator, it had been restored via another temporary domain."

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch Passwords, Use Passkeys to Secure Your Account

Ditch passwords, use passkeys

Microsoft and Google users, in particular, have been warned about ditching passwords for passkeys. Passwords are easy to steal and can unlock your digital life. Microsoft has been at the forefront, confirming it will delete passwords for more than a billion users. Google, too, has warned that most of its users will have to add passkeys to their accounts. 

What are passkeys?

Instead of a username and password, passkeys use our device security to log into our account. This means that there is no password to hack and no two-factor authentication codes to bypass, making it phishing-resistant.

At the same time, the Okta team warned that it found threat actors exploiting v0, an advanced GenAI tool made by Vercelopens, to create phishing websites that mimic real sign-in webpages

Okta warns users to not use passwords

A video shows how this works, raising concerns about users still using passwords to sign into their accounts, even when backed by multi-factor authentication, and “especially if that 2FA is nothing better than SMS, which is now little better than nothing at all,” according to Forbes. 

According to Okta, “This signals a new evolution in the weaponization of GenAI by threat actors who have demonstrated an ability to generate a functional phishing site from simple text prompts. The technology is being used to build replicas of the legitimate sign-in pages of multiple brands, including an Okta customer.”

Why are passwords not safe?

It is shocking how easy a login webpage can be mimicked. Users should not be surprised that today’s cyber criminals are exploiting and weaponizing GenAI features to advance and streamline their phishing attacks. AI in the wrong hands can have massive repercussions for the cybersecurity industry.

According to Forbes, “Gone are the days of clumsy imagery and texts and fake sign-in pages that can be detected in an instant. These latest attacks need a technical solution.”

Users are advised to add passkeys to their accounts if available and stop using passwords when signing in to their accounts. Users should also ensure that if they use passwords, they should be long and unique, and not backed up by SMS 2-factor authentication. 

Microsoft Phases Out Password Autofill in Authenticator App, Urges Move to Passkeys for Stronger Security

 

Microsoft is ushering in major changes to how users secure their accounts, declaring that “the password era is ending” and warning that “bad actors know it” and are “desperately accelerating password-related attacks while they still can.”

These updates, rolling out immediately, impact the Microsoft Authenticator app. Previously, the app let users securely store and autofill passwords on apps and websites you visit on your phone. However, starting this month, “you will not be able to use autofill with Authenticator.”

A more significant shift is just weeks away. “From August,” Microsoft cautions, “your saved passwords will no longer be accessible in Authenticator.” Users have until August 2025 to transfer their stored passwords elsewhere, or risk losing access altogether. As the company emphasized, “any generated passwords not saved will be deleted.”

These moves are part of Microsoft’s broader initiative to phase out traditional passwords in favor of passkeys. The tech giant, alongside Google and other industry leaders, points out that passwords represent a major security vulnerability. Despite common safeguards like two-factor authentication (2FA), account credentials can still be intercepted or compromised.

Passkeys, by contrast, bind account access to device-level security, requiring biometrics or a PIN to log in. This means there’s no password to steal, phish, or share. The FIDO Alliance explains: “passkeys are phishing resistant and secure by design. They inherently help reduce attacks from cybercriminals such as phishing, credential stuffing, and other remote attacks. With passkeys there are no passwords to steal and there is no sign-in data that can be used to perpetuate attacks.”

For users currently relying on Authenticator’s password storage, Microsoft advises moving credentials to the Edge browser or exporting them to another password manager. But more importantly, this is a chance to upgrade your key accounts to passkeys.

Authenticator will continue to support passkeys going forward. Microsoft advises: “If you have set up Passkeys for your Microsoft Account, ensure that Authenticator remains enabled as your Passkey Provider. Disabling Authenticator will disable your passkeys.”

The Critical Role of Proxy Servers in Modern Digital Infrastructure

In order to connect an individual user or entire network to the broader internet, a proxy server serves as an important gateway that adds a critical level of protection to the broader internet at the same time. In order to facilitate the connection between end users and the online resources they access, proxy servers act as intermediaries between them. 

They receive requests from the user for web content, obtain the information on their behalf, and forward the information to the client. As a result of this process, not only is network traffic streamlined, but internal IP addresses can be hidden, ensuring that malicious actors have a harder time targeting specific devices directly. 

By filtering requests and responses, proxy servers play a vital role in ensuring the safety of sensitive information, ensuring the enforcement of security policies, and ensuring the protection of privacy rights. 

The proxy server has become an indispensable component of modern digital ecosystems, whether it is incorporated into corporate infrastructures or used by individuals seeking anonymity when conducting online activities. As a result of their ability to mitigate cyber threats, regulate access, and optimize performance, businesses and consumers alike increasingly rely on these companies in order to maintain secure and efficient networks.

Whether it is for enterprises or individuals, proxy servers have become a crucial asset, providing a versatile foundation for protecting data privacy, reinforcing security measures, and streamlining content delivery, offering a variety of advantages for both parties. In essence, proxy servers are dedicated intermediaries that handle the flow of internet traffic between a user's device and external servers, in addition to facilitating the flow of information between users and external servers. 

It is the proxy server that receives a request initiated by an individual—like loading a web page or accessing an online service—first, then relays the request to its intended destination on that individual's behalf. In the remote server, a proxy is the only source of communication with the remote server, as the remote server recognizes only the proxy's IP address and not the source's true identity or location. 

In addition to masking the user's digital footprint, this method adds a substantial layer of anonymity to the user's digital footprint. A proxy server not only hides personal details but also speeds up network activity by caching frequently requested content, filtering harmful or restricted content, and controlling bandwidth. 

Business users will benefit from proxy services since they are able to better control their web usage policies and will experience a reduction in their exposure to cyber threats. Individuals will benefit from proxy services because they can access region-restricted resources and browse more safely. 

Anonymity, performance optimization, and robust security have all combined to become the three most important attributes associated with proxy servers, which allow users to navigate the internet safely and efficiently, no matter where they are. It is clear from the definition that proxy servers and virtual private networks (VPNs) serve the same purpose as intermediaries between end users and the broader Internet ecosystem, but that their scope, capabilities, and performance characteristics are very different from one another. 

As the name suggests, proxy servers are primarily created to obscure a user's IP address by substituting it with their own, thus enabling users to remain anonymous while selectively routing particular types of traffic, for example, web browser requests or application data. 

Proxy solutions are targeted towards tasks that do not require comprehensive security measures, such as managing content access, bypassing regional restrictions, or balancing network loads, so they are ideal for tasks requiring light security measures. By contrast, VPNs provide an extremely robust security framework by encrypting all traffic between an individual's computer and a server, thus providing a much more secure connection. 

Because VPNs protect sensitive data from interception or surveillance, they are a great choice for activities that require heightened privacy, such as secure file transfers and confidential communication, since they protect sensitive data from interception or surveillance. While the advanced encryption is used to strengthen VPN security, it can also cause latency and reduce connection speeds, which are not desirable for applications that require high levels of performance, such as online gaming and media streaming. 

Proxy servers are straightforward to operate, but they are still highly effective in their own right. A device that is connected to the internet is assigned a unique Internet Protocol (IP) address, which works a lot like a postal address in order to direct any online requests. When a user connects to the internet using a proxy, the user’s device assumes that the proxy server’s IP address is for all outgoing communications. 

A proxy then passes the user’s request to the target server, retrieves the required data, and transmits the data back to the user’s browser or application after receiving the request. The originating IP address is effectively concealed with this method, minimizing the chance that the user will be targeted, tracked, profiled, or tracked through this method. 

Through masking network identities and selectively managing traffic, proxy servers play a vital role in maintaining user privacy, ensuring compliance, and enabling secure, efficient access to online resources. It has been shown that proxy servers have a number of strategic uses that go far beyond simply facilitating web access for businesses and individuals. 

Proxy servers are effective tools in both corporate and household settings for regulating and monitoring internet usage and control. For example, businesses can configure proxy servers to limit employee access to non-work related websites during office hours, while parents use similar controls to limit their children from seeing inappropriate content. 

 As part of this oversight feature, administrators can log all web activity, enabling them to monitor browsing behaviour, even in instances where specific websites are not explicitly blocked. Additionally, proxy servers allow for considerable bandwidth optimisation and faster network performance in addition to access management. 

The caching of frequently requested websites on proxies reduces redundant data transfers and speeds up load times whenever a large number of people request the same content at once. Doing so not only conserves bandwidth but also allows for a smoother, more efficient browsing experience. Privacy remains an additional compelling advantage as well. 

When a user's IP address is replaced with their own by a proxy server, personal information is effectively masked, and websites are not able to accurately track users' locations or activities if they don't know their IP address. The proxy server can also be configured to encrypt web requests, keeping sensitive data safe from interception, as well as acting as a gatekeeper, blocking access to malicious domains and reducing cybersecurity threats. 

They serve as gatekeepers, thereby reducing the risk of data breaches. The proxy server allows users, in addition to bypassing regional restrictions and censorship, to route traffic through multiple servers in different places. This allows individuals to access resources that would otherwise not be accessible while maintaining anonymity. In addition, when proxies are paired up with Virtual Private Networks (VPN), they make it even more secure and controlled to connect to corporate networks. 

In addition to forward proxies, which function as gateways for internal networks, they are also designed to protect user identities behind a single point of entry. These proxies are available in a wide variety of types, each of which is suited to a specific use case and specific requirements. 

It is quite common to deploy transparent proxies without the user's knowledge to enforce policies discreetly. They deliver a similar experience to direct browsing and are often deployed without the user's knowledge. The anonymous proxy and the high-anonymity proxy both excel at concealing user identities, with the former removing all identifying information before connecting to the target website. 

By using distortion proxies, origins are further obscured by giving false IP addresses, whereas data centre proxies provide fast, cost-effective access with infrastructure that is not dependent upon an internet service provider. It is better to route traffic through authentic devices instead of public or shared proxies but at a higher price. Public or shared proxies are more economical, but they suffer from performance limitations and security issues. 

SSL proxies are used to encrypt data for secure transactions and improve search rankings, while rotating proxies assign dynamic IP addresses for the collection of large amounts of data. In addition, reverse proxies provide additional security and load distribution to web servers by managing incoming traffic. Choosing the appropriate proxy means balancing privacy, speed, reliability, and cost. It is important to note that many factors need to be taken into account when choosing a proxy. 

The use of forward proxies has become significantly more prevalent since web scraping operations combined them with distributed residential connections, which has resulted in an increasing number of forward proxies being created. In comparison to sending thousands of requests for data from a centralized server farm that might be easily detected and blocked, these services route each request through an individual home device instead. 

By using this strategy, it appears as if the traffic originated organically from private users, rather than from an organized scraping effort that gathered vast amounts of data from public websites in order to generate traffic. This can be achieved by a number of commercial scraping platforms, which offer incentives to home users who voluntarily provide a portion of their bandwidth via installed applications to scrape websites. 

On the other hand, malicious actors achieve a similar outcome by installing malware on unwitting devices and exploiting their network resources covertly. As part of regulatory mandates, it is also common for enterprises or internet service providers to implement transparent proxies, also known as intercepting proxies. These proxies quietly record and capture user traffic, which gives organisations the ability to track user behaviour or comply with legal requirements with respect to browsing habits. 

When advanced security environments are in place, transparent proxies are capable of decrypting encrypted SSL and TLS traffic at the network perimeter, thoroughly inspecting its contents for concealed malware, and then re-encrypting the data to allow it to be transmitted to the intended destination. 

A reverse proxy performs an entirely different function, as it manages inbound connections aimed at the web server. This type of proxy usually distributes requests across multiple servers as a load-balancing strategy, which prevents performance bottlenecks and ensures seamless access for end users, especially during periods of high demand. This type of proxy service is commonly used for load balancing. 

In the era of unprecedented volumes of digital transactions and escalating threat landscape, proxy servers are more than just optional safeguards. They have become integral parts of any resilient network strategy that is designed for resilience. A strategic deployment of proxy servers is extremely important given that organizations and individuals are moving forward in an environment that is shaped by remote work, global commerce, and stringent data protection regulations, and it is imperative to take proper consideration before deploying proxy servers. 

The decision-makers of organizations should consider their unique operational needs—whether they are focusing on regulatory compliance, optimizing performance, or gathering discreet intelligence—and choose proxy solutions that align with these objectives without compromising security or transparency in order to achieve these goals. 

As well as creating clear governance policies to ensure responsible use, prevent misuse, and maintain trust among stakeholders, it is crucial to ensure that these policies are implemented. Traditionally, proxy servers have served as a means of delivering content securely and distributing traffic while also fortifying privacy against sophisticated tracking mechanisms that make it possible for users to operate in the digital world with confidence. 

As new technologies and threats continue to develop along with the advancement of security practices, organizations and individuals will be better positioned to remain agile and protect themselves as technological advancements and threats alike continue to evolve.

Microsoft Defender for Office 365 Will Now Block Email Bombing Attacks



Microsoft Defender for Office 365 Will Now Block Email Bombing Attacks

Microsoft Defender for Office 365, a cloud-based email safety suite, will automatically detect and stop email-bombing attacks, the company said.  Previously known as Office 365 Advanced Threat Protection (Office 365 ATP), Defender for Office 365 safeguards businesses operating in high-risk sectors and dealing with advanced threat actors from harmful threats originating from emails, collaboration tools, and links. 

"We're introducing a new detection capability in Microsoft Defender for Office 365 to help protect your organization from a growing threat known as email bombing," Redmond said in a Microsoft 365 message center update. These attacks flood mailboxes with emails to hide important messages and crash systems. The latest ‘Mail Bombing’ identification will spot and block such attempts, increasing visibility for real threats. 

About the new feature

The latest feature was rolled out in June 2025, toggled as default, and would not require manual configuration. Mail Bombing will automatically send all suspicious texts to the Junk folder. It is now available for security analysts and admins in Threat Explorer, Advanced Hunting, the Email entity page, the Email summary panel, and the Email entity page. 

About email bombing attacks

In mail bombing campaigns, the attackers spam their victims’ emails with high volumes of messages. This is done by subscribing users to junk newsletters and using specific cybercrime services that can send thousands or tens of thousands of messages within minutes. The goal is to crash email security systems as a part of social engineering attacks, enabling ransomware attacks and malware to extract sensitive data from victims. These attacks have been spotted for over a year, and used by ransomware gangs. 

Mode of operation

BlackBast gang first used email bombing to spam their victims’ mailboxes. The attackers would later follow up and pretend to be IT support teams to lure victims into allowing remote access to their devices via AnyDesk or the default Windows Quick Assist tool. 

After gaining access, threat actors install malicious tools and malware that help them travel laterally through the corporate networks before installing ransomware payloads.

A Simple Guide to Launching GenAI Successfully

 


Generative AI (GenAI) is one of today’s most exciting technologies, offering potential to improve productivity, creativity, and customer service. But for many companies, it becomes like a forgotten gym membership, enthusiastically started, but quickly abandoned.

So how can businesses make sure they get real value from GenAI instead of falling into the trap of wasted effort? Success lies in four key steps: building a strong plan, choosing the right partners, launching responsibly, and tracking the impact.


1. Set Up a Strong Governance Framework

Before using GenAI, businesses must create clear rules and processes to use it safely and responsibly. This is called a governance framework. It helps prevent problems like privacy violations, data leaks, or misuse of AI tools.

This framework should be created by a group of leaders from different departments—legal, compliance, cybersecurity, data, and business operations. Since AI can affect many parts of a company, it’s important that leadership supports and oversees these efforts.

It’s also crucial to manage data properly. Many companies forget to prepare their data for AI tools. Data should be accurate, anonymous where needed, and well-organized to avoid security risks and legal trouble.

Risk management must be proactive. This includes reducing bias in AI systems, ensuring data security, staying within legal boundaries, and preventing misuse of intellectual property.


2. Choose Technology Partners Carefully

GenAI tools are not like regular software. When selecting a provider, businesses should look beyond basic features and check how the provider handles data, ownership rights, and ethical practices. A lack of transparency is a warning sign.

Companies should know where their data is stored, who can access it, and who owns the results produced by the AI tool. It’s also important to avoid being trapped in systems that make it difficult to switch vendors later. Always review agreements carefully, especially around copyright and data rights.


3. Launch With Care and Strategy

Once planning is complete, the focus should shift to thoughtful execution. Start with small projects that can demonstrate value quickly. Choose use cases where GenAI can clearly improve efficiency or outcomes.

Data used in GenAI must be organized and secured so that no sensitive information is exposed. Also, employees must be trained to work with these tools effectively. Skills like writing proper prompts and verifying AI-generated content are essential.

To build trust and encourage adoption, leaders should clearly explain why GenAI is being introduced and how it will help, not replace employees. GenAI should support teams and improve their performance, not reduce jobs.


4. Track Success and Business Value

Finally, companies need to measure the results. Success isn’t just about using the technology— it’s about making a real impact.

Set clear goals and use simple metrics, like productivity improvements, customer feedback, or employee satisfaction. GenAI should lead to better outcomes for both teams and clients, not just technical performance.

To move beyond the GenAI buzz and unlock real value, companies must approach it with clear goals, responsible use, and long-term thinking. With the right foundation, GenAI can be more than just hype, it can be a lasting asset for innovation and growth.



California Residents Are Protesting Against Waymo Self-Driving Cars

 

Even though self-driving cars are becoming popular worldwide, not everyone is happy about it. In Santa Monica, California, some people who were unfortunate enough to live near the Waymo depot found a terrible side effect of Alphabet's self-driving cars: their incredibly annoying backing noise.

Local laws mandate that autonomous cars make a sound anytime they backup, which is something that frequently occurs when Waymos return to base to recharge, as the Los Angeles Times recently revealed. 

"It is bothersome. This neighbourhood already has a lot of noise, and I don't think it's fair to add another level to it," a local woman told KTLA. "I know some people have been kept up at night, and woken up in the middle of the night.” 

Using a technique pioneered by activist group Safe Street Rebel in 2023, when self-driving cars first showed up on San Francisco streets, Santa Monica citizens blocked Waymos with traffic cones, personal automobiles, and even their own bodies. 

Dubbed "coning," the seemingly petty tactic evolved after the California Public Utilities Commission decided 3-1 to allow Waymo and Cruise to operate self-driving vehicles in California neighbourhoods at all times. Prior to the vote, public comments lasted more than six hours, indicating that the proposal was not well received.

Safe Street Rebel conducted a period of direct action known as "The Week of the Cone" in protest of the plan to grant Cruise and Waymo complete control over public streets. The California DMV quickly revoked Cruise's licence to drive in the state as a result of the group's campaign, in addition to multiple mishaps involving autonomous vehicles. 

"This is a clear victory for direct action and the power of people getting in the street," Safe Street Rebel noted in a statement. "Our shenanigans made this an international story and forced a spotlight on the many issues with [self-driving cars].”

However, Waymo isn't going down without a fight back in Santa Monica. According to the LA Times, the company has sued several peaceful protesters and has even called the local police to drive out angry residents.

"My client engaged in justifiable protest, and Waymo attempted to obtain a restraining order against him, which was denied outright," stated Rebecca Wester, an attorney representing a local resident. 

The most recent annoyance Waymo has imposed on its neighbours is the backup noise. Residents of San Francisco reported hearing horns blaring from a nearby Waymo depot nine months ago. This happens when dozens of the cars congest the small parking lot.

Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs— a phenomenon known as “hallucination”— which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

How AI Impacts KYC and Financial Security

How AI Impacts KYC and Financial Security

Finance has become a top target for deepfake-enabled fraud in the KYC process, undermining the integrity of identity-verification frameworks that help counter-terrorism financing (CTF) and anti-money laundering (AML) systems.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street’s FINRA has warned that deepfake audio and video scams can cause losses of $40 billion by 2027 in the finance sector.

Biometric safety measures do not work anymore. A 2024 Regula research revealed that 49% businesses throughout industries such as fintech and banking have faced fraud attacks using deepfakes, with average losses of $450,000 per incident. 

As these numbers rise, it becomes important to understand how deepfake invasion can be prevented to protect customers and the financial industry globally. 

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank reported over 1,100 attempts to escape its digital KYC loan-application process within 3 months, cybersecurity firm Group-IB reports.

Threat actors teamed AI-powered face-swapping with virtual-camera tools to imitate the bank’s  liveness-detection controls, despite the bank’s “robust, multi-layered security measures." According to Forbes, the estimated losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”

The AI-driven face-swapping tools allowed actors to replace the target’s facial features with those of another person, allowing them to exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How does the deepfake KYC fraud work

Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. The date is used to mimic identities. 

After data acquisition, scammers use deepfake technology to change identity documents, swapping photos, modifying details, and re-creating entire ID to escape KYC checks.

Threat actors then use virtual cameras and prerecorded deepfake videos, helping them avoid security checks by simulating real-time interactions. 

This highlights that traditional mechanisms are proving to be inadequate against advanced AI scams. A study revealed that every 5 minutes, one deepfake attempt was made. Only 0.1 of people could spot deepfakes. 

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many different types of media, including text, image, audio, music, and even video. As opposed to conventional machine learning models, which are based on executing specific tasks, generative AI systems learn patterns and structures from large datasets and are able to produce outputs that aren't just original, but are sometimes extremely realistic as well. 

It is because of this ability to simulate human-like creativity that generative AI has become an industry leader in technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy. As generative AI tools reshape content creation workflows, they produce compelling graphics and copy at scale in a way that transforms the way content is created. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Generative AI, on the other hand, is unpredictable and adaptive, which means that organisations are able to explore new ideas and achieve efficiencies that traditional systems are unable to offer. There is an increasing need for enterprises to understand the capabilities and the risks of this powerful technology as adoption accelerates. 

Understanding these capabilities has become an essential part of staying competitive in a digital world that is rapidly changing. In addition to reproducing human voices and creating harmful software, generative artificial intelligence is rapidly lowering the barriers for launching highly sophisticated cyberattacks that can target humans. There is a significant threat from the proliferation of deepfakes, which are realistic synthetic media that can be used to impersonate individuals in real time in convincing ways. 

In a recent incident in Italy, cybercriminals manipulated and deceived the Defence Minister Guido Crosetto by leveraging advanced audio deepfake technology. These tools demonstrate the alarming ability of such tools for manipulating and deceiving the public. Also, a finance professional recently transferred $25 million after being duped into transferring it by fraudsters using a deepfake simulation of the company's chief financial officer, which was sent to him via email. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Cybercriminals are further able to weaponise these messages through automation, as this enables them to create and distribute a huge volume of tailored lures that are tailored to match the profile and behaviour of each target dynamically. Using the power of AI to generate large language models (LLMs), attackers have also revolutionised malicious code development. 

A large language model can provide attackers with the power to design ransomware, improve exploit techniques, and circumvent conventional security measures. Therefore, organisations across multiple industries have reported an increase in AI-assisted ransomware incidents, with over 58% of them stating that the increase has been significant.

It is because of this trend that security strategies must be adapted to address threats that are evolving at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. While it has been demonstrated that employee awareness remains an essential defence, studies have indicated that only 24% of organisations have implemented continuous cyber awareness programs, which is a significant amount. 

As companies become more sophisticated in their security efforts, they should update training initiatives to include practical advice on detecting hyper-personalised phishing attempts, detecting subtle signs of deepfake audio and identifying abnormal system behaviours that can bypass automated scanners in order to protect themselves from these types of attacks. Providing a complement to human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

In order to protect against AI-driven phishing campaigns, DuckDuckGoose Suite, for example, uses behavioural analytics and threat intelligence to prevent AI-based phishing campaigns from being initiated. Tessian, on the other hand, employs behavioural analytics and threat intelligence to detect synthetic media. As well as disrupting malicious activity in real time, these technologies also provide adaptive coaching to assist employees in developing stronger, instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will have the capacity to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal actions have underscored the complexity of balancing AI use with privacy requirements. It was raised by OpenAI that when a judge ordered ChatGPT to keep all user interactions, including deleted chats, they might inadvertently violate their privacy commitments if they were forced to keep data that should have been wiped out.

AI companies face many challenges when delivering enterprise services, and this dilemma highlights the challenges that these companies face. OpenAI and Anthropic are platforms offering APIs and enterprise products that often include privacy safeguards; however, individuals using their personal accounts are exposed to significant risks when handling sensitive information that is about them or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of these tools, and they should not upload proprietary or confidential materials unless specifically authorised by the company. Another critical concern is the phenomenon of AI hallucinations that have occurred in recent years. This is because large language models are constructed to predict language patterns rather than verify facts, which can result in persuasively presented, but entirely fictitious content.

As a result of this, there have been several high-profile incidents that have resulted, including fabricated legal citations in court filings, as well as invented bibliographies. It is therefore imperative that human review remains part of professional workflows when incorporating AI-generated outputs. Bias is another persistent vulnerability.

Due to the fact that artificial intelligence models are trained on extensive and imperfect datasets, these models can serve to mirror and even amplify the prejudices that exist within society as a whole. As a result of the system prompts that are used to prevent offensive outputs, there is an increased risk of introducing new biases, and system prompt adjustments have resulted in unpredictable and problematic responses, complicating efforts to maintain a neutral environment. 

Several cybersecurity threats, including prompt injection and data poisoning, are also on the rise. A malicious actor may use hidden commands or false data to manipulate model behaviour, thus causing outputs that are inaccurate, offensive, or harmful. Additionally, user error remains an important factor as well. Instances such as unintentionally sharing private AI chats or recording confidential conversations illustrate just how easy it is to breach confidentiality, even with simple mistakes.

It has also been widely reported that intellectual property concerns complicate the landscape. Many of the generative tools have been trained on copyrighted material, which has raised legal questions regarding how to use such outputs. Before deploying AI-generated content commercially, companies should seek legal advice. 

As AI systems develop, even their creators are not always able to predict the behaviour of these systems, leaving organisations with a challenging landscape where threats continue to emerge in unexpected ways. However, the most challenging risk is the unknown. The government is facing increasing pressure to establish clear rules and safeguards as artificial intelligence moves from the laboratory to virtually every corner of the economy at a rapid pace. 

Before the 2025 change in administration, there was a growing momentum behind early regulatory efforts in the United States. For instance, Executive Order 14110 outlined the appointment of chief AI officers by federal agencies and the development of uniform guidelines for assessing and managing AI risks. As a result of this initiative, a baseline of accountability for AI usage in the public sector was established. 

A change in strategy has taken place in the administration's approach to artificial intelligence since they rescinded the order. This signalled a departure from proactive federal oversight. The future outlook for artificial intelligence regulation in the United States is highly uncertain, however. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

As a result of this measure becoming law, it could effectively halt local and regional governance at a time when AI is gaining a greater influence across practically all industries. Meanwhile, the European Union currently seems to be pursuing a more consistent approach to AI. 

As of March 2024, a comprehensive framework titled the Artificial Intelligence Act was established. This framework categorises artificial intelligence applications according to the level of risk they pose and imposes strict requirements for applications that pose a significant risk, such as those in the healthcare field, education, and law enforcement. 

Also included in the legislation are certain practices, such as the use of facial recognition systems in public places, that are outright banned, reflecting a commitment to protecting the individual's rights. In terms of how AI oversight is defined and enforced, there is a widening gap between regions as a result of these different regulatory strategies. 

Technology will continue to evolve, and to ensure compliance and manage emerging risks effectively, organisations will have to remain vigilant and adapt to the changing legal landscape as a result of this.

U.S. Senators Propose New Task Force to Tackle AI-Based Financial Scams

 


In response to the rising threat of artificial intelligence being used for financial fraud, U.S. lawmakers have introduced a new bipartisan Senate bill aimed at curbing deepfake-related scams.

The bill, called the Preventing Deep Fake Scams Act, has been brought forward by Senators from both political parties. If passed, it would lead to the formation of a new task force headed by the U.S. Department of the Treasury. This group would bring together leaders from major financial oversight bodies to study how AI is being misused in scams, identity theft, and data-related crimes and what can be done about it.

The proposed task force would include representatives from agencies such as the Federal Reserve, the Consumer Financial Protection Bureau, and the Federal Deposit Insurance Corporation, among others. Their goal will be to closely examine the growing use of AI in fraudulent activities and provide the U.S. Congress with a detailed report within a year.


This report is expected to outline:

• How financial institutions can better use AI to stop fraud before it happens,

• Ways to protect consumers from being misled by deepfake content, and

• Policy and regulatory recommendations for addressing this evolving threat.


One of the key concerns the bill addresses is the use of AI to create fake voices and videos that mimic real people. These deepfakes are often used to deceive victims—such as by pretending to be a friend or family member in distress—into sending money or sharing sensitive information.

According to official data from the Federal Trade Commission, over $12.5 billion was stolen through fraud in the past year—a 25% increase from the previous year. Many of these scams now involve AI-generated messages and voices designed to appear highly convincing.

While this particular legislation focuses on financial scams, it adds to a broader legislative effort to regulate the misuse of deepfake technology. Earlier this year, the U.S. House passed a bill targeting nonconsensual deepfake pornography. Meanwhile, law enforcement agencies have warned that fake messages impersonating high-ranking officials are being used in various schemes targeting both current and former government personnel.

Another Senate bill, introduced recently, seeks to launch a national awareness program led by the Commerce Department. This initiative aims to educate the public on how to recognize AI-generated deception and avoid becoming victims of such scams.

As digital fraud evolves, lawmakers are urging financial institutions, regulators, and the public to work together in identifying threats and developing solutions that can keep pace with rapidly advancing technologies.

Wyze Launches VerifiedView Metadata to Enhance Security After Past Data Breaches

 


Wyze’s security cameras and platform have earned praise from CNET reviewers in the past. However, over the last few years, recommendations for the company’s affordable cameras and related security products were tempered by a series of significant security breaches that raised concerns among experts and consumers alike.

More than a year has passed since those incidents, and Wyze has now introduced an advanced security feature called VerifiedView, designed to strengthen protections around user footage.

VerifiedView is a new metadata layer that applies to all content generated by Wyze cameras. Metadata refers to supplementary information attached to photos and videos, such as details about when and where they were captured, which helps systems search, organize, and identify files efficiently.

Wyze’s approach goes a step further. VerifiedView assigns every photo or video a unique identifier—an encrypted version of the user’s Wyze ID—that remains permanently tied to the account. Whenever someone tries to stream or view video through a Wyze account, their account identifier must match the one embedded in the metadata. If there is no match, access is denied. Live viewing functions the same way, ensuring that only the account that initially set up the camera can watch the footage.

While companies often embed metadata for various purposes, “this is the first time I've seen metadata used so clearly to manage video access and keep it from strange eyes.” This innovation is intended to directly address some of the most serious security issues, including past incidents in which unauthorized parties or employees were able to access private camera feeds.

Since the breaches and other security failures, Wyze has implemented several measures to bolster user safety and prevent similar problems. Key improvements include:

  • Automatic activation of two-factor authentication for all users, along with additional tools like OAuth, reCAPTCHA, and login abuse detection.
  • Investment in security resources provided by Amazon Web Services (AWS).
  • Expansion of Wyze’s security team to include more professionals dedicated to reviewing and strengthening code.
  • Regular penetration testing by firms such as Bitdefender, Google MASA, ioXT, and the NCC Group.

The introduction of a comprehensive cybersecurity training program for all employees.

“While I wish Wyze had started with security features like these, the changes are good to see.” For those evaluating options to protect their homes, these upgrades represent meaningful progress in Wyze’s approach to safeguarding customer data and privacy.

Meta Introduces Advanced AI Tools to Help Businesses Create Smarter Ads


Meta has rolled out a fresh set of AI-powered tools aimed at helping advertisers design more engaging and personalized promotional content. These new features include the ability to turn images into short videos, brand-focused image generation, AI-powered chat assistants, and tools that enhance shopping experiences within ads.

One of the standout additions is Meta’s video creation feature, which allows businesses to transform multiple images into animated video clips. These clips can include music and text, making it easier for advertisers to produce dynamic visual content without needing video editing skills. Because the videos are short, they’re less likely to appear distorted or lose quality, a common issue in longer AI-generated videos.

Currently, this feature is being tested with select business partners.

Another tool in development is “Video Highlights,” which uses AI to identify the most important parts of a video. Viewers will be able to jump directly to these key scenes, guided by short phrases and image previews chosen by the system. This can help businesses convey their product value more clearly and keep viewers engaged.

Meta is also enhancing its AI image creation tools. Advertisers will now be able to insert their logos and brand colors directly into the images generated by AI. This ensures that their brand identity stays consistent across all marketing content. Additionally, AI-generated ad text can now reflect the personality or style of the brand, offering a more customized tone in promotions.

Another major update is the introduction of “Business AIs”, specialized chat assistants embedded within ads. These bots are designed to answer common customer questions about a product or service. Available in both text and voice formats, these virtual assistants aim to improve customer interaction by addressing queries instantly and guiding users toward making a purchase.

Meta is also experimenting with new features like clickable call-to-action (CTA) stickers for Stories and Reels ads, and virtual try-on tools that use AI to display clothing on digital models of various body types.

These developments are part of Meta’s broader push to make advertising more efficient through automation. The company’s Advantage+ ad system is already showing results, with Meta reporting a 22% average increase in return on ad spend (ROAS) for brands using this approach. Advantage+ uses AI to analyze user behavior, optimize ad formats, and identify potential customers based on real-time data.

While AI is unlikely to replace human creativity entirely, these tools can simplify the ad creation process and help brands connect with their audiences more effectively. 

The True Cost of Legacy Software: A Comprehensive Look

 

Business leaders tend to stay with what they know. It's familiar, comfy, and—above all—seems trustworthy. However, this comfort zone can be costing us more than they realise when it comes to legacy software systems. 

Many leaders focus on the upfront costs of new technology while failing to consider the long-term implications of remaining with outdated systems. As technology advances, it's important to examine how past systems stack up against modern cloud-based options, particularly in terms of scalability, integration, and access to upcoming breakthroughs. 

True cost of legacy systems

The upfront expenses of sustaining legacy systems do not account for all of the challenges that firms should consider. These antiquated systems, for example, often rely on on-site physical servers, necessitating substantial infrastructure expenditure. Setting up a new server can cost up to $10,000, with additional costs for software licenses, maintenance, and support adding up quickly. 

These systems also incur additional operational costs, such as higher power consumption, heat output, supplemental cooling requirements, and a constant demand on bandwidth during data backup. 

Another often-overlooked expense is the knowledge reliance that these systems entail. When key IT personnel leave, they take with them the knowledge required to maintain and troubleshoot these outdated systems. Equally troubling is the increased IT complexity of managing server-based systems, particularly as an organisation grows and scales.

While the disadvantages of legacy software are widely known, there are some legitimate reasons why some organisations continue to use it—at least for the time being. Regulatory or compliance frameworks may require on-premises data storage or auditing transparency, which cloud providers cannot currently provide. In some circumstances, modernisation may be delayed out of necessity rather than choice. 

ROI of cloud-based platforms 

Think of modern cloud-based platforms as growth accelerators rather than cost-cutters. Cloud solutions include scalability, artificial intelligence, and automation. According to McKinsey, firms who go beyond basic cloud adoption and proactively integrate cloud into their operations could unleash up to $3 trillion in global value through faster product creation, better decision-making, and increased operational resilience. 

Cloud solutions also enable the use of open APIs, allowing enterprises to seamlessly integrate technologies. Unlike traditional software, which locks firms into rigid systems, contemporary cloud platforms with open APIs allow for unique technology stacks adapted to specific business requirements. This transition allows organisations to select best-in-class systems for finance, customer management, logistics, and marketing automation. 

These skills are especially important in healthcare, where interconnected systems can simplify operations and improve patient care. According to another McKinsey analysis, 62% of healthcare professionals feel generative AI offers the greatest potential to boost consumer engagement, but just 29% have begun to deploy it, indicating a considerable gap between opportunity and implementation.

Digital Safety 101: Essential Cybersecurity Tips for Everyday Internet Users

 9to5Mac is brought to you by Incogni: a service that helps you wipe your personal data—including your phone number, address, and email—from data brokers and people-search websites. With a 30-day money-back guarantee, Incogni offers peace of mind for anyone looking to guard their privacy.


1. Use a Password Manager

The old advice to create strong, unique passwords for each website still holds true—but is only realistic if you use a password manager. Fortunately, Apple’s built-in Passwords app makes this easy, and there are many third-party options too. Use these tools to generate and save complex passwords every time you sign up for a new service.

2. Update Old Passwords

Accounts created years ago may still have weak or repeated passwords. This makes you vulnerable to credential stuffing attacks—where hackers use stolen logins from one site to access others. Prioritize updating your passwords for financial services, Apple, Google, Amazon, and any accounts that have already been compromised. To check this, enter your email on Have I Been Pwned.

3. Enable Passkeys Where Available

Passkeys are becoming the modern alternative to passwords. Instead of storing a traditional password, your device uses Face ID or Touch ID to verify your identity, and only sends confirmation of that identity to the site—never the actual password. This reduces the risk of your credentials being hacked or stolen.

4. Use Two-Factor Authentication (2FA)

2FA provides an added layer of security by requiring a rolling code each time you log in. Avoid SMS-based 2FA—it's prone to SIM-swap attacks. Instead, opt for an authenticator app like Google Authenticator or use the built-in support in Apple’s Passwords app. Set this up using the QR code provided by the service.

5. Monitor Last Login Activity

Some platforms, especially banking apps, show the date and time of your last login. Get into the habit of checking this regularly. Unexpected logins are an immediate red flag and could signal that your account has been compromised.

6. Use a VPN on Public Wi-Fi

Public Wi-Fi networks can be unsafe and vulnerable to “Man-in-the-Middle” (MitM) attacks. These involve a rogue device impersonating a Wi-Fi hotspot to intercept your internet traffic. While HTTPS reduces the risk, using a VPN is still the best protection. Choose a trusted provider that maintains a no-logs policy and undergoes third-party audits. “I use NordVPN for this reason.”

7. Don’t Share Personal Info With AI Chatbots

Conversations with AI chatbots may be stored or used as training data. Avoid typing anything sensitive, such as passwords, addresses, or identification numbers—just as you wouldn’t post them publicly online.

8. Consider Data Removal Services

Your personal information may already be listed with data brokers, exposing you to spam and scams. Manually removing this data can be tedious, but services like Incogni can automate the process and reduce your digital footprint efficiently.

9. Verify Any Request for Money

If someone asks for money—even if it looks like a friend, family member, or colleague—double-check their identity using a separate communication method.

“If they emailed you, phone them. If they phoned you, email or message them.”

Also, if you're asked to send gift cards or wire money, it's almost always a scam. Be especially cautious if you're told a bank account has changed—confirm directly before transferring funds.

US National Security Leaders Embrace AI to Declassify Documents, Boost Productivity, and Reimagine Talent

 

Artificial intelligence is quickly becoming a powerful force within U.S. national security agencies, streamlining time-consuming tasks and boosting efficiency—from document declassification to enterprise-level productivity enhancements.

“We have released tens of thousands of documents related to the assassinations of JFK and Senator Robert F. Kennedy, and we have been able to do that through the use of AI tools far more quickly than what was done previously, which is to have humans go through and look at every single one of these pages,” said Director of National Intelligence Tulsi Gabbard during the AWS Summit this week.

Gabbard noted that AI deployments are aligned with the agency’s broader objective: ensuring timely access to information. Besides declassification, she cited practical implementations of AI in functions like human resources and internal systems. Notably, an AI chatbot has been rolled out agency-wide, unlocking further possibilities for innovation.

“We’ve made progress, but there’s so much room for growth and more application of AI and machine learning,” Gabbard said.

Agencies remain cautious, however, implementing checks and human oversight to maintain accuracy, compliance, and effectiveness. While the technology shows great promise, it is still evolving, and leaders are aware of the risks tied to generative and autonomous AI tools.

“There are things to be concerned about,” said Lakshmi Raman, Director of AI at the Central Intelligence Agency, referring to potential issues like model or data drift, lack of tool explainability, and concerns around trust.

Private sector leaders share these concerns, according to a recent Cloudera survey. Respondents expressed a need for improved data privacy and security in current AI agents and noted integration and customization as ongoing challenges.

Despite these obstacles, AI adoption continues to rise. Across industries—from retail giants like Walmart to automotive leaders like Toyota—companies are turning to AI to supercharge operations.

In the intelligence space, similar excitement is brewing over agentic AI tools.

“With [concerns] in mind, being able to gather data from multiple spaces and leverage AI agents for cognitive aids is incredibly exciting,” Raman said.

The emergence of generative AI is also reshaping how agencies approach talent development and organizational transformation.

“We think of it at three different levels: our general workforce, which might be the most important user base with those people who are sitting side by side with AI practitioners,” Raman said. “Then we think about it for our practitioners, so they are really keeping up with the latest. And finally, our senior executives, who can think about how they can transform their organization with AI.”

She emphasized the need for talent that blends technological fluency with analytical skills.

“When we’re thinking about analysts, for example, we’re thinking about people who have critical thinking skills, who can demonstrate analytic rigor, who can think multiple steps ahead with incomplete information,” Raman said. “And we’re also looking for people who have digital acumen with understanding of cloud, cyber and AI.”

Gabbard added that this moment calls for employees to reassess existing procedures and identify areas for process improvement.

“A lot of this comes for some folks, who’ve been working in the community for a long time, with a change in culture and a change in … education,” Gabbard said.

One significant area of transformation is the accreditation and authorization process.

“You can imagine a cybersecurity analyst who traditionally has gone through network data very manually in order to block suspicious IP addresses or connections,” Raman explained. “Now, there’s an opportunity to do all of that really easily.”

UEBA: A Smarter Way to Fight AI-Driven Cyberattacks

 



As artificial intelligence (AI) grows, cyberattacks are becoming more advanced and harder to stop. Traditional security systems that protect company networks are no longer enough, especially when dealing with insider threats, stolen passwords, and attackers who move through systems unnoticed.

Recent studies warn that cybercriminals are using AI to make their attacks faster, smarter, and more damaging. These advanced attackers can now automate phishing emails and create malware that changes its form to avoid being caught. Some reports also show that AI is helping hackers quickly gather information and launch more targeted, widespread attacks.

To fight back, many security teams are now using a more intelligent system called User and Entity Behavior Analytics (UEBA). Instead of focusing only on known attack patterns, UEBA carefully tracks how users normally behave and quickly spots unusual activity that could signal a security problem.


How UEBA Works

Older security tools were based on fixed rules and could only catch threats that had already been seen before. They often missed new or hidden attacks, especially when hackers used AI to disguise their moves.

UEBA changed the game by focusing on user behavior. It looks for sudden changes in the way people or systems normally act, which may point to a stolen account or an insider threat.

Today, UEBA uses machine learning to process huge amounts of data and recognize even small changes in behavior that may be too complex for traditional tools to catch.


Key Parts of UEBA

A typical UEBA system has four main steps:

1. Gathering Data: UEBA collects information from many places, including login records, security tools, VPNs, cloud services, and activity logs from different devices and applications.

2. Setting Normal Behavior: The system learns what is "normal" for each user or system—such as usual login times, commonly used apps, or regular network activities.

3. Spotting Unusual Activity: UEBA compares new actions to normal patterns. It uses smart techniques to see if anything looks strange or risky and gives each unusual event a risk score based on its severity.

4. Responding to Risks: When something suspicious is found, the system can trigger alerts or take quick action like locking an account, isolating a device, or asking for extra security checks.

This approach helps security teams respond faster and more accurately to threats.


Why UEBA Matters

UEBA is especially useful in protecting sensitive information and managing user identities. It can quickly detect unusual activities like unexpected data transfers or access from strange locations.

When used with identity management tools, UEBA can make access control smarter, allowing easy entry for low-risk users, asking for extra verification for medium risks, or blocking dangerous activities in real time.


Challenges in Using UEBA

While UEBA is a powerful tool, it comes with some difficulties. Companies need to collect data from many sources, which can be tricky if their systems are outdated or spread out. Also, building reliable "normal" behavior patterns can be hard in busy workplaces where people’s routines often change. This can lead to false alarms, especially in the early stages of using UEBA.

Despite these challenges, UEBA is becoming an important part of modern cybersecurity strategies.

Cisco Introduces New Tools to Protect Networks from Rogue AI Agents

 



As artificial intelligence (AI) becomes more advanced, it also creates new risks for cybersecurity. AI agents—programs that can make decisions and act on their own—are now being used in harmful ways. Some are launched by cybercriminals or even unhappy employees, while others may simply malfunction and cause damage. Cisco, a well-known technology company, has introduced new security solutions aimed at stopping these unpredictable AI agents before they can cause serious harm inside company networks.


The Growing Threat of AI in Cybersecurity

Traditional cybersecurity methods, such as firewalls and access controls, were originally designed to block viruses and unauthorized users. However, these defenses may not be strong enough to deal with intelligent AI agents that can move within networks, find weak spots, and spread quickly. Attackers now have the ability to launch AI-powered threats that are faster, more complex, and cheaper to operate. This creates a huge challenge for cybersecurity teams who are already stretched thin.


Cisco’s Zero Trust Approach

To address this, Cisco is focusing on a security method called Zero Trust. The basic idea behind Zero Trust is that no one and nothing inside a network should be automatically trusted. Every user, device, and application must be verified every time they try to access something new. Imagine a house where every room has its own lock, and just because you entered one room doesn't mean you can walk freely into the next. This layered security helps block the movement of malicious AI agents.

Cisco’s Universal Zero Trust Network Access (ZTNA) applies this approach across the entire network. It covers everything from employee devices to Internet of Things (IoT) gadgets that are often less secure. Cisco’s system also uses AI-powered insights to monitor activity and quickly detect anything unusual.


Building Stronger Defenses

Cisco is also introducing a Hybrid Mesh Firewall, which is not just a single device but a network-wide security system. It is designed to protect companies across different environments, whether their data is stored on-site or in the cloud.

To make identity checks easier and more reliable, Cisco is updating its Duo Identity and Access Management (IAM) service. This tool will help confirm that the right people and devices are accessing the right resources, with features like passwordless logins and location-based verification. Cisco has been improving this service since acquiring Duo Security in 2018.


New Firewalls for High-Speed Data

In addition to its Zero Trust solutions, Cisco is launching two new firewall models: the Secure Firewall 6100 Series and the Secure Firewall 200 Series. These firewalls are built for modern data centers that handle large amounts of information, especially those using AI. The 6100 series, for example, can process high-speed data traffic while taking up minimal physical space.

Cisco’s latest security solutions are designed to help organizations stay ahead in the fight against rapidly evolving AI-powered threats.