
Cyberattacks Shift Tactics as Hackers Exploit User Behavior and AI, Experts Warn

 

Cybersecurity threats are evolving rapidly, forcing businesses to rethink how they approach digital security. Experts say modern cyberattacks are no longer focused solely on breaking technical defenses but are increasingly designed to exploit everyday user behavior. 
 
According to industry observers, files downloaded by employees have become a common entry point for cybercriminals. Items such as invoices, installers, documents, and productivity tools are often downloaded without careful verification, creating opportunities for attackers. 

“The Downloads folder has quietly become one of the hottest pieces of real estate for cybercriminals,” said Sanket Atal, senior vice president of engineering and country head at OpenText India. 

“Attackers are not trying to break cryptography anymore. They’re hijacking habits.” Research cited by the company indicates that more than one third of consumer malware infections are first detected in the Downloads directory. 

Security specialists say this reflects a broader shift in how cyberattacks are designed, with attackers relying more on social engineering and multi-stage malware. “These files often look completely harmless at first,” Atal said.

“They only later pull in ransomware components or credential-stealing payloads. It is a multi-stage approach that is very difficult to catch with signature-based tools.” Experts say the rise in such attacks is also linked to the growing industrialization of cybercrime. 

Modern ransomware groups and information-stealing operations increasingly operate like structured businesses that continuously test and refine their methods. “Ransomware-as-a-service groups and info-stealer operators are constantly refining their lures,” Atal said. 

“They are comfortable using SEO-poisoned websites, fake update prompts, and even ‘productivity tools’ to get users to download something that looks normal.” India’s rapidly expanding digital ecosystem has made it an attractive target for attackers. 

The combination of millions of new internet users, the widespread use of personal devices for work, and the overlap between personal and professional computing environments increases exposure to risk. 

“When a poisoned file lands in a Downloads folder on a personal device, it can easily become an entry point into enterprise systems,” Atal said. “Especially when that same device is used for banking, office work, and email.” Artificial intelligence is further changing the threat landscape. 

Generative AI tools can now produce convincing phishing messages that mimic corporate communication styles and reference real projects. “AI has removed the traditional visual cues people relied on to spot scams,” Atal said. 

“Generative models now write in perfect business language, reuse an organisation’s tone, and reference real projects scraped from public sources.” Security analysts say deepfake technology is also being used to manipulate business processes. 

Synthetic video calls and cloned voices have been used to approve financial transactions in some cases. Another emerging pattern is the rise of malware-free intrusions, where attackers rely on stolen credentials or legitimate remote access tools instead of traditional malicious software. 

“We’re also seeing a rise in malware-free intrusions,” Atal said. “Attackers use stolen credentials and legitimate remote access tools. Nothing matches a known signature, yet the breach is very real.” Experts say these developments are forcing organizations to shift their security strategies. 

Instead of focusing solely on scanning files and attachments, security teams are increasingly monitoring behavior patterns across users, devices, and systems. “The first shift is moving from content to behaviour,” Atal said. 

“Instead of just scanning attachments, organisations need to focus on whether a user or service account is behaving consistently with historical and peer norms.” Security specialists also emphasize the importance of integrating identity verification with threat detection systems. 
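
As a minimal illustration of that content-to-behaviour shift, the Kotlin sketch below flags an account whose activity breaks sharply from its own historical norms. The data, threshold, and single-signal design are hypothetical; production tools combine many such signals, and this is not any vendor’s implementation.

```kotlin
import kotlin.math.sqrt

// Illustrative only: score activity against a user's own historical
// baseline instead of inspecting the content they download.
class ActivityBaseline(history: List<Double>) {
    val mean = history.average()
    val stdDev = sqrt(history.map { (it - mean) * (it - mean) }.average())
}

// Simple z-score test; real products correlate many weak signals.
fun isAnomalous(baseline: ActivityBaseline, todayCount: Double, threshold: Double = 3.0): Boolean {
    if (baseline.stdDev == 0.0) return todayCount != baseline.mean
    return (todayCount - baseline.mean) / baseline.stdDev > threshold
}

fun main() {
    // Hypothetical history: downloads per day for one service account.
    val baseline = ActivityBaseline(listOf(4.0, 6.0, 5.0, 7.0, 5.0, 4.0, 6.0))
    println(isAnomalous(baseline, todayCount = 48.0)) // true: far outside its norms
    println(isAnomalous(baseline, todayCount = 6.0))  // false: consistent behaviour
}
```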

When phishing messages become difficult to distinguish from legitimate communication, identity context becomes a key factor in identifying suspicious activity. In addition, companies are beginning to rely on artificial intelligence for defensive purposes. 

Automated systems can help security teams manage the growing volume of alerts by identifying patterns and highlighting potential threats more quickly. “Security teams are overwhelmed by alerts,” Atal said. 

“AI-based triage is essential to reduce noise, correlate weak signals, and generate plain-language narratives so analysts can act faster.” Despite increased awareness of cybersecurity threats, several misconceptions persist. 

Many organizations assume that the most serious cyberattacks originate from sophisticated state-backed actors. “One big myth is that serious attacks only come from exotic nation-state actors,” Atal said. “The truth is, most breaches begin with everyday issues such as phishing, malicious downloads, weak passwords, or cloud misconfigurations.” 

Another misconception is that smaller organizations are less likely to be targeted. However, experts say attackers often focus on industries with weaker security controls, including healthcare providers, hospitality companies, and smaller financial institutions. 

Cybersecurity specialists also warn that many attacks no longer rely on traditional malware. Techniques such as identity-based attacks, business email compromise, and misuse of legitimate administrative tools often bypass standard antivirus defenses. “Identity-based attacks, business email compromise, and abuse of legitimate tools often never trigger traditional antivirus,” Atal said. 

“The starting point can be any user, device, or partner that has access to data.” Industry leaders say the challenge is compounded by the fact that many cybersecurity systems were designed for a different technological environment. 

Vinayak Godse, chief executive of the Data Security Council of India, said existing security frameworks were built before the widespread adoption of digital services and artificial intelligence. 

“In the digitalisation space, we are creating tremendous experiences, productivity gains, and new possibilities,” Godse said. “But the security frameworks we have in place were designed for an older paradigm.” He added that attackers today are capable of identifying and exploiting even a single vulnerability in complex digital systems. 

“The current attack ecosystem can identify and exploit even one vulnerability out of millions, or even billions,” Godse said. Experts say the erosion of traditional network boundaries has further complicated security efforts. Remote work, cloud computing, software-as-a-service platforms, and third-party integrations mean that sensitive systems can now be accessed from a wide range of devices and locations. 

“A user on a personal phone, accessing a SaaS application from home Wi-Fi, is still inside your risk perimeter,” Atal said. As a result, organizations are increasingly focusing on continuous verification and context-aware monitoring rather than relying solely on perimeter defenses. 

According to Atal, the effectiveness of AI-driven security tools ultimately depends on the quality of underlying data. If data sources are fragmented or poorly labeled, even advanced analytics systems may struggle to detect threats. 
 
“Every advanced AI-driven security use case boils down to whether you can see your data and whether you can trust it,” he said. Security experts say that integrating identity signals, access patterns, and data sensitivity into unified monitoring systems can help organizations identify suspicious activity more effectively. 

“When data, identity, and threat signals are unified, security teams can see a connected narrative,” Atal said. “A login, a download, and a data access event stop being isolated alerts and start telling a story.” 
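
As a rough sketch of what such a connected narrative could look like in code (event types, field names, and the time window are all hypothetical, not any product’s schema), the snippet below groups alerts per user and surfaces only those users whose login, download, and data-access events fall inside one short window:

```kotlin
// Illustrative sketch: stitch isolated alerts into one per-user timeline.
data class SecurityEvent(val user: String, val type: String, val minute: Long)

// Flag users whose login, download, and data-access events all land
// within a short window, so they read as one story rather than three alerts.
fun connectedNarratives(events: List<SecurityEvent>, windowMinutes: Long = 30): Map<String, List<SecurityEvent>> =
    events.groupBy { it.user }
        .filterValues { userEvents ->
            val types = userEvents.map { it.type }.toSet()
            val span = userEvents.maxOf { it.minute } - userEvents.minOf { it.minute }
            types.containsAll(listOf("login", "download", "data-access")) && span <= windowMinutes
        }

fun main() {
    val events = listOf(
        SecurityEvent("alice", "login", 0),
        SecurityEvent("alice", "download", 5),
        SecurityEvent("alice", "data-access", 12), // all three within 30 minutes
        SecurityEvent("bob", "login", 0),
    )
    println(connectedNarratives(events).keys) // [alice]
}
```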

 
Despite advances in technology, experts say human behavior remains a critical factor in cybersecurity. 

“In today’s cyber landscape, the front line is no longer the firewall,” Atal said. “It is the file you choose to open and the behaviour that follows.”

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft documentation indicates that Copilot’s personalization features may already be activated automatically in some cases. In a check of the settings on a personal device, these options were indeed switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls, so users remain aware of how their information is being used and can adjust those settings when necessary.

Mental Health Apps With Millions of Downloads Riddled With Security Vulnerabilities



Several mental health mobile applications with millions of downloads on Google Play have security flaws that could leak users’ personal medical data.

Researchers found over 85 medium- and high-severity vulnerabilities in one of the apps alone, flaws that could be abused to compromise users’ therapy data and privacy.

A few of the products are AI companions built to help people dealing with anxiety, clinical depression, bipolar disorder, and stress.

Six of the ten studied applications claimed that user chats are private and securely encrypted on the vendor’s servers.

Oversecured CEO Sergey Toshin said that “Mental health data carries unique risks. On the dark web, therapy records sell for $1,000 or more per record, far more than credit card numbers.”

More than 1,500 security vulnerabilities reported

Experts scanned ten mobile applications promoted as tools that help with mental health issues and found 1,575 security flaws: 938 rated low-severity, 583 medium-severity, and 54 high-severity.

No critical issues were found, but a few of the flaws can be leveraged to steal login credentials, perform HTML injection, locate the user, or spoof notifications.

Experts used the Oversecured scanner to analyse the APK files of the mental health apps for known flaw patterns in different categories. 

One treatment app with over a million downloads calls Intent.parseUri() on an externally controlled string and launches the resulting intent without verifying the target component.

This makes it possible for an attacker to compel the application to launch any internal activity, even if it isn't meant for external access.

Oversecured said, “Since these internal activities often handle authentication tokens and session data, exploitation could give an attacker access to a user’s therapy records.”
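
The risky pattern described here, and the standard Android hardening for it, can be sketched as follows. Intent.parseUri(), URI_INTENT_SCHEME, CATEGORY_BROWSABLE, and the component/selector reset are real Android APIs; the surrounding class is hypothetical, since the report does not publish the affected app’s code.

```kotlin
import android.app.Activity
import android.content.Intent

class DeepLinkHandler(private val activity: Activity) {

    // Risky pattern from the report: parse an externally supplied string
    // and launch whatever intent it encodes, explicit component and all.
    fun openUnsafe(untrustedUri: String) {
        val intent = Intent.parseUri(untrustedUri, Intent.URI_INTENT_SCHEME)
        activity.startActivity(intent) // attacker can target internal activities
    }

    // Hardened version: strip any explicit component or selector so the
    // intent can only resolve to activities that opted in to browsable use.
    fun openSafer(untrustedUri: String) {
        val intent = Intent.parseUri(untrustedUri, Intent.URI_INTENT_SCHEME).apply {
            addCategory(Intent.CATEGORY_BROWSABLE)
            component = null // forbid explicit internal targets
            selector = null  // forbid smuggled selector intents
        }
        activity.startActivity(intent)
    }
}
```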

Another problem is data stored locally with read access granted to every app on the device. Depending on what is saved, this can expose therapy details such as Cognitive Behavioural Therapy (CBT) exercises, session notes, and journal entries. Experts also found plaintext configuration data and backend API endpoints inside the APK resources.
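
For contrast, a minimal sketch of the shared-versus-private storage choice on Android (the file name and note contents are hypothetical; the report does not name the exact files involved):

```kotlin
import android.content.Context
import android.os.Environment
import java.io.File

// Risky pattern from the report: records placed where other apps can read
// them, e.g. public external storage on older Android versions.
fun saveNotesUnsafe(notes: String) {
    val shared = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOCUMENTS)
    File(shared, "therapy_notes.txt").writeText(notes)
}

// Safer default: app-private internal storage. MODE_PRIVATE files are
// readable only by the app that created them.
fun saveNotesPrivate(context: Context, notes: String) {
    context.openFileOutput("therapy_notes.txt", Context.MODE_PRIVATE).use { out ->
        out.write(notes.toByteArray())
    }
}
```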

 “These apps collect and store some of the most sensitive personal data in mobile: therapy session transcripts, mood logs, medication schedules, self-harm indicators, and in some cases, information protected under HIPAA,” Oversecured said.

AI-Powered Cybercrime Hits 600+ FortiGate Firewalls Across 55 Countries, AWS Warns

 

Cybercriminals using readily available generative AI tools managed to breach more than 600 internet-facing FortiGate firewalls across 55 countries within a little over a month, according to a recent incident analysis released by Amazon Web Services (AWS).

The operation, active between mid-January and mid-February, did not rely on sophisticated zero-day vulnerabilities. Instead, attackers automated large-scale attempts to access exposed systems by rapidly testing weak or reused credentials—essentially the digital equivalent of trying every unlocked door, but at high speed with the assistance of AI.
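
From the defender’s side, this kind of spraying has a recognisable shape: one source failing logins against many distinct accounts in a short span. The Kotlin sketch below is purely illustrative (hypothetical thresholds and field names, not AWS’s detection logic):

```kotlin
// Illustrative only: flag source IPs that fail logins against many
// distinct accounts within a short window.
data class FailedLogin(val sourceIp: String, val username: String, val minute: Long)

fun likelySprayers(
    attempts: List<FailedLogin>,
    minAccounts: Int = 20,   // hypothetical threshold
    windowMinutes: Long = 60,
): Set<String> =
    attempts.groupBy { it.sourceIp }
        .filterValues { tries ->
            val span = tries.maxOf { it.minute } - tries.minOf { it.minute }
            span <= windowMinutes && tries.map { it.username }.toSet().size >= minAccounts
        }
        .keys

fun main() {
    val attempts = (1..25).map { FailedLogin("203.0.113.7", "user$it", it.toLong()) } +
        listOf(FailedLogin("198.51.100.2", "alice", 3))
    println(likelySprayers(attempts)) // [203.0.113.7]
}
```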

AWS investigators believe the operation was carried out by a financially motivated Russian-speaking group. The attackers scanned for publicly accessible FortiGate management interfaces, attempted to log in using commonly reused passwords, and once successful, extracted configuration files that provided detailed insight into the victims’ network environments.

According to AWS’s security team, the threat actors leveraged multiple commercially available AI tools to produce attack playbooks, scripts, and operational documentation. This allowed a relatively small or less technically advanced group to conduct a campaign that would typically require greater manpower and development effort. Analysts also discovered traces of AI-generated code and planning materials on compromised systems, indicating that AI tools were used extensively throughout the operation rather than just for occasional scripting tasks.

"The volume and variety of custom tooling would typically indicate a well-resourced development team," said CJ Moses, CISO at Amazon. "Instead, a single actor or very small group generated this entire toolkit through AI-assisted development."

After gaining access to the firewalls, the attackers retrieved configuration data containing administrator and VPN credentials, network architecture information, and firewall policies. Armed with these details, they attempted deeper intrusions by targeting directory services such as Active Directory, harvesting credentials, and exploring options for lateral movement across compromised networks. Backup infrastructure, including servers running Veeam, was also targeted during the intrusions.

AWS researchers noted that although the tools used in the campaign were functional, they appeared somewhat crude. The scripts showed basic parsing methods and repetitive comments often associated with machine-generated drafts. Despite their imperfections, the tools proved effective enough for large-scale automated attacks. When systems proved difficult to compromise, the attackers often abandoned them and shifted focus to easier targets, suggesting that their strategy prioritized volume over precision.

The affected organizations were spread across several regions, including Europe, Asia, Africa, and Latin America. The activity did not appear to focus on a single sector or country, indicating opportunistic targeting. However, investigators observed clusters of incidents suggesting that some breaches may have provided access to managed service providers or shared infrastructure, potentially increasing the scale of downstream exposure.

AWS emphasized that many of the compromises could have been avoided with standard cybersecurity practices. Preventing management interfaces from being publicly accessible, implementing multi-factor authentication, and avoiding password reuse would have significantly reduced the attackers’ chances of success.

The report comes shortly after Google cautioned that cybercriminal groups are increasingly integrating generative AI technologies—including tools such as Gemini AI—into their operations. These technologies are being used for tasks such as reconnaissance, target profiling, phishing campaign creation, and malware development.


Researchers Find Critical Zero-Day Vulnerabilities in Foxit and Apryse PDF Platforms

 

PDF files are often seen as simple digital documents, but recent research shows they have evolved into complex software environments that can expose corporate systems to cyber risks. Modern PDF tools now function more like application platforms than basic viewers, potentially giving attackers pathways into private networks. 

A study by Novee Security examined two widely used platforms, Foxit and Apryse. Released on February 18, 2026, the report identified 13 categories of vulnerabilities and 16 potential attack paths that could allow systems to be compromised. 

Researchers say these issues are more than minor bugs. Some zero-day flaws could allow attackers to run commands on backend servers or take over user accounts without needing to compromise a browser or operating system.

To find the vulnerabilities, analysts first identified common patterns that signal security weaknesses. These patterns were then used to train an AI system that scanned large volumes of code much faster than manual review alone.

By combining human insight with automated analysis, the system detected several high-impact issues that conventional scanning tools might miss. One major flaw appeared in Foxit’s digital signature server, which verifies electronically signed documents. Some of the most serious findings involve one-click exploits where simply opening a document or loading a link can trigger malicious activity. Vulnerabilities CVE-2025-70402 and CVE-2025-70400 affect Apryse WebViewer by allowing the software to trust remote configuration files without proper validation, enabling attackers to run malicious scripts. 

Another flaw, CVE-2025-70401, showed that malicious code could be hidden in the “Author” field of a PDF comment and executed when a user interacts with it. Researchers also identified CVE-2025-66500, which affects Foxit browser plugins. In this case, manipulated messages could trick the plugin into running harmful scripts within the application. Testing further showed that certain weaknesses could allow attackers to send a simple request that triggers command execution on a server, granting unauthorized access to parts of the system. 

These vulnerabilities highlight how small interactions or overlooked behaviors can lead to significant security risks. Experts say the core problem lies in how modern PDF platforms are built. Many now rely on web technologies such as iframes and server-side processing, yet organizations still treat PDF files as harmless static documents. This mismatch can create “trust boundary” failures where software accepts external data without sufficient validation. 
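
The Author-field flaw is essentially an injection across that trust boundary: metadata from the document flows into the viewer’s HTML-based interface without being neutralised. A minimal sketch of the defensive half (hypothetical function names, not the vendors’ actual fixes):

```kotlin
// Illustrative sketch: treat PDF metadata as untrusted input and escape
// it before it is interpolated into an HTML-based viewer UI.
fun escapeHtml(untrusted: String): String = buildString {
    for (ch in untrusted) {
        when (ch) {
            '&' -> append("&amp;")
            '<' -> append("&lt;")
            '>' -> append("&gt;")
            '"' -> append("&quot;")
            '\'' -> append("&#x27;")
            else -> append(ch)
        }
    }
}

fun main() {
    // A hypothetical malicious "Author" value of the kind the research describes.
    val author = "<img src=x onerror=alert(1)>"
    // Escaped, it renders as inert text instead of executing in the viewer.
    println(escapeHtml(author)) // &lt;img src=x onerror=alert(1)&gt;
}
```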

Both vendors were notified before the research was published, and the vulnerabilities were assigned official CVE identifiers to support patching efforts. The findings highlight how document-processing systems—often overlooked in security planning—can become complex attack surfaces if not properly secured.

ECB Tightens Oversight of Banks’ Growing AI Sector Risks

 

The European Central Bank is intensifying its oversight of how eurozone lenders finance the fast‑growing artificial intelligence ecosystem, reflecting concern that the boom in data‑centre and AI‑related infrastructure could hide pockets of credit and concentration risk.

In recent weeks, the ECB has sent targeted requests to a select group of major European banks, asking for granular data on their loans and other exposures to AI‑linked activities such as data‑centre construction, vendor financing and large project‑finance structures. Supervisors want to map where credit is clustering around a small set of hyperscalers, cloud providers and specialized hardware suppliers, amid global estimates of trillions of dollars in planned AI‑related capital spending. Officials stress this is a diagnostic exercise rather than an immediate step toward higher capital charges, but it marks a shift from general discussion to hands‑on information gathering.

The push comes as European banks race to harness AI inside their own operations, from credit scoring and fraud detection to automating back‑office tasks and enhancing customer service. Supervisors acknowledge that these technologies promise sizeable efficiency gains and new revenue opportunities, yet warn that many institutions still lack mature governance for AI models, including robust data‑quality controls, explainability, and clear accountability for automated decisions. The ECB has repeatedly argued that AI adoption must be matched by stronger risk‑management frameworks and continuous human oversight over model life cycles.

Regulators are also increasingly uneasy about systemic dependencies created by the dominance of a handful of mostly non‑EU AI and cloud providers. Heavy reliance on these external platforms raises concerns about operational resilience, data protection, and geopolitical risk that could spill over into financial stability if disruptions occur. At the same time, the ECB’s broader financial‑stability assessments have highlighted stretched valuations in some AI‑linked equities, warning that a sharp correction could transmit stress into bank balance sheets through both direct exposures and wider market channels. 

For now, supervisors frame their AI‑sector review as part of a wider effort to “encourage innovation while managing risks,” aligning prudential expectations with Europe’s new AI Act and digital‑operational‑resilience rules. Banks are being nudged to tighten contract terms, strengthen model‑validation teams and improve documentation before scaling AI‑driven business lines. The message from Frankfurt is that AI remains welcome as a driver of competitiveness in European finance—but only if lenders can demonstrate they understand, measure and contain the new concentrations of credit, market and operational risk that accompany the technology’s rapid rise.
