Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Latest News

EU Accuses Meta of Violating Digital Services Act Over Content Reporting Rules

  The European Commission has accused Meta of breaching the European Union’s Digital Services Act (DSA), alleging that Facebook and Instagra...

All the recent news you need to know

EU Accuses Meta of Breaching Digital Rules, Raises Questions on Global Tech Compliance

 




The European Commission has accused Meta Platforms, the parent company of Facebook and Instagram, of violating the European Union’s Digital Services Act (DSA) by making it unnecessarily difficult for users to report illegal online content and challenge moderation decisions.

In its preliminary findings, the Commission said both platforms lack a user-friendly “Notice and Action” system — the mechanism that allows people to flag unlawful material such as child sexual abuse content or terrorist propaganda. Regulators noted that users face multiple steps and confusing options before they can file a report. The Commission also claimed that Meta’s interface relies on “dark patterns”, which are design features that subtly discourage users from completing certain actions, such as submitting reports.

According to the Commission, Meta’s appeal process also falls short of DSA requirements. The current system allegedly prevents users from adding explanations or submitting supporting evidence when disputing a moderation decision. This, the regulator said, limits users’ ability to express why they believe a decision was unfair and weakens the overall transparency of Meta’s content moderation practices.

The European Commission’s findings are not final, and Meta has the opportunity to respond before any enforcement action is taken. If the Commission confirms these violations, it could issue a non-compliance decision, which may result in penalties of up to 6 percent of Meta’s global annual revenue. The Commission may also impose recurring fines until the company aligns its operations with EU law.

Meta, in a public statement, said it “disagrees with any suggestion” that it breached the DSA. The company stated that it has already made several updates to comply with the law, including revisions to content reporting options, appeals procedures, and data access tools.

The European Commission also raised similar concerns about TikTok, saying that both companies have limited researchers’ access to public data on their platforms. The DSA requires large online platforms to provide sufficient data access so independent researchers can analyze potential harms — for example, whether minors are exposed to illegal or harmful content. The Commission’s review concluded that the data-access tools of Facebook, Instagram, and TikTok are burdensome and leave researchers with incomplete or unreliable datasets, which hinders academic and policy research.

TikTok responded that it has provided data to almost 1,000 research teams and remains committed to transparency. However, the company noted that the DSA’s data-sharing obligations sometimes conflict with the General Data Protection Regulation (GDPR), making it difficult to comply with both laws simultaneously. TikTok urged European regulators to offer clarity on how these two frameworks should be balanced.

Beyond Europe, the investigation may strain relations with the United States. American officials have previously criticized the EU for imposing regulatory burdens on U.S.-based tech firms. U.S. FTC Chairman Andrew Ferguson recently warned companies that censoring or modifying content to satisfy foreign governments could violate U.S. law. Former President Donald Trump has also expressed opposition to EU digital rules and even threatened tariffs against countries enforcing them.

For now, the Commission’s investigation continues. If confirmed, the case could set a major precedent for how global social media companies manage user safety, transparency, and accountability under Europe’s strict online governance laws.


The Risks of AI-powered Web Browsers for Your Privacy


AI and web browser

The future of browsing is AI, it watches everything you do online. Security and privacy are two different things; they may look same, but it is different for people who specialize in these two. Threats to your security can also be dangers to privacy. 

Threat for privacy and security

Security and privacy aren’t always the same thing, but there’s a reason that people who specialize in one care deeply about the other. 

Recently, OpenAI released its ChatGPT-powered Comet Browser, and Brave Software team disclosed that AI-powered browsers can follow malicious prompts that hide in images on the web. 

AI powered browser good or bad?

We have long known that AI-powered browsers (and AI browser add-ons for other browsers) are vulnerable to a type of attack known as a prompt injection attack. But this is the first time we've seen the browser execute commands that are concealed from the user. 

That is the aspect of security. Experts who evaluated the Comet Browser discovered that it records everything you do while using it, including search and browser history as well as information about the URLs you visit. 

What next?

In short, while new AI-powered browser tools do fulfill the promise of integrating your favorite chatbot into your web browsing experience, their developers have not yet addressed the privacy and security threats they pose. Be careful when using these.

Researchers studied the ten biggest VPN attacks in recent history. Many of them were not even triggered by foreign hostile actors; some were the result of basic human faults, such as leaked credentials, third-party mistakes, or poor management.

Atlas: AI powered web browser

Atlas, an AI-powered web browser developed with ChatGPT as its core, is meant to do more than just allow users to navigate the internet. It is capable of reading, sum up, and even finish internet tasks for the user, such as arranging appointments or finding lodgings.

Atlas looked for social media posts and other websites that mentioned or discussed the story. For the New York Times piece, a summary was created utilizing information from other publications such as The Guardian, The Washington Post, Reuters, and The Associated Press, all of which have partnerships or agreements with OpenAI, with the exception of Reuters.

Toys “R” Us Canada Data Breach Exposes Customer Information, Raising Phishing and Identity Theft Concerns

 

Toys “R” Us Canada has confirmed a data breach that exposed sensitive customer information, including names, postal addresses, email addresses, and phone numbers. Although the company assured that no passwords or payment details were compromised, cybersecurity experts warn that the exposed data could still be exploited for phishing and identity theft schemes. 

The company discovered the breach after hackers leaked stolen information on the dark web, prompting an immediate investigation. Toys “R” Us engaged a third-party cybersecurity firm to conduct forensic analysis and confirm the scope of the incident. Early findings revealed that a “subset of customer records” had been stolen. The retailer began notifying affected customers through official communications, with letters quickly circulating on social media after being shared by recipients.  

According to the company’s statement, the breach did not involve financial information or account credentials, but the exposure of valid contact details still presents significant risk. Cybercriminals often use such data to create convincing phishing emails or impersonate legitimate companies to deceive victims into revealing sensitive information. 

Toys “R” Us stated that its IT systems were already protected by strong security protocols but have since been reinforced with additional defensive measures. The company has not disclosed how the attackers infiltrated its network or how many individuals were impacted. It also confirmed that, to date, there is no evidence suggesting the stolen data has been misused. 

In the aftermath of the incident, Toys “R” Us reported the breach to relevant authorities and advised customers to remain vigilant against phishing attempts. The company urged users not to share personal information with unverified senders, avoid clicking on suspicious links or attachments, and closely monitor any unusual communications that appear to come from the retailer.  

While no hacking group has claimed responsibility for the breach, cybersecurity analysts emphasize that exposed names, emails, and phone numbers can easily be weaponized in future scams. The incident underscores how even non-financial data can lead to significant cybersecurity risks when mishandled or leaked. 

Despite the company’s reassurances and strengthened defenses, the breach highlights the ongoing threat businesses face from cyberattacks that target customer trust and data privacy.

Cyber Attack Exposes Data of 861 Irish Defective Block Grant Applicants

 

An engineering firm that assesses applications for Ireland's defective concrete blocks grant scheme has been hit by a cyberattack, potentially exposing the personal data of approximately 861 homeowners across multiple counties. The breach targeted Sligo-based consulting firm Jennings O'Donovan, which works with the Housing Agency to evaluate applications under the enhanced defective concrete blocks scheme. 

The incident, first reported in October 2025, resulted in unauthorized access to a limited portion of the company's IT systems. Affected data includes applicants' names, local authority reference numbers, contact details, and technical reports containing photographs of damaged dwellings. However, the Housing Agency confirmed that no financial or banking information was compromised, as this data was stored securely on unaffected systems.

Donegal County was the most severely impacted, with approximately 685 applicants affected, representing over 30% of all Donegal applications to the scheme. Mayo County had 47 affected applicants, while 176 applications from other counties were also caught in the breach. The defective concrete blocks scheme, commonly known as the mica or pyrite redress scheme, provides grants to homeowners whose properties have been damaged by defective building materials containing excessive levels of mica or pyrite.

According to Jennings O'Donovan, the firm experienced a network disruption involving temporary unauthorized access and immediately activated established IT security protocols. The company worked with external specialists to identify, isolate, and mitigate the disruption. The Housing Agency emphasized that its own systems remained unaffected and the incident appears isolated to the single engineering company.

The Housing Agency has contacted all impacted applicants, advising that homeowners who were not contacted were not affected by the breach. Security experts warn that exposed personal data could potentially be used for targeted phishing or social engineering attacks against vulnerable homeowners. Despite the breach, the Housing Agency stated that no material delays to grant applications are expected.

The incident adds further complications to a scheme already facing criticism for processing delays and administrative challenges. As of June 2025, only 164 of 2,796 applicants had completed remediation work on their homes, with €163 million paid out in grants. The cyberattack highlights cybersecurity vulnerabilities in government contractor systems handling sensitive citizen data.

AI Browsers Spark Debate Over Privacy and Cybersecurity Risks

 


With the rapid development of artificial intelligence, the digital landscape continues to undergo a reshaping process, and the internet browser itself seems to be the latest frontier in this revolution. After the phenomenal success of AI chatbots such as ChatGPT, Google Gemini, and Perplexity, tech companies are now racing to integrate the same kind of intelligence into the very tool that people use every day to navigate the world online. 

A recent development by Google has been the integration of Gemini into its search engine, while both OpenAI and Perplexity have released their own AI-powered browsers, Atlas and Perplexity, all promising a more personalised and intuitive way to browse online content. In addition to offering unprecedented convenience and conversational search capabilities for users, this innovation marks the beginning of a new era in information access. 

In spite of the excitement, cybersecurity professionals remain increasingly concerned. There is a growing concern among experts that intelligent systems are inadvertently exposing users to sophisticated cyber risks in spite of enhancing their user experience. 

A context-aware interaction or dynamic data retrieval feature that allows users to interact with their environment can be exploited through indirect prompt injection and other manipulation methods, which may allow attackers to exploit the features. 

It is possible that these vulnerabilities may allow malicious actors to access sensitive data such as personal files, login credentials, and financial information, which raises the risk of data breaches and cybercriminals. In these new eras of artificial intelligence, where the boundaries between browsing and AI are blurring, there has become an increasing urgency in ensuring that trust, transparency, and safety are maintained on the Internet. 

AI browsers continue to divide experts when it comes to whether they are truly safe to use, and the issue becomes increasingly complicated as the debate continues. In addition to providing unprecedented ease of use and personalisation, ChatGPT's Atlas and Perplexity's Comet represent the next generation of intelligent browsers. However, they also introduce new levels of vulnerability that are largely unimaginable in traditional web browsers. 

It is important to understand that, unlike conventional browsers, which are just gateways to online content, these artificial intelligence-driven platforms function more like a digital assistant on their own. Aside from learning from user interactions, monitoring browsing behaviours, and even performing tasks independently across multiple sites, humans and machines are becoming increasingly blurred in this evolution, which has fundamentally changed the way we collect and process data today. 

A browser based on Artificial Intelligence watches and interprets each user's digital moves continuously, from clicks to scrolls to search queries and conversations, creating extensive behavioural profiles that outline users' interests, health concerns, consumer patterns, and emotional tendencies based on their data. 

Privacy advocates have argued for years that this level of surveillance is more comprehensive than any cookie or analytics tool on the market today, and represents a turning point in digital tracking. During a recent study by the Electronic Frontier Foundation, organisation discovered that Atlas retained search data related to sensitive medical inquiries, including names of healthcare providers, which raised serious ethical and legal concerns in regions that restricted certain medical procedures.

Due to the persistent memory architecture of these systems, they are even more contentious. While ordinary browsing histories can be erased by the user, AI memories, on the other hand, are stored on remote servers, which are frequently retained indefinitely. By doing so, the browser maintains long-term context. The system can use this to access vast amounts of sensitive data - ranging from financial activities to professional communications to personal messages - even long after the session has ended. 

These browsers are more vulnerable than ever because they require extensive access permissions to function effectively, which includes the ability to access emails, calendars, contact lists, and banking information. Experts have warned that such centralisation of personal data creates a single point of catastrophic failure—one breach could expose an individual's entire digital life. 

OpenAI released ChatGPT Atlas earlier this week, a new browser powered by artificial intelligence that will become a major player in the rapidly expanding marketplace of browsers powered by artificial intelligence. The Atlas browser, marketed as a browser that integrates ChatGPT into your everyday online experience, represents an important step forward in the company’s effort to integrate generative AI into everyday living. 

Despite being initially launched for Mac users, OpenAI promises to continue to refine its features and expand compatibility across a range of platforms in the coming months. As Atlas competes against competitors such as Perplexity's Comet, Dia, and Google's Gemini-enabled Chrome, the platform aims to redefine the way users interact with the internet—allowing ChatGPT to follow them seamlessly as they browse through the web. 

As described by OpenAI, ChatGPT's browser is equipped to interpret open tabs, analyse data on the page, and help users in real time, without requiring users to switch between applications or copy content manually. There have been a number of demonstrations that have highlighted the versatility of the tool, demonstrating its capability of completing a broad range of tasks, from ordering groceries and writing emails to summarising conversations, analysing GitHub repositories and providing research assistance. OpenAI has mentioned that Atlas utilises ChatGPT’s built-in memory in order to be able to remember past interactions and apply context to future queries based on those interactions.

There is a statement from the company about the company's new approach to creating a more intuitive, continuous user experience, in which the browser will function more as a collaborative tool and less as a passive tool. In spite of Atlas' promise, just as with its AI-driven competitors, it has stirred up serious issues around security, data protection and privacy. 

One of the most pressing concerns regarding prompt injection attacks is whether malicious actors are manipulating large language models to order them to perform unintended or harmful actions, which may expose customer information. Experts warn that such "agentic" systems may come at a significant security cost. 

 An attack like this can either occur directly through the user's prompts or indirectly by hiding hidden payloads within seemingly harmless web pages. A recent study by Brave researchers indicates that many AI browsers, including Comet and Fellou, are vulnerable to exploits like this. The attacker is thus able to bypass browser security frameworks and gain unauthorized access to sensitive domains such as banks, healthcare facilities, or corporate systems by bypassing browser security frameworks. 

It has also been noted that many prominent technologists have voiced their reservations. Simon Willison, a well-known developer and co-creator of the Django Web Framework, has urged that giving browsers the freedom to act autonomously on their users' behalf would pose grave risks. Even seemingly harmless requests, like summarising a Reddit post, could, if exploited via an injection vulnerability, be used to reveal personal or confidential information. 

With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions. There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability. 

A malicious website can use this technique to manipulate AI-driven browser agents secretly, effectively turning them against a user. Researchers at Brave found that attackers are able to hide invisible instructions within webpage code, often rendered as white text on white backgrounds. These instructions are unnoticeable by humans but are easily interpreted by artificial intelligence systems. 

A user may be directed to perform unauthorised actions when they visit a web page containing embedded commands. For example, they may be directed to retrieve private e-mails, access financial data, or transfer money without their consent. Due to the inherent lack of contextual understanding that artificial intelligence systems possess, they can unwittingly execute these harmful instructions, with full user privileges, when they do not have the ability to differentiate between legitimate inputs from deceptive prompts. 

These attacks have caused a lot of attention among the cybersecurity community due to their scale and simplicity. Researchers from LayerX demonstrated the use of a technique called CometJacking, which was demonstrated as a way of hijacking Perplexity’s Comet browser into a sophisticated data exfiltration tool by a single malicious link. 

A simple encoding method known as Base64 encoding was used by attackers to bypass traditional browser security measures and sandboxes, allowing them to bypass the browser's protections. It is therefore important to know that the launch point for a data theft campaign could be a seemingly harmless comment on Reddit, a social media post, or even an email newsletter, which could expose sensitive personal or company information in an innocuous manner. 

The findings of this study illustrate the inherent fragility of artificial intelligence browsers, where independence and convenience often come at the expense of safety. It is important to note that cybersecurity experts have outlined essential defence measures for users who wish to experiment with AI browsers in light of these increasing concerns. 

Individuals should restrict permissions strictly, giving access only to non-sensitive accounts and avoiding involving financial institutions or healthcare institutions until the technology becomes more mature. By reviewing activity logs regularly, you can be sure that you have been alerted to unusual patterns or unauthorised actions in advance. A multi-factor authentication system can greatly enhance security across all linked accounts, while prompt software updates allow users to take advantage of the latest security patches. 

A key safeguard is to maintain manual vigilance-verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites. Some prominent technologists have expressed doubts about these systems as well. A respected developer and co-creator of the Django Web Framework, Simon Willison, has warned that giving browsers the ability to act autonomously on behalf of users comes with profound risks.

It is noted that even benign requests, such as summarising a Reddit post, could inadvertently expose sensitive information if exploited by an injection vulnerability, and this could result in personal information being released into the public domain. With the advancement of artificial intelligence browsers, the tension between innovation and security becomes more and more intense, prompting the call for stronger safeguards before these tools become mainstream digital companions. 

There has been an increase in the number of vulnerabilities discovered by security researchers that make AI browsers a lot more dangerous than was initially thought, with prompt injection emerging as the most critical vulnerability. Using this technique, malicious websites have the ability to manipulate the AI-driven browsers, effectively turning them against their users. 

Brave researchers discovered that attackers are capable of hiding invisible instructions within the code of the webpages, often rendered as white text on a white background. The instructions are invisible to the naked eye, but are easily interpreted by artificial intelligence systems. The embedded commands on such pages can direct the browser to perform unauthorised actions, such as retrieving private emails, accessing financial information, and transferring funds, as a result of a user visiting such a page.

Since AI systems are inherently incapable of distinguishing between legitimate and deceptive user inputs, they can unknowingly execute harmful instructions with full user privileges without realising it. This attack has sparked the interest of the cybersecurity community due to its scale and simplicity. 

Researchers from LayerX have demonstrated a method of hijacking Perplexity's Comet browser by merely clicking on a malicious link, and using this technique, transforming the browser into an advanced data exfiltration tool. Attackers were able to bypass traditional browser security measures and security sandboxes by using simple methods like Base64 encoding.

It means that a seemingly harmless comment on Reddit, a post on social media, or an email newsletter can serve as a launch point for a data theft campaign, thereby potentially exposing sensitive personal and corporate information to a third party. There is no doubt that AI browsers are intrinsically fragile, where autonomy and convenience sometimes come at the expense of safety. The findings suggest that AI browsers are inherently vulnerable. 

It has become clear that security experts have identified essential defence measures to protect users who wish to experiment with AI browsers in the face of increasing concerns. It is suggested that individuals restrict their permissions as strictly as possible, granting access only to non-sensitive accounts and avoiding connecting to financial or healthcare services until the technology is well developed. 

In order to detect unusual patterns or unauthorised actions in time, it is important to regularly review activity logs. Multi-factor authentication is a vital component of protecting linked accounts, as it adds a new layer of security, while prompt software updates ensure that users receive the latest security patches on their systems. 

Furthermore, experts emphasise manual vigilance — verifying URLs and avoiding automated interactions with unfamiliar or untrusted websites remains crucial to protecting their data. There is, however, a growing consensus among professionals that artificial intelligence browsers, despite impressive demonstrations of innovation, remain unreliable for everyday use. 

Analysts at Proton concluded that AI browsers are no longer reliable for everyday use. This argument argues that the issue is not only technical, but is structural as well; privacy risks are a part of the very design of these systems themselves. AI browser developers, who prioritise functionality and personalisation over all else, have inherently created extensive surveillance architectures that rely heavily on user data in order to function as intended. 

It has been pointed out by OpenAI's own security leadership that prompt injection remains an unresolved frontier issue, thus emphasising that this emerging technology is still a very experimental and unsettled one. The consensus among cybersecurity researchers, at the moment, is that the risks associated with artificial intelligence browsers far outweigh their convenience, especially for users dealing with sensitive personal and professional information. 

With the acceleration of the AI browser revolution, it is now crucial to strike a balance between innovation and accountability. Despite the promise of seamless digital assistance and a hyper-personalised browsing experience through tools such as Atlas and Comet, they must be accompanied by robust ethical frameworks, transparent data governance, and stronger security standards to make progress.

A lot of experts stress that the real progress will depend on the way this technology evolves responsibly - prioritising user consent, privacy, and control over convenience for the end user. In the meantime, users and developers alike should not approach AI browsers with fear, but with informed caution and an insistence that trust is built into the browser as a default state.

Cybercriminals Behind “Universe Browser”: A Fake Privacy App Spying on Users and Linked to Chinese Crime Syndicates

 

With online privacy nearly impossible to maintain due to widespread web tracking and advertising, many users are turning to browsers that promise anonymity and data protection—such as Brave, DuckDuckGo, Mullvad, and Tor. However, cybersecurity experts have now identified one so-called “privacy browser” that is doing the exact opposite. The Universe Browser, which has been downloaded millions of times, is allegedly designed by cybercriminals to harvest user data instead of protecting it.

According to a recent Infoblox report prepared in collaboration with the United Nations Office on Drugs and Crime (UNODC), Universe Browser targets users in China and promotes itself as a secure way to bypass online censorship and access gambling websites. But beneath its seemingly protective exterior, the browser is tracking user locations, rerouting traffic through Chinese servers, installing keyloggers, and tampering with network configurations.

“These features are consistent with remote access trojans (RATs) and other malware increasingly being distributed through Chinese online gambling platforms,” says Infoblox. While the report does not directly accuse the developers of criminal activity, it notes that the browser’s operations align closely with cybercrime tactics like identity theft, blackmail, and targeted Trojan attacks.

Built on Google Chrome’s open-source framework, Universe Browser has been heavily marketed to clients of the Baoying Group—a network linked to Triad-affiliated criminal organizations referred to by researchers as “Vault Viper.” These groups are allegedly involved in illegal gambling, cyber fraud, money laundering, and even human trafficking.

Once installed, the malicious browser injects harmful code, evades antivirus scans, and monitors system data, including the clipboard. On Windows systems, it can even replace the original Chrome executable file, embedding itself deeply within the operating system. Users lose control of most browser settings, while a built-in extension can capture screenshots and upload them to remote servers.

Researchers found that encrypted user data from the browser is being transmitted to servers tied to Vault Viper. The app appears to be custom-developed for the Baoying Group, promoted exclusively on their gambling-related websites, and primarily targets users in China and Taiwan, where online betting is banned.

Universe Browser is also available on iOS App Store and as a sideloaded Android app, though it remains unclear whether these mobile versions contain the same level of malicious behavior as the Windows release. Still, experts warn that the safest move is to avoid the browser entirely.

Featured