
Indian Government Proposes Compulsory Location Tracking in Smartphones, Faces Backlash


Government faces backlash over location-tracking proposal

The Indian government is pushing a telecom industry proposal that would compel smartphone makers to enable satellite-based location tracking, active around the clock, for surveillance purposes.

Tech giants Samsung, Google, and Apple have opposed the move over privacy concerns. Privacy debates stirred in India after the government was forced to repeal an order requiring smartphone companies to pre-install a state-run cyber safety application on all devices; activists and opposition parties raised concerns about possible spying.

About the proposal 

The government has been concerned that agencies do not get accurate locations when they send legal requests to telecom companies during investigations. Telecom firms currently rely on cellular tower data, which provides only an estimated area and can be inaccurate.

The Cellular Operators Association of India (COAI), which represents Bharti Airtel and Reliance Jio, suggested that accurate user locations could be provided if the government mandated smartphone firms to turn on A-GPS technology, which combines cellular data and satellite signals.

Strong opposition from tech giants 

If implemented, location services would be activated on smartphones with no option to disable them. Samsung, Google, and Apple strongly oppose the proposal. No comparable user-location-tracking mandate exists anywhere else in the world, according to the India Cellular & Electronics Association (ICEA), a lobbying group representing Google and Apple.

Reuters reached out to India's IT and home ministries for clarity on the telecom industry's proposal but received no replies. According to digital forensics expert Junade Ali, the "proposal would see phones operate as a dedicated surveillance device."

According to technology experts, A-GPS, which is normally activated only when specific apps are running or an emergency call is made, could give authorities location data accurate enough to track a person to within a meter.

Telecom vs government 

Globally, governments are constantly looking for new ways to track the movements and data of mobile users. All Russian mobile phones, for instance, are mandated to have a state-sponsored communications app installed. With 735 million smartphones as of mid-2025, India is the second-largest mobile market in the world.

According to Counterpoint Research, more than 95% of these gadgets are running Google's Android operating system, while the remaining phones are running Apple's iOS. 

Apple and Google cautioned that their user base includes members of the armed forces, judges, business executives, and journalists, and that the proposed location tracking would jeopardize their security because their devices store sensitive data.

According to the telecom industry, even the outdated method of location tracking is becoming troublesome because smartphone manufacturers notify users via pop-up messages that their "carrier is trying to access your location."



End to End-to-End Encryption? Google Update Allows Firms to Read Employee Texts


Your organization can now read your texts

Microsoft stirred controversy when it revealed a Teams update that could tell your organization when you're not at work. Now Google has done the same. Say goodbye to end-to-end encryption: with the new Android update, your RCS and SMS texts are no longer private.

According to Android Authority, "Google is rolling out Android RCS Archival on Pixel (and other Android) phones, allowing employers to intercept and archive RCS chats on work-managed devices. In simpler terms, your employer will now be able to read your RCS chats in Google Messages despite end-to-end encryption."

Only for organizational devices 

This applies only to work-managed devices and doesn't affect personal ones. In regulated industries, it merely adds RCS archiving to existing SMS archiving. Within an organization, however, texting is different from emailing: employees sometimes share details of their non-work lives over text. End-to-end encryption kept those conversations safe, but that will no longer be the case.

The end-to-end question 

There is a lot of misunderstanding around end-to-end encryption. It protects messages in transit, but once they arrive on your device they are decrypted, and anything running on that device can read them.
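
A minimal sketch of that distinction, using symmetric encryption from Python's `cryptography` package as a stand-in for the actual protocol Google Messages uses:

```python
# What "end-to-end encrypted" does and does not protect: the network sees
# only ciphertext, but any software on the recipient's device sees plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # known only to the two endpoints
channel = Fernet(key)

ciphertext = channel.encrypt(b"lunch after the standup?")
print(ciphertext)                  # what the carrier or network observes

plaintext = channel.decrypt(ciphertext)
print(plaintext)                   # what the device -- and any archival
                                   # agent managing that device -- can read
```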

According to Google, this is "a dependable, Android-supported solution for message archival, which is also backwards compatible with SMS and MMS messages as well. Employees will see a clear notification on their device whenever the archival feature is active."

What will change?

With this update, getting a phone from work is no longer as good as it sounds. Employees have always been wary of the risks of over-sharing on email, which is easy to monitor. Texts felt different.

The update will make things different. According to Google, “this new capability, available on Google Pixel and other compatible Android Enterprise devices gives your employees all the benefits of RCS — like typing indicators, read receipts, and end-to-end encryption between Android devices — while ensuring your organization meets its regulatory requirements.”

Promoting organizational surveillance 

Because of organizational surveillance, employees at times turn to shadow IT tools such as WhatsApp and Signal to communicate with colleagues. The new Google update will only make things worse.

“Earlier,” Google said, “employers had to block the use of RCS entirely to meet these compliance requirements; this update simply allows organizations to support modern messaging — giving employees messaging benefits like high-quality media sharing and typing indicators — while maintaining the same compliance standards that already apply to SMS messaging.”

Google’s New Update Allows Employers To Archive Texts On Work-Managed Android Phones


A recent Android update marks a fundamental shift in how text messages are handled on employer-controlled devices. Google has introduced a feature called Android RCS Archival, which lets organisations capture and store all RCS, SMS, and MMS communications sent through Google Messages on fully managed work phones. While the messages remain encrypted in transit, they can now be captured on the device itself once delivered.

This update is designed to help companies meet compliance and record-keeping requirements, especially in sectors that must retain communication logs for regulatory reasons. Until now, many organizations had blocked RCS entirely because of its encryption, which made it difficult to archive. The new feature gives them a way to support richer messaging while still preserving mandatory records.

Archiving occurs via authorized third-party software that integrates directly with Google Messages on work-managed devices. Once enabled by a company's IT department, the software logs every interaction in a conversation, including messages received, sent, edited, or later deleted. Employees using these devices will see a notification when archiving is active, signaling that their conversations are being logged.

Google has indicated that this functionality applies only to work-managed Android devices: personal phones and personal profiles are not affected, and the update does not give employers access to user data on privately owned devices. The feature must also be intentionally switched on by the organisation; it is not enabled by default.

The update also brings to the surface a common misconception about encrypted messaging: end-to-end encryption protects content only while it is in transit between devices. When a message lands on a device that is owned and administered by an employer, the organization has the technical ability to capture it. The archival feature does not extend to over-the-top platforms such as WhatsApp or Signal, which manage their own encryption, though those apps can expose data too when backups aren't encrypted or the device itself is compromised.

This change also raises a broader issue: one of counterparty risk. A conversation remains private only if both ends of it are stored securely. Screenshots, unsafe backups, and linked devices outside the encrypted environment can all leak message content. Work-phone archiving now becomes part of that wider set of risks users should be aware of.

For employees, the takeaway is clear: A company-issued phone is a workplace tool, not a private device. Any communication that originates from a fully managed device can be archived, meaning personal conversations should stay on a personal phone. Users reliant on encrypted platforms have reason to review their backup settings and steer clear of mixing personal communication with corporate technology.

Google's new archival option gives organisations a compliance solution that brings RCS in line with traditional SMS logging, while for workers it is a further reminder that privacy expectations shift the moment a device is brought under corporate management. 


Gainsight Breach Spread into Salesforce Environments; Scope Under Investigation


An ongoing security incident at Gainsight's customer-management platform has raised fresh alarms about how deeply third-party integrations can affect cloud environments. The breach centers on compromised OAuth tokens connected with Gainsight's Salesforce connectors, and it remains unclear how many organizations were affected and what type of information was accessed.

Salesforce was the first to flag suspicious activity originating from Gainsight's connected applications. As a precaution, Salesforce revoked all associated access tokens and temporarily disabled the affected integrations. The company also released detailed indicators of compromise, timelines of malicious activity, and guidance urging customers to review authentication logs and API usage within their own environments.

Gainsight later confirmed that unauthorized parties misused certain OAuth tokens linked to its Salesforce-connected app. According to its leadership, only a small number of customers have so far reported confirmed data impact. However, several independent security teams, including Google's Threat Intelligence Group, reported signs that the intrusion may have reached far more Salesforce instances than initially acknowledged. These differing numbers are not unusual: supply-chain incidents often reveal their full extent only after weeks of log analysis and correlation.

At this time, investigators understand the attack as a case of token abuse, not a failure of Salesforce's underlying platform. OAuth tokens are long-lived keys that let approved applications make API calls on behalf of customers. Once attackers hold them, they can access CRM records through legitimate channels, making detection far more challenging. Because this approach bypasses common login checks, Salesforce has focused on log review and token rotation as immediate priorities.
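
To make the mechanics concrete, here is a hedged sketch of how a bearer OAuth token authorizes Salesforce REST calls; the instance URL and token value are placeholders, and a stolen token works exactly like a legitimate one:

```python
# A bearer token is the whole identity: no password, no MFA challenge.
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical org instance
TOKEN = "00D...redacted..."                      # OAuth access token

resp = requests.get(
    f"{INSTANCE}/services/data/v58.0/query",
    params={"q": "SELECT Id, Name FROM Account LIMIT 10"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["records"])   # CRM data, fetched through a legitimate channel
```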

To enhance visibility, Gainsight has engaged Mandiant to conduct a forensic investigation into the incident. The company is examining historical logs, token behavior, connector activity, and cross-platform data flows to understand the attacker's movements and whether other services were affected. As a precaution, Gainsight has also worked with platforms including HubSpot, Zendesk, and Gong to temporarily revoke related tokens until investigators can confirm they are safe to restore.

The incident resembles other attacks this year in which Salesforce integrations were used to siphon customer records without exploiting any direct vulnerability in Salesforce itself. The repeated pattern illustrates a structural challenge: organizations may secure their main cloud platform rigorously, yet one compromised integration can open a path to wider unauthorized access.

For customers, the best steps are as straightforward as ever: monitor Salesforce authentication and API logs for anomalous access patterns; invalidate or rotate existing OAuth tokens; reduce third-party app permissions to the bare minimum; and, where possible, apply IP restrictions or allowlists to limit the sources from which API calls can be made.
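
Token rotation is the most immediate of these. As a sketch, Salesforce exposes a standard OAuth revocation endpoint; the token value below is a placeholder:

```python
# Revoke a (possibly compromised) OAuth token via Salesforce's standard
# revocation endpoint. 200 means revoked; 400 means already invalid/unknown.
import requests

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/revoke",
    data={"token": "00D...token-to-invalidate..."},
    timeout=30,
)
print(resp.status_code)
```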

Both companies say they will provide further updates and support customers affected by the issue. The incident serves as yet another reminder that in modern cloud ecosystems, the security of one vendor often relies on the security practices of everyone in its integration chain.



Google Confirms Data Breach from 200 Companies


Google has confirmed that hackers stole data from more than 200 companies after exploiting apps developed by Gainsight, a customer success software provider. The breach targeted Salesforce systems and is being described as one of the biggest supply chain attacks in recent months. 
 
Salesforce said last week that “certain customers’ Salesforce data” had been accessed through Gainsight applications, which are widely used by companies to manage customer relationships at scale. According to Google’s Threat Intelligence Group, more than 200 Salesforce instances were affected, indicating that the attackers targeted the ecosystem strategically rather than going after individual companies one by one. The incident has already raised deep concern across industries that depend heavily on third-party integrations to run core business functions. 
 
A group calling itself Scattered Lapsus$ Hunters, which includes members of the well-known ShinyHunters gang, has claimed responsibility. This collective has previously targeted prominent global firms and leaked confidential datasets online, earning a reputation for bold, high-impact intrusions. In this case, the hackers have published a list of alleged victims, naming companies such as Atlassian, CrowdStrike, DocuSign, GitLab, LinkedIn, Malwarebytes, SonicWall, Thomson Reuters, and Verizon. Some of these organisations have denied being affected, while others are still conducting internal investigations to determine whether their environments were touched. 
 
This attack underscores a growing reality: compromising a widely trusted application is often more efficient for attackers than breaching a single company. By infiltrating Gainsight’s software, the threat actors gained access to a broad swath of organisations simultaneously, effectively bypassing individual perimeter defences. TechCrunch notes that supply chain attacks remain among the most dangerous vectors because they exploit deeply rooted trust. Once a vendor’s application is subverted, it can become an invisible doorway leading directly into multiple corporate systems. 
 
Salesforce has stated that it is working closely with affected customers to secure environments and limit the impact, while Google continues to analyse the breadth of data exfiltration. Gainsight has not yet released a detailed public statement, prompting experts to call for greater transparency from vendors responsible for critical integrations. Cybersecurity firms advise all companies using third-party SaaS tools to review access permissions, rotate credentials, monitor logs for anomalies, and ensure stronger compliance frameworks for integrated platforms. 
 
The larger picture here reflects an industry-wide challenge. As enterprises increasingly rely on cloud services and SaaS tools, attackers are shifting their attention to these interconnected layers, where a single weak link can expose hundreds of organisations. This shift has prompted analysts to warn that due diligence on app vendors, once considered a formality, must now become a non-negotiable element of cybersecurity strategy. 
 
In light of the attack, experts believe companies will need to adopt a more vigilant posture, treating all integrations as potential threat surfaces, rather than assuming safety through trust. The Gainsight incident serves as a stark reminder that in a cloud-driven world, security is only as strong as the least protected partner in the chain.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness


Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
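
One of those controls can be automated cheaply. As an illustrative sketch (the patterns and policy are assumptions, not a complete DLP system), outbound prompts can be screened before they ever reach an external chatbot:

```python
# Screen prompts for obviously sensitive material before sending them
# to a consumer AI tool. Patterns here are illustrative examples only.
import re

SENSITIVE = [
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),         # AWS access key shape
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # pasted private keys
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN shape
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt matches any sensitive-data pattern."""
    return not any(p.search(prompt) for p in SENSITIVE)

print(safe_to_send("summarize this meeting"))               # True
print(safe_to_send("key AKIAABCDEFGHIJKLMNOP is failing"))  # False
```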

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

New Google Study Reveals Threat Protection Against Text Scams


As Cybersecurity Awareness Month comes to an end, the focus turns to mobile scams, one of the most prevalent digital threats of our day. Over $400 billion has been stolen globally in the past 12 months as fraudsters use sophisticated AI tools to create more convincing schemes.

Google study about smartphone threat protection 

Android has been at the forefront of the fight against scammers for years, utilizing the best AI to create proactive, multi-layered defenses that can detect and stop scams before they get to you. Every month, over 10 billion suspected malicious calls and messages are blocked by Android's scam defenses. In order to preserve the integrity of the RCS service, Google claims to conduct regular safety checks. It has blocked more than 100 million suspicious numbers in the last month alone.

About the research 

To highlight how its fraud defenses function in the real world, Google invited consumers and independent security experts to compare how well Android and iOS protect users from these dangers. Google is also releasing a new report that describes how contemporary text scams are planned, giving readers insight into the strategies scammers use and how to identify them.

Key insights 

  • Those who reported not receiving any scam texts in the week before the survey were 58% more likely to be Android users than iOS users. The benefit was even greater on Pixel, where users were 96% more likely to report no scam texts than iPhone owners.
  • Conversely, reports of three or more scam texts in a week were 65% more common among iOS users than Android users. When comparing iPhone and Pixel, the disparity was even more noticeable, with 136% more iPhone users reporting a high volume of scam messages.
  • Compared to iPhone users, Android users were 20% more likely to say their device's scam protections were "very effective" or "extremely effective." Additionally, iPhone users were 150% more likely to say their device was completely ineffective at preventing mobile fraud.  

Android smartphones were found to have the strongest AI-powered protections in a recent assessment conducted by the international technology market research firm Counterpoint Research.  

Herodotus Trojan Mimics Human Typing to Steal Banking Credentials


A newly discovered Android malware, Herodotus, is alarming cybersecurity experts due to its unique ability to imitate human typing. This advanced technique allows the malware to avoid fraud detection systems and secretly steal sensitive financial information from unsuspecting users.

According to researchers from Dutch cybersecurity firm ThreatFabric, Herodotus combines elements from older malware families like Brokewell with newly written code, creating a hybrid trojan that is both deceptive and technically refined. The malware’s capabilities include logging keystrokes, recording screen activity, capturing biometric data, and hijacking user inputs in real time.


How users get infected

Herodotus spreads mainly through side-loading, a process where users install applications from outside the official Google Play Store. Attackers are believed to use SMS phishing (smishing) campaigns that send malicious links disguised as legitimate messages. Clicking on these links downloads a small installer, also known as a dropper, that delivers the actual malware to the device.

Once installed, the malware prompts victims to enable Android Accessibility Services, claiming the permission is required for the app to function. In reality, it gives the attacker total control, allowing them to read content on the screen, click buttons, swipe, and interact with any open application as if they were the device owner.


The attack mechanism

After the infection, Herodotus collects a list of all installed apps and sends it to its command-and-control (C2) server. Based on this data, the operator pushes overlay pages, fake screens designed to look identical to genuine banking or cryptocurrency apps. When users open their actual financial apps, these overlays appear on top, tricking victims into entering login details, card numbers, and PINs.

The malware can also intercept one-time passwords (OTPs) sent via SMS, record keystrokes, and even stream live footage of the victim’s screen. With these capabilities, attackers can execute full-scale device takeover attacks, giving them unrestricted access to the user’s financial accounts.


The human-like typing trick

What sets Herodotus apart is its behavioral deception technique. To appear human during remote-control sessions, the malware adds random time delays between keystrokes, ranging from 0.3 to 3 seconds. This mimics natural human typing speed instead of the instant input patterns of automated tools.
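
The trick is simple to express in code. Here is a minimal sketch of the timing behavior described above, where send_key is a stand-in for whatever input-injection interface the malware abuses:

```python
# Inject text one character at a time with randomized 0.3-3 second pauses,
# so the cadence resembles a human typist rather than an automated tool.
import random
import time

def type_like_a_human(text: str, send_key) -> None:
    for ch in text:
        send_key(ch)                          # deliver one keystroke
        time.sleep(random.uniform(0.3, 3.0))  # human-looking inter-key delay

# Demo: "types" to stdout instead of into a victim's banking app.
type_like_a_human("hello", lambda ch: print(ch, end="", flush=True))
```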

Fraud detection systems that rely solely on input timing often fail to recognize these attacks because the malware’s simulated typing appears authentic. Analysts warn that as Herodotus continues to evolve, it may become even harder for traditional detection tools to identify.


Active regions and underground sale

ThreatFabric reports that the malware has already been used in Italy and Brazil, disguising itself as apps named “Banca Sicura” and “Modulo Seguranca Stone.” Researchers also found fake login pages imitating popular banking and cryptocurrency platforms in the United States, United Kingdom, Turkey, and Poland.

The malware’s developer, who goes by the alias “K1R0” on underground forums, began offering Herodotus as a Malware-as-a-Service (MaaS) product in September. This means other cybercriminals can rent or purchase it for use in their own campaigns, further increasing the likelihood of global spread.

Google confirmed that Play Protect already blocks known versions of Herodotus. Users can stay protected by avoiding unofficial downloads, ignoring links in unexpected text messages, and keeping Play Protect active. It is also crucial to avoid granting Accessibility permissions unless an app’s legitimacy is verified.

Security professionals advise enabling stronger authentication methods, such as app-based verification instead of SMS-based codes, and keeping both system and app software regularly updated.


Google Probes Weeks-Long Security Breach Linked to Contractor Access


Google has launched a detailed investigation into a weeks-long security breach after discovering that a contractor with legitimate system privileges had been quietly collecting internal screenshots and confidential files tied to the Play Store ecosystem. The company uncovered the activity only after it had continued for several weeks, giving the individual enough time to gather sensitive technical data before being detected.

According to verified cybersecurity reports, the contractor managed to access information that explained the internal functioning of the Play Store, Google’s global marketplace serving billions of Android users. The files reportedly included documentation describing the structure of Play Store infrastructure, the technical guardrails that screen malicious apps, and the compliance systems designed to meet international data protection laws. The exposure of such material presents serious risks, as it could help malicious actors identify weaknesses in Google’s defense systems or replicate its internal processes to deceive automated security checks.

Upon discovery of the breach, Google initiated a forensic review to determine how much information was accessed and whether it was shared externally. The company has also reported the matter to law enforcement and begun a complete reassessment of its third-party access procedures. Internal sources indicate that Google is now tightening security for all contractor accounts by expanding multi-factor authentication requirements, deploying AI-based systems to detect suspicious activities such as repeated screenshot captures, and enforcing stricter segregation of roles and privileges. Additional measures include enhanced background checks for third-party employees who handle sensitive systems, as part of a larger overhaul of Google’s contractor risk management framework.

Experts note that the incident arrives during a period of heightened regulatory attention on Google’s data protection and antitrust practices. The breach not only exposes potential security weaknesses but also raises broader concerns about insider threats, one of the most persistent and challenging issues in cybersecurity. Even companies that invest heavily in digital defenses remain vulnerable when authorized users intentionally misuse their access for personal gain or external collaboration.

The incident has also revived discussion about earlier insider threat cases at Google. In one of the most significant examples, a former software engineer was charged with stealing confidential files related to Google’s artificial intelligence systems between 2022 and 2023. Investigators revealed that he had transferred hundreds of internal documents to personal cloud accounts and even worked with external companies while still employed at Google. That case, which resulted in multiple charges of trade secret theft and economic espionage, underlined how intellectual property theft by insiders can evolve into major national security concerns.

For Google, the latest breach serves as another reminder that internal misuse, whether by employees or contractors, remains a critical weak point. As the investigation continues, the company is expected to strengthen oversight across its global operations. Cybersecurity analysts emphasize that organizations managing large user platforms must combine strong technical barriers with vigilant monitoring of human behavior to prevent insider-led compromises before they escalate into large-scale risks.



Gmail Credentials Appear in Massive 183 Million Infostealer Data Leak, but Google Confirms No New Breach


A vast cache of 183 million email addresses and passwords has surfaced in the Have I Been Pwned (HIBP) database, raising concern among Gmail users and prompting Google to issue an official clarification. The newly indexed dataset stems from infostealer malware logs and credential-stuffing lists collected over time, rather than a fresh attack targeting Gmail or any other single provider.


The Origin of the Dataset

The large collection, analyzed by HIBP founder Troy Hunt, contains records captured by infostealer malware that had been active for nearly a year. The data, supplied by Synthient, amounted to roughly 3.5 terabytes, comprising nearly 23 billion rows of stolen information. Each entry typically includes a website name, an email address, and its corresponding password, exposing a wide range of online accounts across various platforms.

Synthient’s Benjamin Brundage explained that this compilation was drawn from continuous monitoring of underground marketplaces and malware operations. The dataset, referred to as the “Synthient threat data,” was later forwarded to HIBP for indexing and public awareness.


How Much of the Data Is New

Upon analysis, Hunt discovered that most of the credentials had appeared in previous breaches. Out of a 94,000-record sample, about 92 percent matched older data, while approximately 8 percent represented new and unseen credentials. This translates to over 16 million previously unrecorded email addresses, fresh data that had not been part of any known breaches or stealer logs before.

To test authenticity, Hunt contacted several users whose credentials appeared in the sample. One respondent verified that the password listed alongside their Gmail address was indeed correct, confirming that the dataset contained legitimate credentials rather than fabricated or corrupted data.


Gmail Accounts Included, but No Evidence of a Gmail Hack

The inclusion of Gmail addresses led some reports to suggest that Gmail itself had been breached. However, Google has publicly refuted these claims, stating that no new compromise has taken place. According to Google, the reports stem from a misunderstanding of how infostealer databases operate: they simply aggregate previously stolen credentials from different malware incidents rather than reflect a new intrusion into Gmail systems.

Google emphasized that Gmail’s security systems remain robust and that users are protected through ongoing monitoring and proactive account protection measures. The company said it routinely detects large credential dumps and initiates password resets to protect affected accounts.

In a statement, Google advised users to adopt stronger account protection measures: “Reports of a Gmail breach are false. Infostealer databases gather credentials from across the web, not from a targeted Gmail attack. Users can enhance their safety by enabling two-step verification and adopting passkeys as a secure alternative to passwords.”


What Users Should Do

Experts recommend that individuals check their accounts on Have I Been Pwned to determine whether their credentials appear in this dataset. Users are also advised to enable multi-factor authentication, switch to passkeys, and avoid reusing passwords across multiple accounts.
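
For passwords specifically, HIBP's free Pwned Passwords API can be queried without ever sending the password itself. A minimal sketch of its k-anonymity scheme, in which only the first five characters of the SHA-1 hash leave your machine:

```python
# Check a password against the Pwned Passwords corpus via the range API.
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=30)
    resp.raise_for_status()
    for line in resp.text.splitlines():          # lines look like SUFFIX:COUNT
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # nonzero means it has appeared in breaches
```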

Gmail users can utilize Google’s built-in Password Manager to identify weak or compromised passwords. The password checkup feature, accessible from Chrome’s settings, can alert users about reused or exposed credentials and prompt immediate password changes.

If an account cannot be accessed, users should proceed to Google’s account recovery page and follow the verification steps provided. Google also reminded users that it automatically requests password resets when it detects exposure in large credential leaks.


The Broader Security Implications

Cybersecurity professionals stress that while this incident does not involve a new system breach, it reinforces the ongoing threat posed by infostealer malware and poor password hygiene. Sachin Jade, Chief Product Officer at Cyware, highlighted that credential monitoring has become a vital part of any mature cybersecurity strategy. He explained that although this dataset results from older breaches, “credential-based attacks remain one of the leading causes of data compromise.”

Jade further noted that organizations should integrate credential monitoring into their broader risk management frameworks. This helps security teams prioritize response strategies, enforce adaptive authentication, and limit lateral movement by attackers using stolen passwords.

Ultimately, this collection of 183 million credentials serves as a reminder that password leaks, whether new or recycled, continue to feed cybercriminal activity. Continuous vigilance, proactive password management, and layered security practices remain the strongest defenses against such risks.


Is ChatGPT's Atlas Browser the Future of the Internet?

After using ChatGPT Atlas, OpenAI's new web browser, users may notice a few issues. This is not Google Chrome, which roughly 60% of users rely on. It is built around a chatbot that you are supposed to converse with in order to browse the internet.

One notice said, "Messages limit reached." Another stated, "No models that are currently available support the tools in use."

Following that: "You've hit the free plan limit for GPT-5."  

Paid browser 

According to OpenAI, Atlas will simplify and improve internet usage, one more step toward becoming "a true super-assistant." Super or not, however, assistants are not free, and the company must start generating significantly more revenue from its 800 million users.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". At first glance it looks comparable to Chrome or Apple's Safari, with one major exception: a sidebar chatbot. These are early days, but there is potential for significant changes in how we use the internet. What is certain is that this will be a premium product that only functions fully if you pay a monthly subscription fee. Given how accustomed we are to free internet access, many people would have to change their routines drastically.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So how does a browser assist with this mission? It actually doesn't. It does, however, have the potential to increase revenue. The company has persuaded venture capitalists and investors to pour billions of dollars into it, and it must now demonstrate a return on that investment. In other words, it needs to generate revenue, and funding the product through typical internet advertising may be risky. Atlas might also grant the company access to a large amount of user data.

The ultimate goal of these AI systems is scale; the more data you feed them, the better they will become. The web is built for humans to use, so if Atlas can observe how we order train tickets, for example, it will be able to learn how to better traverse these processes.  

Will it kill Google?

Then there is the competition. Google Chrome is so prevalent that authorities throughout the world are raising their eyebrows and using terms like "monopoly" to describe it. Breaking into that market will not be easy.

Google's Gemini AI is now integrated into the search engine, and Microsoft has added Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. It remains to be seen whether enough people are prepared to pay for that added convenience, and there is still a long way to go before Google is dethroned.

Google’s Quantum Breakthrough Rekindles Concerns About Bitcoin’s Long-Term Security


Google has announced a verified milestone in quantum computing that has once again drawn attention to the potential threat quantum technology could pose to Bitcoin and other digital systems in the future.

The company’s latest quantum processor, Willow, has demonstrated a confirmed computational speed-up over the world’s leading supercomputers. Published in the journal Nature, the findings mark the first verified example of a quantum processor outperforming classical machines in a real experiment.

This success brings researchers closer to the long-envisioned goal of building reliable quantum computers and signals progress toward machines that could one day challenge the cryptography protecting cryptocurrencies.


What Google Achieved

According to Google’s study, the 105-qubit Willow chip ran a physics algorithm faster than any known classical system could simulate. This achievement, often referred to as “quantum advantage,” shows that quantum processors are starting to perform calculations that are practically impossible for traditional computers.

The experiment used a method called Quantum Echoes, where researchers advanced a quantum system through several operations, intentionally disturbed one qubit, and then reversed the sequence to see if the information would reappear. The re-emergence of this information, known as a quantum echo, confirmed the system’s interference patterns and genuine quantum behavior.

In measurable terms, Willow completed the task in just over two hours, while Frontier, one of the world’s fastest publicly benchmarked supercomputers, would need about 3.2 years to perform the same operation. That represents a performance difference of nearly 13,000 times.
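
As a quick back-of-envelope check of that ratio (treating "just over two hours" as roughly 2.2 hours):

```latex
% ~3.2 years of classical runtime, expressed in hours, versus ~2.2 hours:
3.2\,\text{yr} \times 8766\,\text{h/yr} \approx 2.8\times10^{4}\,\text{h},
\qquad
\frac{2.8\times10^{4}\,\text{h}}{2.2\,\text{h}} \approx 1.3\times10^{4}
```

which is consistent with the quoted figure of nearly 13,000 times.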

The results were independently verified and can be reproduced by other quantum systems, a major step forward from previous experiments that lacked reproducibility. Google CEO Sundar Pichai noted on X that this outcome is “a substantial step toward the first real-world application of quantum computing.”

Willow’s superconducting transmon qubits achieved an impressive level of stability. The chip recorded median two-qubit gate errors of 0.0015 and maintained coherence times above 100 microseconds, allowing scientists to execute 23 layers of quantum operations across 65 qubits. This pushed the system beyond what classical models can reproduce and proved that complex, multi-layered quantum circuits can now be managed with high accuracy.


From Sycamore to Willow

The Willow processor, unveiled in December 2024, is a successor to Google’s Sycamore chip from 2019, which first claimed quantum supremacy but lacked experimental consistency. Willow bridges that gap by introducing stronger error correction and better coherence, enabling experiments that can be repeated and verified within the same hardware.

While the processor is still in a research phase, its stability and reproducibility represent significant engineering progress. The experiment also confirmed that quantum interference can persist in systems too complex for classical simulation, which strengthens the case for practical quantum applications.


Toward Real-World Uses

Google now plans to move beyond proof-of-concept demonstrations toward practical quantum simulations, such as modeling atomic and molecular interactions. These tasks are vital for fields like drug discovery, battery design, and material science, where classical computers struggle to handle the enormous number of variables involved.

In collaboration with the University of California, Berkeley, Google recently demonstrated a small-scale quantum experiment to model molecular systems, marking an early step toward what the company calls a “quantum-scope” — a tool capable of observing natural phenomena that cannot be measured using classical instruments.


The Bitcoin Question

Although Willow’s success does not pose an immediate threat to Bitcoin, it has revived discussions about how close quantum computers are to breaking elliptic-curve cryptography (ECC), which underpins most digital financial systems. ECC is nearly impossible for classical computers to reverse-engineer, but it could theoretically be broken by a powerful quantum system running algorithms such as Shor’s algorithm.

Experts caution that this risk remains distant but credible. Christopher Peikert, a professor of computer science and engineering at the University of Michigan, told Decrypt that quantum computing has a small but significant chance, over five percent, of becoming a major long-term threat to cryptocurrencies.

He added that moving to post-quantum cryptography would address these vulnerabilities, but the trade-offs include larger keys and signatures, which would increase network traffic and block sizes.


Why It Matters

Simulating Willow’s circuits using tensor-network algorithms would take more than 10 million CPU-hours on Frontier. The contrast between two hours of quantum computation and several years of classical simulation offers clear evidence that practical quantum advantage is becoming real.

The Willow experiment transitions quantum research from theory to testable engineering. It shows that real hardware can perform verified calculations that classical computers cannot feasibly replicate.

For cybersecurity professionals and blockchain developers, this serves as a reminder that quantum resistance must now be part of long-term security planning. The countdown toward a quantum future has already begun, and with each verified advance, that future moves closer to reality.



Hackers Exploit Blockchain Networks to Hide and Deliver Malware, Google Warns


Google’s Threat Intelligence Group has uncovered a new wave of cyberattacks where hackers are using public blockchains to host and distribute malicious code. This alarming trend transforms one of the world’s most secure and tamper-resistant technologies into a stealthy channel for cybercrime.

According to Google’s latest report, several advanced threat actors, including one group suspected of operating on behalf of North Korea, have begun embedding harmful code into smart contracts on major blockchain platforms such as Ethereum and the BNB Smart Chain. The technique, known as “EtherHiding,” allows attackers to conceal malware within the blockchain itself, creating a nearly untraceable and permanent delivery system.

Smart contracts were originally designed to enable transparent and trustworthy transactions without intermediaries. However, attackers are now exploiting their immutability to host malware that cannot be deleted or blocked. Once malicious code is written into a blockchain contract, it becomes permanently accessible to anyone who knows how to retrieve it.

This innovation replaces the need for traditional “bulletproof hosting” services, offshore servers that cybercriminals once used to evade law enforcement. By using blockchain networks instead, hackers can distribute malicious software at a fraction of the cost, often paying less than two dollars per contract update.

The decentralized nature of these systems eliminates any single point of failure, meaning there is no authority capable of taking down the malicious data. Even blockchain’s anonymity features benefit attackers, as retrieving code from smart contracts leaves no identifiable trace in transaction logs.
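
That last point is worth making concrete. Reading state out of a contract is a read-only call served by any RPC node: it costs no gas and writes nothing to the chain. Below is a hedged sketch using web3.py, where the RPC URL, contract address, and payload function are illustrative placeholders rather than a real attacker contract:

```python
# Fetching data hidden in a smart contract leaves no on-chain trace,
# because a read-only eth_call is not a transaction.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # any public RPC node

contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder
    abi=[{
        "name": "payload", "type": "function", "stateMutability": "view",
        "inputs": [], "outputs": [{"type": "string"}],
    }],
)

hidden = contract.functions.payload().call()  # no gas, no transaction record
```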


How the Attacks Unfold

Google researchers observed that hackers often begin their campaigns with social engineering tactics targeting software developers. Pretending to be recruiters, they send job offers that require the victims to complete “technical tasks.” The provided test files secretly install the initial stage of malware.

Once the system is compromised, additional malicious components are fetched directly from smart contracts stored on Ethereum or BNB Smart Chain. This multi-layered strategy enables attackers to modify or update their payloads anytime without being detected by conventional cybersecurity tools.

Among the identified actors, UNC5342, a North Korea-linked hacking collective, uses a downloader called JadeSnow to pull secondary payloads hidden within blockchain contracts. In several incidents, the group switched between Ethereum and BNB Smart Chain mid-operation; a move possibly motivated by lower transaction fees or operational segmentation. Another financially driven group, UNC5142, has reportedly adopted the same approach, signaling a broader trend among sophisticated threat actors.


The findings underscore how cybercriminals are reimagining blockchain’s purpose. A tool built for transparency and trust is being reshaped into an indestructible infrastructure for malware delivery.

Analysts also note that North Korea’s cyber operations have become more advanced in recent years. Blockchain research firm Elliptic estimated earlier this month that North Korean-linked hackers have collectively stolen over $2 billion in digital assets since early 2025.

Security experts warn that as blockchain adoption expands, defenders must develop new strategies to monitor and counter such decentralized threats. Traditional takedown mechanisms will no longer suffice when malicious data resides within a public, unchangeable ledger.



Incognito Mode Is Not Private, Use These Instead


Incognito (private) mode is a popular privacy feature in web browsers. Users may assume that Incognito mode guarantees privacy while surfing the web, letting them browse without restrictions, and that everything disappears when the tab is closed.

With no sign of browsing history in Incognito mode, you may believe you are safe. However, this is not entirely accurate, as Incognito has its drawbacks and doesn’t guarantee private browsing. But this doesn’t mean that the feature is useless. 

What Incognito mode does

Private browsing mode is designed to keep your local browsing history secret. When a user opens an incognito window, the browser starts a separate session and temporarily stores session data such as history and cookies. Once the private session is closed, that temporary information is deleted and does not appear in your browsing history.

What Incognito mode can’t do

Incognito mode helps keep your browsing data hidden from other people who use your device.

A common misconception among users is that it makes them invisible on the internet and hides everything they browse online. But that is not true.

Why Incognito mode doesn't guarantee privacy

1. It doesn’t hide user activity from the Internet Service Provider (ISP)

Every request you send travels over your ISP's network. Your ISP can monitor your activity, see the domains you visit, and inspect any unencrypted traffic; encrypted DNS providers are only a partial exception, hiding the lookups but not the connections themselves. If you are on a corporate Wi-Fi network, your network admin can likewise see the websites you visit.
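
For illustration, here is what the encrypted-DNS exception looks like in practice: a DNS-over-HTTPS lookup against Cloudflare's public resolver, so the name resolution itself is hidden from the local network (the subsequent connection to the returned IP is still visible):

```python
# Resolve a hostname over DNS-over-HTTPS instead of plain UDP port 53.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=10,
)
print(resp.json()["Answer"])   # A records, fetched without a plaintext query
```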

2. Incognito mode doesn’t stop websites from tracking users

When you are using Incognito, cookies are deleted, but websites can still track your online activity via device and browser fingerprinting. Sites build user profiles from unique device characteristics such as screen resolution and installed extensions.

3. Incognito mode doesn’t hide your IP address

If you are blocked from a website, using Incognito mode won’t make it accessible, because it can’t change your IP address.

Should you use Incognito mode?

Incognito mode may give a false sense of security, but it doesn’t ensure privacy. It is mainly helpful on shared devices.

What can you use?

There are other options to protect your online privacy, such as:

  1. Using a virtual private network (VPN)
  2. Privacy-focused browsers: browsers such as Tor are designed to block trackers, ads, and fingerprinting by default.
  3. Using private search engines: Instead of Google and Bing, you can use private search engines such as DuckDuckGo and Startpage.

Social Event App Partiful Did Not Collect GPS Locations from Photos

 

Social event planning app Partiful, also known as "Facebook events for hot people," has replaced Facebook as the go-to place for sending party invites. However, like Facebook, Partiful also collects user data. 

Hosts can create online invitations in a retro style, which lets users RSVP to events easily. The platform strives to be user-friendly and trendy, which has made it the No. 9 app on Apple's App Store; Google called it "the best app" of 2024. 

About Partiful

Partiful has recently developed into a Facebook-like social graph: it maps your friends and friends of friends, what you do, where you go, and your contact numbers. When the app became popular, people began questioning its origins, alleging that it was built by former employees of a data-mining company. TechCrunch, however, found that the app was not storing any location data from user-uploaded images, which include public profile pictures. 

Metadata in photos

The photos on your phone carry metadata, such as file size and date of capture. Metadata can also include the type of camera used, its settings, and latitude/longitude coordinates. TechCrunch discovered that anyone could use a web browser's developer tools to access raw user profile photos from Partiful’s back-end database on Google Firebase. 
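
To see the kind of metadata at stake, here is a small sketch that reads GPS coordinates out of a photo's EXIF block with Pillow; "party.jpg" is a placeholder filename:

```python
# Print any GPS tags embedded in a photo's EXIF metadata.
from PIL import ExifTags, Image

img = Image.open("party.jpg")
gps = img.getexif().get_ifd(0x8825)   # 0x8825 is the GPSInfo IFD; empty if none

for tag_id, value in gps.items():
    print(ExifTags.GPSTAGS.get(tag_id, tag_id), value)
# e.g. GPSLatitude (40.0, 26.0, 46.3) -- enough to pin down a home address
```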

About the bug

The flaw could have been serious, as it could expose the location where a Partiful user's profile photo was taken.

According to TechCrunch, “Some Partiful user profile photos contained highly granular location data that could be used to identify the person’s home or work, particularly in rural areas where individual homes are easier to distinguish on a map.”

It is a common norm for companies that host user photos and videos, such as Partiful, to automatically remove metadata on upload to prevent exactly this kind of privacy issue.

Gemini in Chrome: Google Can Now Track Your Phone

Is the Gemini browser collecting user data?

In a new warning for its 2 billion Chrome users, Google has announced that the browser will start collecting “sensitive data” on smartphones. “Starting today, we’re rolling out Gemini in Chrome,” Google said, calling it the “biggest upgrade to Chrome in its history.” The data collected can include device ID, username, location, search history, and browsing history. 

Agentic AI and browsers

Surfshark investigated the privacy practices of AI browsers after Google’s announcement and found that if you use Chrome with Gemini on your smartphone, Google can collect 24 types of data. According to Surfshark, that is more than any other agentic AI browser it analyzed. 

For instance, Microsoft’s Edge browser, which has Copilot, only collects half the data compared to Chrome and Gemini. Even Brave, Opera, and Perplexity collect less data. With the Gemini-in-Chrome extension, however, users should be more careful. 

Now that AI is everywhere, a lot of browsers like Firefox, Chrome, and Edge allow users to integrate agentic AI extensions. Although these tools are handy, relying on them can expose your privacy and personal data to third-party companies.

There have been incidents recently where data harvesting resulted from browser extensions, even those downloaded from official stores. 

The new data collection warning comes at the same time as the Gemini upgrade this month, called “Nano Banana.” This new update will also feed on user data. 

According to Android Authority, “Google may be working on bringing Nano Banana, Gemini’s popular image editing tool, to Google Photos. We’ve uncovered a GIF for a new ‘Create’ feature in the Google Photos app, suggesting it’ll use Nano Banana inside the app. It’s unclear when the feature will roll out.”

AI browser concerns

Experts have warned that every photo you upload carries a biometric fingerprint, consisting of your micro-expressions, unique facial geometry, and body proportions. Related data collection can extend to device fingerprinting, behavioural biometrics, social network mapping, and GPS coordinates.

Besides this, Apple’s Safari now enables anti-fingerprinting technology by default in iOS 26. However, the protection only works inside Safari itself; if you use Chrome on an Apple device, you don’t get it. That is another reason Apple advises users to use Safari rather than Chrome. 

Google Messages Adds QR Code Verification to Prevent Impersonation Scams

 

Google is preparing to roll out a new security feature in its Messages app that adds another layer of protection against impersonation scams. The update, now available in beta, introduces a QR code system to verify whether the person you are chatting with is using a legitimate device. The move is part of Google’s broader effort to strengthen end-to-end encryption and make it easier for users to confirm the authenticity of their contacts.  

Previously, Google Messages allowed users to verify encryption by exchanging and manually comparing an 80-digit code. While effective, the process was cumbersome and rarely used by everyday users. The new QR code option simplifies this verification method by allowing contacts to scan each other’s codes directly. Once scanned, Google can confirm the identity of the devices involved in the conversation and alert users if suspicious or unauthorized activity is detected. This makes it harder for attackers to impersonate contacts or intercept conversations unnoticed. 

According to reports, the feature will be available on devices running Android 9 and higher later this year. For those enrolled in the beta program, it can already be found within the Google Messages app. Users can access it by opening a conversation, tapping on the contact’s name, and navigating to the “End-to-end encryption” section under the details menu. Within that menu, the “Verify encryption” option now provides two methods: manually comparing the 80-digit code or scanning a QR code. 
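
Google has not published the cryptographic details, but the general idea behind such verification codes is well established. The sketch below derives a shared 80-digit fingerprint from both parties' public keys in the style of Signal's safety numbers; the key values and derivation are illustrative assumptions, not Google's actual scheme:

```python
# Both parties derive the same digits from the pair of public keys; comparing
# them (manually or via a QR scan) proves each device holds the expected key.
import hashlib

def safety_number(key_a: bytes, key_b: bytes, digits: int = 80) -> str:
    material = b"".join(sorted([key_a, key_b]))   # order-independent input
    out = ""
    while len(out) < digits:
        material = hashlib.sha256(material).digest()
        out += "".join(str(b % 10) for b in material)
    return out[:digits]

alice_view = safety_number(b"alice-pubkey", b"bob-pubkey")
bob_view = safety_number(b"bob-pubkey", b"alice-pubkey")
assert alice_view == bob_view   # both users see the same 80-digit code
print(alice_view)
```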

To complete the process, both participants must scan each other’s codes, after which the devices are marked as verified. Though integration with the “Connected apps” section in the Contacts app has been hinted at, this functionality has not yet gone live. The addition of QR-based verification comes as part of a larger wave of updates designed to modernize and secure Google Messages. Recently, Google introduced a “Delete for everyone” option, giving users more control over sent messages. 

The company also launched a sensitive content warning system and an unsubscribe button to block unwanted spam, following its announcement in October of last year about bolstering protections against abusive messaging practices. With growing concerns about phishing, identity theft, and messaging fraud, the QR code feature provides a more user-friendly safeguard. By reducing friction in the verification process, Google increases the likelihood that more people will adopt it as part of their everyday communication. 

While there is no official release date, the company is expected to roll out this security enhancement before the end of the year, continuing its push to position Google Messages as a secure and competitive alternative in the messaging app market.

Google to Confirm Identity of Every Android App Developer

Google announced a new step to make Android apps safer: starting next year, developers who distribute apps to certified Android phones and tablets, even outside Google Play, will need to verify their legal identity. The change ties every app on certified devices to a named developer account, while keeping Android’s ability to run apps from other stores or direct downloads intact. 

What this means for everyday users and small developers is straightforward. If you download an app from a website or a third-party store, the app will now be linked to a developer who has provided a legal name, address, email and phone number. Google says hobbyists and students will have a lighter account option, but many independent creators may choose to register as a business to protect personal privacy. Certified devices are the ones that ship with Google services and pass Google’s compatibility tests; devices that do not include Google Play services may follow different rules. 

Google’s stated reason is security. The company reported that apps installed from the open internet are far more likely to contain malware than apps on the Play Store, and it says those risks come mainly from people hiding behind anonymous developer identities. By requiring identity verification, Google intends to make it harder for repeat offenders to publish harmful apps and to make malicious actors easier to track. 

The rollout is phased so developers and device makers can prepare. Early access invitations begin in October 2025, verification opens to all developers in March 2026, and the rules take effect for certified devices in Brazil, Indonesia, Singapore and Thailand in September 2026. Google plans a wider global rollout in 2027. If you are a developer, review Google’s new developer pages and plan to verify your account well before your target markets enforce the rule. 

A similar compliance pattern already exists in some places. For example, Apple requires developers who distribute apps in the European Union to provide a “trader status” and contact details to meet the EU Digital Services Act. These kinds of rules aim to increase accountability, but they also raise questions about privacy, the costs for small creators, and how “open” mobile platforms should remain. Both companies are moving toward tighter oversight of app distribution, with the goal of making digital marketplaces safer and more accountable.

This change marks one of the most significant shifts in Android’s open ecosystem. While users will still have the freedom to install apps from multiple sources, developers will now be held accountable for the software they release. For users, it could mean greater protection against scams and malicious apps. For developers, especially smaller ones, it signals a new balance between maintaining privacy and ensuring trust in the Android platform.


How ChatGPT prompt can allow cybercriminals to steal your Google Drive data


Chatbots and other AI tools have made life easier for threat actors. A recent incident highlighted how ChatGPT can be exploited to obtain API keys and other sensitive data from cloud platforms.

Prompt injection attack leads to cloud access

Experts have discovered a new prompt injection attack that can turn ChatGPT into a hacker’s best friend for data theft. Known as AgentFlayer, the exploit hides “secret” prompt instructions inside a single document that targets OpenAI’s chatbot. An attacker can share the seemingly harmless document with victims through Google Drive, and the victim never has to click anything.

Zero-click threat: AgentFlayer

AgentFlayer is a “zero-click” threat: it abuses a vulnerability in Connectors, a ChatGPT feature that links the assistant to other applications, websites, and services. According to OpenAI, Connectors supports some of the world’s most widely used platforms, including cloud storage services such as Microsoft OneDrive and Google Drive.

Experts used Google Drive to demonstrate the threats that chatbots and hidden prompts can pose.

Google Doc used to inject the prompt

The malicious document carries a hidden 300-word prompt. The text is set at size one and formatted in white, hiding it from human readers while leaving it fully legible to the chatbot.
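
As a purely defensive illustration (this is not Zenity’s tooling), the sketch below scans a document for exactly this hiding trick, assuming the shared file has been exported as .docx; the filename is a placeholder:

```python
# pip install python-docx
# Defensive sketch: flag text runs that are tiny and/or white-on-white,
# the hiding technique the AgentFlayer demonstration relies on.
from docx import Document
from docx.enum.dml import MSO_COLOR_TYPE
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)
TINY_PT = 2  # text at or below this point size is effectively invisible

def find_hidden_runs(path: str) -> list[str]:
    hidden = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            too_small = run.font.size is not None and run.font.size.pt <= TINY_PT
            is_white = (
                run.font.color.type == MSO_COLOR_TYPE.RGB
                and run.font.color.rgb == WHITE
            )
            if (too_small or is_white) and run.text.strip():
                hidden.append(run.text)
    return hidden

for fragment in find_hidden_runs("shared_document.docx"):
    print("suspicious hidden text:", fragment[:80])
```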

The prompt used to showcase AgentFlayer instructs ChatGPT to search the victim’s Google Drive for API keys and append them to a tailored URL pointing at an external server. The attack is armed as soon as the malicious document is shared: the next time the target uses ChatGPT with the Connectors feature enabled, the hidden API keys are delivered to the threat actor.

Other cloud platforms at risk too

AgentFlayer is not a bug that affects only Google’s cloud storage. “As with any indirect prompt injection attack, we need a way into the LLM's context. And luckily for us, people upload untrusted documents into their ChatGPT all the time. This is usually done to summarize files or data, or leverage the LLM to ask specific questions about the document’s content instead of parsing through the entire thing by themselves,” said Tamir Ishay Sharbat, a researcher at Zenity Labs.

“OpenAI is already aware of the vulnerability and has mitigations in place. But unfortunately, these mitigations aren’t enough. Even safe-looking URLs can be used for malicious purposes. If a URL is considered safe, you can be sure an attacker will find a creative way to take advantage of it,” Zenity Labs said in its report.

Tech Giant Google Introduces an Open-Source AI Agent to Automate Coding Activities

Google has launched Gemini CLI GitHub Actions, an open-source AI agent that automates routine coding tasks directly within GitHub repositories. This tool, now in beta and available globally, acts as an AI coding teammate that works both autonomously and on-demand to handle repetitive development workflows.

Key features

Gemini CLI GitHub Actions is triggered by repository events such as new issues or pull requests, working asynchronously to triage problems, review code, and assist developers. Developers can also interact with the agent directly by tagging @gemini-cli in issues or pull requests and assigning specific tasks such as writing tests, implementing changes, or fixing bugs; a sketch of such a request follows the list below. The tool ships with three default intelligent workflows:

  • Issue triage and auto-labeling 
  • Accelerated pull request reviews
  • On-demand collaboration for targeted task delegation 
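
Because tagging the agent is just an ordinary GitHub comment, a task can even be delegated programmatically through GitHub’s REST API. The sketch below is purely illustrative: the repository, issue number, and token are placeholders, and it assumes Gemini CLI GitHub Actions is already set up in the target repository:

```python
# pip install requests
import os

import requests

# Hypothetical repository coordinates -- replace with your own.
OWNER, REPO, ISSUE = "example-org", "example-repo", 42

def ask_gemini_cli(task: str) -> None:
    """Post an issue comment tagging @gemini-cli, delegating a task to the agent."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/issues/{ISSUE}/comments"
    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": f"@gemini-cli {task}"},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    ask_gemini_cli("please write unit tests for the new parser module")
```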

Built on Google's earlier Gemini CLI tool, the GitHub Actions version extends AI assistance from individual terminals to collaborative team environments. The agent provides powerful AI capabilities including code understanding, file manipulation, command execution, and dynamic troubleshooting. 

For individual developers, Google offers generous free usage limits of 60 model requests per minute and 1,000 requests per day at no charge when using a personal Google account. The tool integrates with Google's Gemini Code Assist, giving developers access to Gemini 2.5 Pro and its massive 1 million token context window.

The platform prioritizes security with credential-less authentication through Google Cloud's Workload Identity Federation, eliminating the need for long-lived API keys. Additional security measures include granular control with command allowlisting and complete transparency through OpenTelemetry integration for real-time monitoring.

Market positioning 

This launch represents Google's broader push into open-source AI development tools, positioning the agent as a direct competitor to GitHub Copilot and other AI coding assistants. Unlike traditional coding assistants that primarily suggest code, Gemini CLI GitHub Actions actively automates core developer workflows and can push commits autonomously.

The tool is part of Google's wider ecosystem of AI agents, following the company's release of other tools like the Agent2Agent Protocol for inter-agent communication and the "Big Sleep" security vulnerability detection system. 

The developer community has also responded enthusiastically to these releases, with thousands of developers already using the tools during their beta phases. During the testing period of Jules, Google’s asynchronous coding agent, users completed tens of thousands of tasks and contributed over 140,000 public code improvements, demonstrating the practical value and adoption potential of these AI-powered development tools.