

Gmail Credentials Appear in Massive 183 Million Infostealer Data Leak, but Google Confirms No New Breach




A vast cache of 183 million email addresses and passwords has surfaced in the Have I Been Pwned (HIBP) database, raising concern among Gmail users and prompting Google to issue an official clarification. The newly indexed dataset stems from infostealer malware logs and credential-stuffing lists collected over time, rather than a fresh attack targeting Gmail or any other single provider.


The Origin of the Dataset

The large collection, analyzed by HIBP founder Troy Hunt, contains records captured by infostealer malware that had been active for nearly a year. The data, supplied by Synthient, amounted to roughly 3.5 terabytes, comprising nearly 23 billion rows of stolen information. Each entry typically includes a website name, an email address, and its corresponding password, exposing a wide range of online accounts across various platforms.

Synthient’s Benjamin Brundage explained that this compilation was drawn from continuous monitoring of underground marketplaces and malware operations. The dataset, referred to as the “Synthient threat data,” was later forwarded to HIBP for indexing and public awareness.


How Much of the Data Is New

Upon analysis, Hunt discovered that most of the credentials had appeared in previous breaches. In a 94,000-record sample, about 92 percent matched older data, while roughly 8 percent was new. Across the full dataset, that translates to over 16 million email addresses that had not appeared in any known breach or stealer log before.

To test authenticity, Hunt contacted several users whose credentials appeared in the sample. One respondent verified that the password listed alongside their Gmail address was indeed correct, confirming that the dataset contained legitimate credentials rather than fabricated or corrupted data.


Gmail Accounts Included, but No Evidence of a Gmail Hack

The inclusion of Gmail addresses led some reports to suggest that Gmail itself had been breached. However, Google has publicly refuted these claims, stating that no new compromise has taken place. According to Google, the reports stem from a misunderstanding of how infostealer databases operate: they simply aggregate previously stolen credentials from many different malware incidents rather than reflecting a new intrusion into Gmail systems.

Google emphasized that Gmail’s security systems remain robust and that users are protected through ongoing monitoring and proactive account protection measures. The company said it routinely detects large credential dumps and initiates password resets to protect affected accounts.

In a statement, Google advised users to adopt stronger account protection measures: “Reports of a Gmail breach are false. Infostealer databases gather credentials from across the web, not from a targeted Gmail attack. Users can enhance their safety by enabling two-step verification and adopting passkeys as a secure alternative to passwords.”


What Users Should Do

Experts recommend that individuals check their accounts on Have I Been Pwned to determine whether their credentials appear in this dataset. Users are also advised to enable multi-factor authentication, switch to passkeys, and avoid reusing passwords across multiple accounts.
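Part of that check can be automated. HIBP's Pwned Passwords API is free and uses k-anonymity, so only the first five characters of a password's SHA-1 hash ever leave your machine; matching happens locally. (Searching by email address, by contrast, requires an API key.) A minimal sketch in Python, with a hypothetical User-Agent string:

```python
import hashlib
import urllib.request

def password_pwned_count(password: str) -> int:
    """Return how many times a password appears in HIBP's Pwned Passwords.

    Only the first five hex characters of the SHA-1 hash are sent to the
    API (k-anonymity); the suffix comparison happens locally.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},  # illustrative UA
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():       # each line: "HASH_SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    n = password_pwned_count("password123")
    print(f"Seen {n:,} times in known breaches" if n else "Not found")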

Gmail users can also use Google's built-in Password Manager to identify weak or compromised passwords. The Password Checkup feature, accessible from Chrome's settings, flags reused or exposed credentials and prompts immediate password changes.

If an account cannot be accessed, users should proceed to Google’s account recovery page and follow the verification steps provided. Google also reminded users that it automatically requests password resets when it detects exposure in large credential leaks.


The Broader Security Implications

Cybersecurity professionals stress that while this incident does not involve a new system breach, it reinforces the ongoing threat posed by infostealer malware and poor password hygiene. Sachin Jade, Chief Product Officer at Cyware, highlighted that credential monitoring has become a vital part of any mature cybersecurity strategy. He explained that although this dataset results from older breaches, “credential-based attacks remain one of the leading causes of data compromise.”

Jade further noted that organizations should integrate credential monitoring into their broader risk management frameworks. This helps security teams prioritize response strategies, enforce adaptive authentication, and limit lateral movement by attackers using stolen passwords.

Ultimately, this collection of 183 million credentials serves as a reminder that password leaks, whether new or recycled, continue to feed cybercriminal activity. Continuous vigilance, proactive password management, and layered security practices remain the strongest defenses against such risks.


IPv6: The Future of the Internet That’s Quietly Already Here

 

IPv6 was once envisioned as the next great leap for the internet — a future-proof upgrade designed to solve IP address shortages, simplify networks, and make online connections faster and more secure. Yet, decades later, most of the world still runs on IPv4. So, what really happened?

Every device that connects to the internet needs a unique identifier known as an IP address — essentially a digital address that tells other devices where to send data. When you visit a website, your device sends a request to a server’s IP address, and that server sends information back to yours. Without IP addresses, the web simply wouldn’t function.

“IP” stands for Internet Protocol, and the numbers “v4” and “v6” refer to its versions. IPv4 has been the backbone of the internet since its early days, but it’s running out of space. The IPv4 system uses a 32-bit structure, allowing for about 4.3 billion unique addresses — a number that seemed vast in the 1980s but quickly fell short as smartphones, laptops, routers, and even smart home devices came online.

To solve this, IPv6 was introduced. Using a 128-bit system, IPv6 offers an almost limitless supply of addresses — around 340 undecillion (that’s 340 followed by 36 zeros). But IPv6 wasn’t just about more numbers. It also brought smarter features like automatic device configuration, stronger encryption, and improved routing efficiency.
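The scale gap follows directly from the bit widths and is easy to verify with Python's standard ipaddress module; the sample addresses below are drawn from the reserved documentation ranges:

```python
import ipaddress

# The address space follows directly from the bit width of each version.
ipv4_total = 2 ** 32     # 4,294,967,296 -- about 4.3 billion
ipv6_total = 2 ** 128    # ~3.4 x 10**38 -- the "340 undecillion" figure
print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:,}")
print(f"IPv6 addresses per IPv4 address: {ipv6_total // ipv4_total:,}")

# The standard library parses both versions transparently.
print(ipaddress.ip_address("203.0.113.7").version)   # 4
print(ipaddress.ip_address("2001:db8::7").version)   # 6
```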

However, switching from IPv4 to IPv6 was far from simple. The two systems can’t communicate directly, meaning the entire internet — from ISPs to data centers — needed new configurations and equipment capable of handling both versions in a setup called “dual-stack networking.” This approach allowed both protocols to run side by side, but it was costly and complex to implement.
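On the client side, dual-stack behavior boils down to asking DNS for both AAAA (IPv6) and A (IPv4) records and trying whatever comes back. The sketch below is a simplified illustration of that idea; real browsers use the more elaborate "Happy Eyeballs" approach, racing both families in parallel rather than trying them one by one:

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try every address the resolver returns (IPv6 and IPv4)
    until one TCP connection succeeds."""
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(5)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError(f"no addresses for {host}")

conn = connect_dual_stack("example.com", 80)
print("Connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
conn.close()
```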

Many organizations hesitated to upgrade. Since IPv4 still worked, and technologies like NAT (Network Address Translation) allowed multiple devices to share one address, there wasn’t an immediate need to move away from it. For years, IPv6 adoption remained stuck in the single digits.

That is slowly changing. Today, according to Google’s IPv6 adoption statistics, nearly 45% of global users connect over IPv6, especially through mobile networks. Major companies like Google, Meta, Microsoft, Netflix, and Cloudflare have been IPv6-ready for years, meaning most users are already on it, often without realizing it.

In fact, IPv6 is quietly running in the background for most people. Modern smartphones, broadband connections, and cloud services already use it, managed automatically by routers and ISPs. Thanks to dual-stack networking, users rarely notice which version they’re on.

This has created a unique moment: we’re living in two internets at once. IPv4 continues to support older systems and legacy infrastructure, while IPv6 powers the growing digital world — silently, efficiently, and almost invisibly. IPv4 isn’t disappearing anytime soon, but IPv6 isn’t a failure either. It’s the quiet successor, steadily ensuring the internet keeps expanding for the billions of new devices still to come.

Connected Car Privacy Risks: How Modern Vehicles Secretly Track and Sell Driver Data

 

The thrill of a smooth drive—the roar of the engine, the grip of the tires, and the comfort of a high-end cabin—often hides a quieter, more unsettling reality. Modern cars are no longer just machines; they’re data-collecting devices on wheels. While you enjoy the luxury and performance, your vehicle’s sensors silently record your weight, listen through cabin microphones, track your every route, and log detailed driving behavior. This constant surveillance has turned cars into one of the most privacy-invasive consumer products ever made. 

The Mozilla Foundation recently reviewed 25 major car brands and declared that modern vehicles are “the worst product category we have ever reviewed for privacy.” Not a single automaker met even basic standards for protecting user data. The organization found that cars collect massive amounts of information—from location and driving patterns to biometric data—often without explicit user consent or transparency about where that data ends up. 

The Federal Trade Commission (FTC) has already taken notice. The agency recently pursued General Motors (GM) and its subsidiary OnStar for collecting and selling drivers’ precise location and behavioral data without obtaining clear consent. Investigations revealed that data from vehicles could be gathered as frequently as every three seconds, offering an extraordinarily detailed picture of a driver’s habits, destinations, and lifestyle. 

That information doesn’t stay within the automaker’s servers. Instead, it’s often shared or sold to data brokers, insurers, and marketing agencies. Driver behavior, acceleration patterns, late-night trips, or frequent stops at specific locations could be used to adjust insurance premiums, evaluate credit risk, or profile consumers in ways few drivers fully understand. 

Inside the car, the illusion of comfort and control masks a network of tracking systems. Voice assistants that adjust your seat or temperature remember your commands. Smartphone apps that unlock the vehicle transmit telemetry data back to corporate servers. Even infotainment systems and microphones quietly collect information that could identify you and your routines. The same technology that powers convenience features also enables invasive data collection at an unprecedented scale. 

For consumers, awareness is the first defense. Before buying a new vehicle, it’s worth asking the dealer what kind of data the car collects and how it’s used. If they cannot answer directly, it’s a strong indication of a lack of transparency. After purchase, disabling unnecessary connectivity or data-sharing features can help protect privacy. Declining participation in “driver score” programs or telematics-based insurance offerings is another step toward reclaiming control. 

As automakers continue to blend luxury with technology, the line between innovation and intrusion grows thinner. Every drive leaves behind a digital footprint that tells a story—where you live, work, shop, and even who rides with you. The true cost of modern convenience isn’t just monetary—it’s the surrender of privacy. The quiet hum of the engine as you pull into your driveway should represent freedom, not another connection to a data-hungry network.

Is ChatGPT's Atlas Browser the Future of Internet?


After using ChatGPT Atlas, OpenAI's new web browser, users may notice a few issues. It is nothing like Google Chrome, the browser roughly 60% of internet users rely on: Atlas is built around a chatbot that you converse with in order to browse the internet.

One notice read, "Messages limit reached." Another stated, "No models that are currently available support the tools in use."

Then came a third: "You've hit the free plan limit for GPT-5."

Paid browser 

According to OpenAI, Atlas will simplify and improve internet usage: one more step toward becoming "a true super-assistant." Super or not, assistants are not free, and the company must start generating significantly more revenue from its 800 million users.

According to OpenAI, Atlas allows us to "rethink what it means to use the web". At first glance it looks comparable to Chrome or Apple's Safari, with one major exception: a sidebar chatbot. These are early days, but there is potential for significant changes in how we use the internet. What is certain is that this will be a premium product that only functions fully if you pay a monthly subscription. Given how accustomed we are to free internet access, many people would have to drastically change their habits.

Competitors, data, and money

The founding objective of OpenAI was to achieve artificial general intelligence (AGI), which roughly translates to AI that can match human intelligence. So how does a browser assist with that mission? It doesn't, directly. However, it has the potential to increase revenue. The company has persuaded venture capitalists and investors to pour billions of dollars into it, and it must now demonstrate a return on that investment; in other words, it needs to generate revenue. Funding the business through conventional internet advertising may be tricky, though. Atlas might also grant the company access to a large amount of user data.

The ultimate goal of these AI systems is scale: the more data you feed them, the better they become. The web is built for humans to use, so if Atlas can observe how we book train tickets, for example, it can learn to navigate those processes better.

Will it kill Google?

Then there is the competition. Google Chrome is so prevalent that regulators around the world are raising their eyebrows and using terms like "monopoly" to describe it. Breaking into that market will not be easy.

Google's Gemini AI is now integrated into its search engine, and Microsoft has added Copilot to its Edge browser. Some called ChatGPT the "Google killer" in its early days, predicting that it would render online search as we know it obsolete. Whether enough people are prepared to pay for that added convenience remains to be seen, and there is still a long way to go before Google is dethroned.

Dublin Airport Data Breach Exposes 3.8 Million Passengers

 

Dublin Airport has confirmed a significant data breach affecting potentially 3.8 million passengers who traveled through the Irish facility during August 2025, following a cyberattack on aviation technology supplier Collins Aerospace. The breach compromised boarding pass data for all flights departing Dublin Airport from August 1-31, 2025, a period during which the airport processed over 3.7 million passengers across more than 110,000 daily passenger movements.

The Dublin Airport Authority (DAA), which operates both Dublin and Cork airports, first learned of the compromise on September 18, 2025, when Collins Aerospace notified them of a breach affecting its IT systems. By September 19, intelligence gathered by airport authorities confirmed that boarding pass information had been published online by a cybercriminal group. Cork Airport officials clarified that none of the compromised data relates to flights through their facility.

The exposed data includes passenger booking references, first and last names, frequent flyer numbers, contact information such as email addresses and phone numbers, and travel itineraries. Airlines including Swedish carrier SAS have sent notifications to affected passengers warning that other booking-related details may have been accessed. However, the breach did not involve passport information, payment card details, or other financial data.

The incident is directly linked to the devastating Collins Aerospace ransomware attack that crippled multiple European airports in September 2025. Collins Aerospace's MUSE (Multi-User System Environment) software, which powers check-in and boarding operations at approximately 170 airports globally, fell victim to HardBit ransomware on the night of September 19, 2025. Dublin Airport was particularly hard hit, with officials confirming they had to rebuild servers "from scratch" with no clear timeline for resolution.

Additionally, the Russia-linked Everest ransomware gang has claimed responsibility for a separate attack on Dublin Airport, threatening to leak more than 1.5 million records on the dark web unless the airport pays a ransom. The claimed data includes device information, workstation IDs, timestamps, departure dates and times, and barcode formats.

The DAA immediately reported the breach to multiple authorities on September 19, 2025, including the Data Protection Commission (DPC), Irish Aviation Authority, and National Cyber Security Centre. Graham Doyle, Deputy Commissioner at the Data Protection Commission, confirmed the agency is conducting a full investigation into the breach's scope and impact.

Security experts warn that the compromised information provides sufficient detail for sophisticated phishing campaigns, social engineering attacks, frequent flyer account takeover attempts, and identity theft operations targeting affected passengers.

Deepfakes Are More Polluting Than People Think

 


Artificial intelligence is blurring the line between imagination and reality, and a new digital controversy is unfolding as the boundaries of ethics and creativity in the digital realm grow ever more fluid.

With the advent of advanced artificial intelligence platforms such as OpenAI's Sora, deepfake videos have flooded social media feeds with astoundingly lifelike representations of celebrities and historical figures, resurrected in scenes that at times appear sensational and at other times are deeply offensive.

The phenomenon has caused widespread concern among the families of revered personalities such as Dr Martin Luther King Jr, who have publicly urged technology companies to put stronger safeguards in place to prevent the unauthorised use of their loved ones' likenesses.

However, as the debate over the ethical boundaries of synthetic media intensifies, a hidden aspect of the issue is quietly surfacing: the environmental cost of producing it.

Creating these hyperrealistic videos requires a great deal of computational power, as Dr Kevin Grecksch, a lecturer at the University of Oxford, explains, along with substantial amounts of energy and water to run the cooling systems inside data centres. What appears to be a fleeting piece of digital art carries a significant environmental cost, adding an unexpected layer to concerns surrounding the digital revolution.


Deepfakes, a rapidly advancing form of synthetic media, are one of the most striking examples of how artificial intelligence is reshaping digital communication. By combining deep learning algorithms with massive datasets, these technologies can convincingly replace or manipulate faces, voices, and even gestures.

One person's likeness can be seamlessly merged with another's. Closely related are shallow fakes, which are less technologically sophisticated but equally consequential: they rely on simple editing techniques to distort reality, further blurring the line between authenticity and fabrication. Meanwhile, the proliferation of deepfakes has accelerated at an unprecedented pace over the past few years.

One report suggests that the number of such videos circulating online doubles every six months. An estimated 500,000 deepfake videos and audio clips were shared globally in 2023, and if current trends hold, that number is expected to reach almost 8 million by 2025. Experts attribute this explosive growth to the wide availability of advanced artificial intelligence tools and the sheer volume of publicly available data, which together create an ideal environment for manipulated media to flourish.
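Those two figures are consistent with the stated rate: two years at one doubling every six months is four doublings. A quick arithmetic check:

```python
# Doubling every six months: four doublings between 2023 and 2025.
count_2023 = 500_000
projection_2025 = count_2023 * 2 ** 4
print(f"{projection_2025:,}")   # 8,000,000 -- the ~8 million figure cited
```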

The rise of deepfake technology has sparked intense debate in legal and policy circles, underscoring the urgency of redefining the boundaries of accountability in an era of pervasive synthetic media. Hyper-realistic digital forgeries created by advanced deep learning algorithms pose a complex challenge that goes well beyond technological novelty.

Legal scholars have warned that deepfakes pose a significant threat to privacy, intellectual property, and dignity, while also undermining public trust in information. A growing body of evidence suggests these fabrications can mislead, influence electoral outcomes, and facilitate non-consensual pornography, all of which carry severe social, ethical, and legal consequences.

In an effort to contain the threat, the European Union has enacted legislation such as the Artificial Intelligence Act and the Digital Services Act, which aim to hold large online platforms accountable and establish standards for AI governance. Even so, experts contend that such initiatives remain insufficient, lacking comprehensive definitions, enforcement mechanisms, and protocols for assisting victims.

The situation is compounded by a fragmented international approach: although many US states have enacted laws addressing synthetic media, inconsistencies persist across jurisdictions, and countries like Canada continue to struggle with regulating deepfake pornography and other forms of non-consensual synthetic media.

Social media has become an increasingly powerful vector for spreading manipulated media, which amplifies these risks. Scholars have advocated sweeping reforms, ranging from stricter privacy laws and recalibrated free-speech protections to preemptive restrictions on deepfake generation, to mitigate harms such as identity theft and fraud that existing legal systems are ill-equipped to handle.

Ethical concerns are also emerging outside the policy arena, in unexpected contexts such as the use of deepfakes in grief therapy and entertainment, where the line between emotional comfort and manipulation becomes dangerously blurred.

Researchers calling for better detection and prevention frameworks are reaching a common conclusion: deepfakes must be regulated in a manner that strikes a delicate balance between innovation and protection, ensuring that technological advances do not come at the expense of truth, justice, or human dignity.


AI-powered video generation tools have become so popular that they have transformed online content creation, but they have also raised serious concerns about their environmental consequences. Data centres, the vast digital backbones that make such technologies possible, consume large quantities of electricity and fresh water to cool servers at scale.

Applications like OpenAI’s Sora have made it easy for users to create and share hyperrealistic videos quickly, and the surge of deepfake content on social media has helped such apps climb the global download charts. Sora passed one million downloads within just five days, cementing its position as the dominant app in the US Apple App Store.

Amid this surge of creative enthusiasm, however, a growing environmental dilemma has been identified by Dr Kevin Grecksch of the University of Oxford, who recently warned against ignoring the water and energy demands of AI infrastructure. He urged users and policymakers alike to recognise that digital innovation carries a significant ecological footprint, and that the water it consumes needs to be weighed carefully.

The "cat is out of the sack" with the adoption of artificial intelligence, he argues, but more integrated planning is imperative in deciding where data-centric systems are built and how they are cooled.

He also warned that although the government envisions South Oxfordshire as a potential hub for AI development, insufficient attention has been paid to the environmental logistics, particularly where the necessary water supply will come from. As enthusiasm for generative technologies continues to surge, experts insist that the conversation about AI's future must go beyond innovation and efficiency to encompass sustainability, resource management, and long-term environmental responsibility.

The future of artificial intelligence stands at a crossroads between innovation and accountability: it demands not only admiration for its brilliance but responsibility in how it is applied. Deepfake technology is a testament to human ingenuity, but it must be governed by ethics, regulation, and sustainability.

Policymakers, technology firms, and environmental authorities need to collaborate on frameworks that protect both digital integrity and natural resources. A safer, more transparent digital era will require renewable energy in data centres, stricter consent-based media laws, and sustained investment in deepfake detection systems.

AI promises a great deal, but its value lies in our capacity to control its outcomes, ensuring that in a world increasingly shaped by artificial intelligence, progress remains a force for truth, equity, and ecological balance.
