
Google Confirms Data Stolen from More Than 200 Companies


Google has confirmed that hackers stole data from more than 200 companies after exploiting apps developed by Gainsight, a customer success software provider. The breach targeted Salesforce systems and is being described as one of the biggest supply chain attacks in recent months.

What Happened

Salesforce said last week that “certain customers’ Salesforce data” had been accessed through Gainsight applications. These apps are widely used by companies to manage customer relationships. According to Google’s Threat Intelligence Group, over 200 Salesforce instances were affected.  

Who Is Behind the Attack

A group calling itself Scattered Lapsus$ Hunters, which includes members of the well-known ShinyHunters gang, has claimed responsibility. The gang has a history of targeting large firms and leaking stolen data online.  

The hackers have already published a list of alleged victims. Names include Atlassian, CrowdStrike, DocuSign, GitLab, LinkedIn, Malwarebytes, SonicWall, Thomson Reuters and Verizon. Some of these companies have denied being impacted, while others are still investigating.

What Next?

This incident is a stark illustration of the risks posed by third-party apps in enterprise ecosystems. By compromising Gainsight’s software, attackers were able to reach hundreds of companies at once.

According to TechCrunch, supply chain attacks are especially dangerous because they exploit trust in vendors. Once a trusted app is breached, it can open doors to sensitive data across multiple organisations.

Industry Response

Salesforce has said it is working with affected customers to secure systems. Gainsight has not yet issued a detailed statement. Google continues to track the attackers and assess the scale of the stolen data.  

Cybersecurity firms are advising companies to review their integrations, tighten access controls and monitor for suspicious activity. Analysts believe this breach will renew calls for stricter checks on third-party apps used in cloud platforms.

The Larger Picture

The attack comes at a time when businesses are increasingly dependent on SaaS platforms like Salesforce. As reliance on external apps grows, attackers are shifting their focus to these weak links, which means a single compromised vendor can expose many organisations simultaneously.

Hackers Use Look-Alike Domain Trick to Imitate Microsoft and Capture User Credentials

 




A new phishing operation is misleading users through an extremely subtle visual technique that alters the appearance of Microsoft’s domain name. Attackers have registered the look-alike address “rnicrosoft(.)com,” which replaces the single letter m with the characters r and n positioned closely together. The small difference is enough to trick many people into believing they are interacting with the legitimate site.

This method is a form of typosquatting where criminals depend on how modern screens display text. Email clients and browsers often place r and n so closely that the pair resembles an m, leading the human eye to automatically correct the mistake. The result is a domain that appears trustworthy at first glance although it has no association with the actual company.

Experts note that phishing messages built around this tactic often copy Microsoft’s familiar presentation style. Everything from symbols to formatting is imitated to encourage users to act without closely checking the URL. The campaign takes advantage of predictable reading patterns where the brain prioritizes recognition over detail, particularly when the user is scanning quickly.

The deception becomes stronger on mobile screens. Limited display space can hide the entire web address and the address bar may shorten or disguise the domain. Criminals use this opportunity to push malicious links, deliver invoices that look genuine, or impersonate internal departments such as HR teams. Once a victim believes the message is legitimate, they are more likely to follow the link or download a harmful attachment.

The “rn” substitution is only one example of a broader pattern. Typosquatting groups also replace the letter o with the number zero, add hyphens to create official-sounding variations, or register sites with different top level domains that resemble the original brand. All of these are intended to mislead users into entering passwords or sending sensitive information.

Security specialists advise users to verify every unexpected message before interacting with it. Expanding the full sender address exposes inconsistencies that the display name may hide. Checking links by hovering over them, or using long-press previews on mobile devices, can reveal whether the destination is legitimate. Reviewing email headers, especially the Reply-To field, can also uncover signs that responses are being redirected to an external mailbox controlled by attackers.
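
One of those checks, comparing the visible sender domain with the Reply-To domain, is easy to automate. The short Python sketch below uses only the standard library email module; the sample message and the bare domain comparison are illustrative assumptions rather than a complete phishing filter.

```python
# A minimal sketch: flag messages whose Reply-To domain differs from the
# visible From domain, one of the verification steps described above.
# The sample message and the simple domain comparison are illustrative only.
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw_message: str) -> bool:
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if not reply_addr:                      # no Reply-To header, nothing to compare
        return False
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain      # True when replies are redirected elsewhere

example = (
    "From: Microsoft Support <support@rnicrosoft.com>\n"
    "Reply-To: billing@attacker-mailbox.example\n"
    "Subject: Password reset required\n"
    "\n"
    "Please verify your account.\n"
)
print(reply_to_mismatch(example))  # True: replies would go to a different domain
```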

When an email claims that a password reset or account change is required, the safest approach is to ignore the provided link. Instead, users should manually open a new browser tab and visit the official website. Organisations are encouraged to conduct repeated security awareness exercises so employees do not react instinctively to familiar-looking alerts.


Below are common variations used in these attacks:

Letter Pairing: r and n are combined to imitate m as seen in rnicrosoft(.)com.

Number Replacement: the letter o is switched with the number zero in addresses like micros0ft(.)com.

Added Hyphens: attackers introduce hyphens to create domains that appear official, such as microsoft-support(.)com.

Domain Substitution: similar names are created by altering only the top level domain, for example microsoft(.)co.
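
To make these patterns concrete, here is a minimal Python sketch that normalises a domain label with a small substitution table and compares it against a known brand name. The substitution table, brand list, and allowlist are illustrative assumptions; a production checker would also handle Unicode homoglyphs and cover far more brands.

```python
# A minimal look-alike check for the typosquatting variations listed above.
# The substitution table, brand list, and allowlist are illustrative assumptions.
SUBSTITUTIONS = {"rn": "m", "0": "o", "1": "l", "vv": "w"}
KNOWN_BRANDS = {"microsoft"}
LEGITIMATE_DOMAINS = {"microsoft.com"}

def normalize(label: str) -> str:
    # Undo the common character swaps so "rnicrosoft" reads as "microsoft".
    for fake, real in SUBSTITUTIONS.items():
        label = label.replace(fake, real)
    return label

def suspicious(domain: str) -> bool:
    domain = domain.lower()
    if domain in LEGITIMATE_DOMAINS:
        return False                        # exact match with the real site
    label = domain.split(".")[0].replace("-", "")
    return any(brand in normalize(label) for brand in KNOWN_BRANDS)

for d in ["rnicrosoft.com", "micros0ft.com", "microsoft-support.com", "microsoft.co", "example.com"]:
    print(f"{d}: {'suspicious' if suspicious(d) else 'ok'}")
```

Running the sketch flags the first four domains and leaves example.com alone, mirroring the four variations above.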


This phishing strategy succeeds because it relies on human perception rather than technical flaws. Recognising these small changes and adopting consistent verification habits remain the most effective protections against such attacks.



North Korean APT Collaboration Signals Escalating Cyber Espionage and Financial Cybercrime

 

Security analysts have identified a new escalation in cyber operations linked to North Korea, as two of the country’s most well-known threat actors—Kimsuky and Lazarus—have begun coordinating attacks with unprecedented precision. A recent report from Trend Micro reveals that the collaboration merges Kimsuky’s extensive espionage methods with Lazarus’s advanced financial intrusion capabilities, creating a two-part operation designed to steal intelligence, exploit vulnerabilities, and extract funds at scale. 

Rather than operating independently, the two groups are now functioning as a complementary system. Kimsuky reportedly initiates most campaigns by collecting intelligence and identifying high-value victims through sophisticated phishing schemes. One notable 2024 campaign involved fraudulent invitations to a fake “Blockchain Security Symposium.” Attached to the email was a malicious Hangul Word Processor document embedded with FPSpy malware, which stealthily installed a keylogger called KLogEXE. This allowed operators to record keystrokes, steal credentials, and map internal systems for later exploitation. 

Once reconnaissance was complete, data collected by Kimsuky was funneled to Lazarus, which then executed the second phase of attacks. Investigators found Lazarus leveraged an unpatched Windows zero-day vulnerability, identified as CVE-2024-38193, to obtain full system privileges. The group distributed infected Node.js repositories posing as legitimate open-source tools to compromise server environments. With this access, the InvisibleFerret backdoor was deployed to extract cryptocurrency wallet contents and transactional logs. Advanced anti-analysis techniques, including Fudmodule, helped the malware avoid detection by enterprise security tools. Researchers estimate that within a 48-hour window, more than $30 million in digital assets were quietly stolen. 

Further digital forensic evidence reveals that both groups operated using shared command-and-control servers and identical infrastructure patterns previously observed in earlier North Korean cyberattacks, including the 2014 breach of a South Korean nuclear operator. This shared ecosystem suggests a formalized, state-aligned operational structure rather than ad-hoc collaboration.  

Threat activity has also expanded beyond finance and government entities. In early 2025, European energy providers received a series of targeted phishing attempts aimed at collecting operational power grid intelligence, signaling a concerning pivot toward critical infrastructure sectors. Experts believe this shift aligns with broader strategic motivations: bypassing sanctions, funding state programs, and positioning the regime to disrupt sensitive systems if geopolitical tensions escalate. 

Cybersecurity specialists advise organizations to strengthen resilience through aggressive patch management, multi-layered email security, secure cryptocurrency storage practices, and active monitoring for indicators of compromise such as unexpected execution of winlogon.exe or unauthorized access to blockchain-related directories. 
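
One of those indicators can be checked in a few lines. The sketch below assumes the third-party psutil package and the default Windows install path for winlogon.exe, and simply lists any process using that name but running from an unexpected location; real monitoring would feed an EDR or SIEM pipeline rather than print to the console.

```python
# A minimal sketch of one indicator-of-compromise check mentioned above:
# winlogon.exe launched from anywhere other than its standard location.
# Requires the third-party psutil package; the expected path assumes a
# default Windows installation.
import psutil

EXPECTED_PATH = r"c:\windows\system32\winlogon.exe"

def unexpected_winlogon():
    hits = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        name = (proc.info["name"] or "").lower()
        path = (proc.info["exe"] or "").lower()
        if name == "winlogon.exe" and path and path != EXPECTED_PATH:
            hits.append((proc.info["pid"], path))   # process masquerading as winlogon
    return hits

if __name__ == "__main__":
    for pid, path in unexpected_winlogon():
        print(f"Suspicious winlogon.exe: pid={pid} path={path}")
```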

Researchers warn that the coordinated activity between Lazarus and Kimsuky marks a new phase in North Korea’s cyber posture—one blending intelligence gathering with highly organized financial theft, creating a sustained and evolving global threat.

X’s New Location Feature Exposes Foreign Manipulation of US Political Accounts

 

X's new location feature has revealed that many high-engagement US political accounts, particularly pro-Trump ones, are actually operated from countries outside the United States such as Russia, Iran, and Kenya. 

This includes accounts that strongly claim to represent American interests but are based abroad, misleading followers and potentially influencing US political discourse. Similarly, some anti-Trump accounts that seemed to be run by Americans have also been found to be foreign-operated. For example, a prominent anti-Trump account with 52,000 followers was based in Kenya and was deleted after exposure. 

The feature has exposed widespread misinformation and deception. These accounts garner millions of interactions and often earn financial compensation through X's revenue-sharing scheme, allowing both individuals and possibly state-backed groups to exploit the platform for monetary or political gain.

Foreign influence and misinformation

The new location disclosure highlighted significant foreign manipulation of political conversations on X, which raises concerns about authenticity and trust in online discourse. Accounts that present themselves as authentic American voices may actually be linked to troll farms or nation-state actors aiming to amplify divisive narratives or to profit financially. 

This phenomenon is exacerbated by X’s pay-for-play blue tick verification system, which some experts, including Alexios Mantzarlis from Cornell Tech, criticize as a revenue scheme rather than a meaningful validation effort. Mantzarlis emphasizes that financial incentives often motivate such deceptive activities, with operators stoking America's cultural conflicts on social media.

Additional geographic findings

Beyond US politics, BBC Verify found accounts supporting Scottish independence that are purportedly based in Iran despite having smaller followings. This pattern aligns with previous coordinated networks flagged for deceptive political influence. Such accounts often use AI-generated profile images and post highly similar content, generating substantial views while hiding their actual geographic origins.

While the location feature is claimed to be about 99% accurate, there are limitations: VPNs, proxies, and other methods can mask true locations, causing some data inaccuracies. The tool's launch also sparked controversy, as some users claim their locations are displayed inaccurately, eroding trust in the feature. Experts caution that despite the added transparency, it is a developing tool, and bad actors will likely find ways to circumvent these measures.

Platform responses and transparency efforts

X’s community notes feature, allowing users to add context to viral posts, is viewed as a step toward enhanced transparency, though deception remains widespread. The platform indicates ongoing efforts to introduce more ways to authenticate content and maintain integrity in the "global town square" of social media.

However, researchers emphasize the need for continuous scrutiny given the high stakes of political misinformation and manipulation. The new feature exposes deep challenges in ensuring authenticity and trust in political discourse on X, uncovering foreign manipulation that spans the political spectrum and revealing the complexities of combating misinformation amid financial and geopolitical motives.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Google’s High-Stakes AI Strategy: Chips, Investment, and Concerns of a Tech Bubble

 

At Google’s headquarters, engineers work on Google’s Tensor Processing Unit, or TPU—custom silicon built specifically for AI workloads. The device appears ordinary, but its role is anything but. Google expects these chips to eventually power nearly every AI action across its platforms, making them integral to the company’s long-term technological dominance. 

Google chief executive Sundar Pichai has repeatedly described AI as the most transformative technology ever developed, more consequential than the internet, smartphones, or cloud computing. However, the excitement is accompanied by growing caution from economists and financial regulators. Institutions such as the Bank of England have signaled concern that the rapid rise in AI-related company valuations could lead to an abrupt correction. Even prominent industry leaders, including OpenAI CEO Sam Altman, have acknowledged that portions of the AI sector may already display speculative behavior. 

Despite those warnings, Google continues expanding its AI investment at record speed. The company now spends over $90 billion annually on AI infrastructure, tripling its investment from only a few years earlier. The strategy aligns with a larger trend: a small group of technology companies—including Microsoft, Meta, Nvidia, Apple, and Tesla—now represents roughly one-third of the total value of the U.S. S&P 500 market index. Analysts note that such concentration of financial power exceeds levels seen during the dot-com era. 

Within the secured TPU lab, the environment is loud, dominated by cooling units required to manage the extreme heat generated when chips process AI models. The TPU differs from traditional CPUs and GPUs because it is built specifically for machine learning applications, giving Google tighter efficiency and speed advantages while reducing reliance on external chip suppliers. The competition for advanced chips has intensified to the point where Silicon Valley executives openly negotiate and lobby for supply. 

Outside Google, several AI companies have seen share value fluctuations, with investors expressing caution about long-term financial sustainability. However, product development continues rapidly. Google’s recently launched Gemini 3.0 model positions the company to directly challenge OpenAI’s widely adopted ChatGPT.  

Beyond financial pressures, the AI sector must also confront resource challenges. Analysts estimate that global data centers could consume energy on the scale of an industrialized nation by 2030. Still, companies pursue ever-larger AI systems, motivated by the possibility of reaching artificial general intelligence—a milestone where machines match or exceed human reasoning ability. 

Whether the current acceleration becomes a long-term technological revolution or a temporary bubble remains unresolved. But the race to lead AI is already reshaping global markets, investment patterns, and the future of computing.

Massive Leak Exposes 1.3 Billion Passwords and 2 Billion Emails — Check If Your Credentials Are at Risk

 

If you haven’t recently checked whether your login details are floating around online, now is the time. A staggering 1.3 billion unique passwords and 2 billion unique email addresses have surfaced publicly — and not due to a fresh corporate breach.

Instead, this massive cache was uncovered after threat-intelligence firm Synthient combed through both the open web and the dark web for leaked credentials. You may recognize the company, as they previously discovered 183 million compromised email accounts.

Much of this enormous collection is made up of credential-stuffing lists, which bundle together login details stolen from various older breaches. Cybercriminals typically buy and trade these lists to attempt unauthorized logins across multiple platforms.

This time, Synthient pulled together all 2 billion emails and 1.3 billion passwords, and with help from Troy Hunt and Have I Been Pwned (HIBP), the entire dataset can now be searched so users can determine if their personal information is exposed.

The compilation was created by Synthient founder Benjamin Brundage, who spent months gathering leaked credentials from countless sources across hacker forums and malware dumps. The dataset includes both older breach data and newly stolen information harvested through info-stealing malware, which quietly extracts passwords from infected devices.

According to Troy Hunt, Brundage provided the raw data while Hunt independently verified its authenticity.

To test its validity, Hunt used one of his old email addresses — one he already knew had appeared in past credential lists. As expected, that address and several associated passwords were included in the dataset.

After that, Hunt contacted a group of HIBP subscribers for verification. By choosing some users whose data had never appeared in a breach and others with previously exposed data, he confirmed that the new dataset wasn’t just recycled information — fresh, previously unseen credentials were indeed present.

HIBP has since integrated the exposed passwords into its Pwned Passwords service. Importantly, this database never links email addresses to passwords, maintaining privacy while still allowing users to check if their passwords are compromised.

To see if any of your current passwords have been leaked, visit the Pwned Passwords page and enter them. Your password is never sent to the server in full: the page hashes it in your browser and transmits only a short prefix of that hash, an anonymity-preserving technique known as k-anonymity.
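
For readers who prefer to script that check, the same lookup can be made against the public Have I Been Pwned range API, which implements the k-anonymity scheme described above. The Python sketch below (using the requests library) sends only the first five characters of the password's SHA-1 hash and does the matching locally; the sample password is just an example.

```python
# A minimal sketch of the k-anonymity lookup used by Pwned Passwords:
# only the first five characters of the SHA-1 hash leave the machine,
# and the suffix matching happens locally.
import hashlib
import requests

def pwned_count(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():             # each line: "SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0                                         # not found in any known breach

print(pwned_count("password123"))  # a reused password like this returns a large count
```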

If any password you use appears in the results, change it immediately. You can rely on a password manager to generate strong replacements, or use free password generators from tools like Bitwarden, LastPass, and Proton Pass.

The single most important cybersecurity rule remains the same: never reuse passwords. When criminals obtain one set of login credentials, they try them across other platforms — an attack method known as credential stuffing. Because so many people still repeat passwords, these attacks remain highly successful.

Make sure every account you own uses a strong, complex, and unique password. Password managers and built-in password generators are the easiest way to handle this.

Even the best password may not protect you if it’s stolen through a breach or malware. That’s why Two-Factor Authentication (2FA) is crucial. With a second verification step — such as an authenticator app or security key — criminals won’t be able to access your account even if they know the password.

You should also safeguard your devices against malware using reputable antivirus tools on Windows, Mac, and Android. Info-stealing malware, often spread through phishing attacks, remains one of the most common ways passwords are siphoned directly from user devices.

If you’re interested in going beyond passwords altogether, consider switching to passkeys. These use cryptographic key pairs rather than passwords, making them unguessable, non-reusable, and resistant to phishing attempts.

Think of your password as the lock on your home’s front door: the stronger it is, the harder it is for intruders to break in. But even with strong habits, your information can still be exposed through breaches outside your control — one reason many experts, including Hunt, see passkeys as the future.

While it’s easy to panic after reading about massive leaks like this, staying consistent with good digital hygiene and regularly checking your exposure will keep you one step ahead of cybercriminals.

Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.
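
As one concrete starting point for the monitoring recommendation, the Python sketch below scans a web proxy export for connections to consumer AI chatbots that are not on an approved list. The CSV log format, the domain list, and the approved platform are all illustrative assumptions and would need to match an organisation's own logging setup.

```python
# A minimal sketch of scanning proxy logs for unsanctioned AI usage.
# The CSV columns (user, domain), the consumer AI domain list, and the
# approved platform are illustrative assumptions for this example.
import csv
from collections import Counter

CONSUMER_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED = {"ai.internal.example.com"}              # hypothetical sanctioned platform

def shadow_ai_usage(proxy_log_path: str) -> Counter:
    hits = Counter()
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):              # expects columns: user, domain
            domain = row["domain"].lower()
            if domain in CONSUMER_AI_DOMAINS and domain not in APPROVED:
                hits[(row["user"], domain)] += 1
    return hits

for (user, domain), count in shadow_ai_usage("proxy_export.csv").most_common(10):
    print(f"{user} reached {domain} {count} times")
```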

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.