
Former Google Engineer Convicted in U.S. for Stealing AI Trade Secrets to Aid China-Based Startup

 

A former Google software engineer has been found guilty in the United States of unlawfully taking thousands of confidential Google documents to support a technology venture in China, according to an announcement made by the Department of Justice (DoJ) on Thursday.

Linwei Ding, also known as Leon Ding, aged 38, was convicted by a federal jury on 14 charges—seven counts of economic espionage and seven counts of theft of trade secrets. Prosecutors established that Ding illegally copied more than 2,000 internal Google files containing highly sensitive artificial intelligence (AI) trade secrets with the intent of benefiting the People’s Republic of China (PRC).

"Silicon Valley is at the forefront of artificial intelligence innovation, pioneering transformative work that drives economic growth and strengthens our national security," said U.S. Attorney Craig H. Missakian. "We will vigorously protect American intellectual capital from foreign interests that seek to gain an unfair competitive advantage while putting our national security at risk."

Ding was initially indicted in March 2024 after investigators discovered that he had transferred proprietary data from Google’s internal systems to his personal Google Cloud account. The materials allegedly stolen included detailed information on Google’s supercomputing data center architecture used to train and run AI models, its Cluster Management System (CMS), and the AI models and applications operating on that infrastructure.

The misappropriated trade secrets reportedly covered several critical technologies, including the design and functionality of Google’s custom Tensor Processing Unit (TPU) chips and GPU systems, software that enables chip-level communication and task execution, systems that coordinate thousands of chips into AI supercomputers, and SmartNIC technology used for high-speed networking within Google’s AI and cloud platforms.

Authorities stated that the theft occurred over an extended period between May 2022 and April 2023. Ding, who began working at Google in 2019, allegedly maintained undisclosed ties with two China-based technology firms during his employment, one of which was Shanghai Zhisuan Technologies Co., a startup he founded in 2023. Investigators noted that Ding downloaded large volumes of confidential files in December 2023, just days before resigning from the company.

"Around June 2022, Ding was in discussions to be the Chief Technology Officer for an early-stage technology company based in the PRC; by early 2023, Ding was in the process of founding his own technology company in the PRC focused on AI and machine learning and was acting as the company's CEO," the DoJ said.

The case further alleged that Ding attempted to conceal his actions by copying Google source code into the Apple Notes app on his work-issued MacBook, converting the files into PDFs, and uploading them to his personal Google account. Prosecutors also claimed that he asked a colleague to use his access badge to enter a Google facility, creating the false appearance that he was working from the office while he was actually in China.

The investigation reportedly accelerated in late 2023 after Google learned that Ding had delivered a public presentation in China to prospective investors promoting his startup. According to Courthouse News, Ding’s defense attorney Grant Fondo argued that the information could not qualify as trade secrets because it was accessible to a large number of Google employees. "Google chose openness over security," Fondo said.

In a superseding indictment filed in February 2025, Ding was additionally charged with economic espionage, with prosecutors alleging that he applied to a Beijing-backed Shanghai talent program. Such initiatives were described as efforts to recruit overseas researchers to bolster China’s technological and economic development.

"Ding's application for this talent plan stated that he planned to 'help China to have computing power infrastructure capabilities that are on par with the international level,'" the DoJ said. "The evidence at trial also showed that Ding intended to benefit two entities controlled by the government of China by assisting with the development of an AI supercomputer and collaborating on the research and development of custom machine learning chips."

Ding is set to attend a status conference on February 3, 2026. If sentenced to the maximum penalties, he could face up to 10 years in prison for each trade secret theft charge and up to 15 years for each count of economic espionage.

Google-Owned Mandiant Finds Vishing Attacks Against SaaS Platforms


Mandiant recently said that it has observed an increase in threat activity deploying tradecraft consistent with extortion attacks carried out by ShinyHunters, a financially motivated group.

  • These attacks use advanced voice phishing (vishing) and fake credential-harvesting sites imitating targeted organizations to gain illicit access to victims' systems by collecting single sign-on (SSO) credentials and two-factor authentication codes. 
  • The attacks target cloud-based software-as-a-service (SaaS) apps to steal sensitive data and internal communications and extort victims. 

Google-owned Mandiant's threat intelligence team is tracking the attacks under several clusters: UNC6661, UNC6671, and UNC6240 (aka ShinyHunters), and the groups appear to be refining their tactics. "While this methodology of targeting identity providers and SaaS platforms is consistent with our prior observations of threat activity preceding ShinyHunters-branded extortion, the breadth of targeted cloud platforms continues to expand as these threat actors seek more sensitive data for extortion," Mandiant said. 

"Further, they appear to be escalating their extortion tactics with recent incidents, including harassment of victim personnel, among other tactics.”

Theft details

In activity observed in mid-January 2026, UNC6661 impersonated IT staff, sending employees credential-harvesting links that tricked them into handing over credentials and modifying multi-factor authentication (MFA) settings.

The threat actors then used the stolen credentials to register their own devices for MFA and steal data from SaaS platforms. In one incident, the attackers leveraged their access to compromised email accounts to send further phishing emails to users at cryptocurrency-focused organizations.

The emails were later deleted to cover the attackers' tracks. Researchers also found UNC6671 impersonating IT staff since the start of this year, luring victims to credential-harvesting websites to steal credentials and MFA login codes. In a few incidents, the attackers gained access to Okta accounts. 

UNC6671 leveraged PowerShell to steal sensitive data from OneDrive and SharePoint. 
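Bulk pulls like these tend to leave a trail in the Microsoft 365 unified audit log. The following is a hedged Python sketch for triaging a CSV export of that log for mass-download activity; the column names, operation names, and threshold are assumptions that should be checked against your own tenant's export format.

```python
# Count OneDrive/SharePoint download events per user per day from a CSV
# export of the Microsoft 365 unified audit log, and flag heavy downloaders.
import csv
from collections import Counter

DOWNLOAD_OPS = {"FileDownloaded", "FileSyncDownloadedFull"}
THRESHOLD = 500  # downloads per user per day; tune to your baseline

def bulk_downloaders(audit_csv_path: str) -> dict[tuple[str, str], int]:
    """Return {(user, day): count} for users exceeding the daily threshold."""
    counts: Counter = Counter()
    with open(audit_csv_path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if row.get("Operations") not in DOWNLOAD_OPS:
                continue
            day = row["CreationDate"][:10]  # assumes an ISO-like timestamp
            counts[(row["UserIds"], day)] += 1
    return {key: n for key, n in counts.items() if n >= THRESHOLD}

if __name__ == "__main__":
    for (user, day), n in sorted(bulk_downloaders("audit_log.csv").items()):
        print(f"{day}: {user} downloaded {n} files - review for exfiltration")
```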

Attack tactic 

Two main differences separate UNC6661 from UNC6671: the credential-harvesting domains were registered through different registrars (NICENIC for UNC6661, Tucows for UNC6671), and an extortion email sent after UNC6671 activity did not overlap with known UNC6240 indicators. 

This suggests that additional, distinct groups may be participating, highlighting how nebulous these cybercrime organizations are. Furthermore, the targeting of cryptocurrency companies raises the possibility that the threat actors are searching for other opportunities to monetize their access.

Ivanti Issues Emergency Fixes After Attackers Exploit Critical Flaws in Mobile Management Software




Ivanti has released urgent security updates for two serious vulnerabilities in its Endpoint Manager Mobile (EPMM) platform that were already being abused by attackers before the flaws became public. EPMM is widely used by enterprises to manage and secure mobile devices, which makes exposed servers a high-risk entry point into corporate networks.

The two weaknesses, identified as CVE-2026-1281 and CVE-2026-1340, allow attackers to remotely run commands on vulnerable servers without logging in. Both flaws were assigned near-maximum severity scores because they can give attackers deep control over affected systems. Ivanti confirmed that a small number of customers had already been compromised at the time the issues were disclosed.

This incident reflects a broader pattern of severe security failures affecting enterprise technology vendors in recent years. Similar high-impact vulnerabilities have previously forced organizations to urgently patch network security and access control products. The repeated targeting of these platforms shows that attackers focus on systems that provide centralized control over devices and identities.

Ivanti stated that only on-premises EPMM deployments are affected. Its cloud-based mobile management services, other endpoint management products, and environments using Ivanti cloud services with Sentry are not impacted by these flaws.

If attackers exploit these vulnerabilities, they can move within internal networks, change system settings, grant themselves administrative privileges, and access stored information. The exposed data may include basic personal details of administrators and device users, along with device-related information such as phone numbers and location data, depending on how the system is configured.

Ivanti has not provided specific indicators of compromise because only a limited number of confirmed cases are known. However, the company published technical analysis to support investigations. Security teams are advised to review web server logs for unusual requests, particularly those containing command-like input. Exploitation attempts may appear as abnormal activity involving internal application distribution or Android file transfer functions, sometimes producing error responses instead of successful ones. Requests sent to error pages using unexpected methods or parameters should be treated as highly suspicious.
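As a starting point for that review, the sketch below scans a standard combined-format access log for command-like input in request paths and unusual requests aimed at error pages. It is a hypothetical triage aid: the regular expressions and heuristics are illustrative assumptions, not vendor-published indicators of compromise.

```python
# Triage a web server access log for the anomalies described above:
# command-like input in requests, and non-GET or parameterized hits
# on error pages.
import re
import sys

LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')
# Shell metacharacters or common command names inside the request path/query.
COMMAND_LIKE = re.compile(r"(%60|`|\$\(|;|\|\||&&|\b(sh|bash|curl|wget|whoami)\b)", re.I)
ERROR_PAGE = re.compile(r"error", re.I)

def triage(log_path: str) -> None:
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            m = LOG_LINE.search(line)
            if not m:
                continue
            method, path, status = m["method"], m["path"], m["status"]
            if COMMAND_LIKE.search(path):
                print(f"{lineno}: command-like input: {method} {path} -> {status}")
            # Error pages are not normally requested with odd methods or parameters.
            elif ERROR_PAGE.search(path) and (method != "GET" or "?" in path):
                print(f"{lineno}: unusual error-page request: {method} {path} -> {status}")

if __name__ == "__main__":
    triage(sys.argv[1] if len(sys.argv) > 1 else "access.log")
```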

Previous investigations show attackers often maintain access by placing or modifying web shell files on application error pages. Security teams should also watch for unexpected application archive files being added to servers, as these may be used to create remote connections back to attackers. Because EPMM does not normally initiate outbound network traffic, any such activity in firewall logs should be treated as a strong warning sign.

Ivanti advises organizations that detect compromise to restore systems from clean backups or rebuild affected servers before applying updates. Attempting to manually clean infected systems is not recommended. Because these flaws were exploited before patches were released, organizations that had vulnerable EPMM servers exposed to the internet at the time of disclosure should treat those systems as compromised and initiate full incident response procedures rather than relying on patching alone. 

CRIL Uncovers ShadowHS: Fileless Linux Post-Exploitation Framework Built for Stealthy Long-Term Access

 

Cyble Research & Intelligence Labs (CRIL) has uncovered ShadowHS, a Linux post-exploitation toolkit that operates entirely in system memory and is built for covert persistence after an initial breach. Instead of dropping binaries on disk, it runs filelessly, helping it bypass standard security checks and leaving minimal forensic traces. ShadowHS relies on a weaponized version of hackshell, enabling attackers to maintain long-term remote control through interactive sessions. This fileless approach makes detection harder because many traditional tools focus on scanning stored files rather than memory-resident activity. 

CRIL found that ShadowHS is delivered using an encrypted shell loader that deploys a heavily modified hackshell component. During execution, the loader reconstructs the payload in memory using AES-256-CBC decryption, along with Perl byte-skipping routines and gzip decompression. After rebuilding, the payload is executed via /proc/<pid>/fd/<fd> with a spoofed argv[0], a method designed to avoid leaving artifacts on disk and evade signature-based detection tools. 
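One way defenders can hunt for this technique is to compare what a process claims to be (argv[0]) with what it actually is (the /proc/<pid>/exe link), and to look for executables that exist only in memory. The sketch below is a minimal illustration under those assumptions; the heuristics are illustrative and will need local tuning (login shells, for example, legitimately prefix argv[0] with a dash).

```python
# Walk /proc and flag processes whose executable is not on disk (deleted or
# memfd-backed) or whose argv[0] does not match the real binary name -
# classic traits of in-memory loaders. Run as root for full visibility.
import os

def scan_proc() -> None:
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
            with open(f"/proc/{pid}/cmdline", "rb") as f:
                argv0 = f.read().split(b"\x00", 1)[0].decode(errors="replace")
        except OSError:
            continue  # process exited, kernel thread, or access denied
        if "(deleted)" in exe or "memfd:" in exe:
            print(f"pid {pid}: executable not on disk: {exe}")
        # Strip a leading "-" so login shells (argv[0] == "-bash") don't alert.
        elif argv0 and os.path.basename(argv0).lstrip("-") != os.path.basename(exe):
            print(f"pid {pid}: argv[0] {argv0!r} != exe {exe!r} (possible spoofing)")

if __name__ == "__main__":
    scan_proc()
```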

Once active, ShadowHS begins with reconnaissance, mapping system defenses and identifying installed security tools. It checks for evidence of prior compromise and keeps background activity intentionally low, allowing operators to selectively activate functions such as credential theft, lateral movement, privilege escalation, cryptomining, and covert data exfiltration. CRIL noted that this behavior reflects disciplined operator tradecraft rather than opportunistic attacks. 

ShadowHS also performs extensive fingerprinting for commercial endpoint tools such as CrowdStrike, Tanium, Sophos, and Microsoft Defender, as well as monitoring agents tied to cloud platforms and industrial control environments. While runtime activity appears restrained, CRIL emphasized the framework contains a wider set of dormant capabilities that can be triggered when needed. 

A key feature highlighted by CRIL is ShadowHS’s stealthy data exfiltration method. Instead of using standard network channels, it leverages user-space tunneling over GSocket, replacing rsync’s default transport to move data through firewalls and restrictive environments. Researchers observed two variants: one using DBus-based tunneling and another using netcat-style GSocket tunnels, both designed to preserve file metadata such as timestamps, permissions, and partial transfer state. 

The framework also includes dormant modules for memory dumping to steal credentials, SSH-based lateral movement and brute-force scanning, and privilege escalation using kernel exploits. Cryptomining support is included through tools such as XMRig, GMiner, and lolMiner. ShadowHS further contains anti-competition routines to detect and terminate rival malware like Rondo and Kinsing, as well as credential-stealing backdoors such as Ebury, while checking kernel integrity and loaded modules to assess whether the host is already compromised or under surveillance.

CRIL concluded that ShadowHS highlights growing challenges in securing Linux environments against fileless threats. Since these attacks avoid disk artifacts, traditional antivirus and file-based detection fall short. Effective defense requires monitoring process behavior, kernel telemetry, and memory-resident activity, focusing on live system behavior rather than static indicators.

Malicious Chrome Extensions Hijack Affiliate Links and Steal ChatGPT Tokens

 

Cybersecurity researchers have uncovered an alarming surge in malicious Google Chrome extensions that hijack affiliate links, steal sensitive data, and siphon OpenAI ChatGPT authentication tokens. These deceptive add-ons, masquerading as handy shopping aids and AI enhancers, infiltrate the Chrome Web Store to exploit user trust. Disguised tools like Amazon Ads Blocker from "10Xprofit" promise ad-free browsing but secretly swap creators' affiliate tags with the developer's own, robbing influencers of commissions across Amazon, AliExpress, Best Buy, Shein, Shopify, and Walmart.

Socket Security identified 29 such extensions in this cluster, uploaded as recently as January 19, 2026, which scan product URLs without user interaction to inject tags like "10xprofit-20." They also scrape product details to attacker servers at "app.10xprofit[.]io" and deploy fake "LIMITED TIME DEAL" countdowns on AliExpress pages to spur impulse buys. Misleading store listings claim mere "small commissions" from coupons, violating policies that demand clear disclosures, user consent for injections, and single-purpose designs.

Broadcom's Symantec separately flagged four data-thieving extensions with over 100,000 installs, including Good Tab, which relays clipboard access to "api.office123456[.]com," and Children Protection, which harvests cookies, injects ads, and executes remote JavaScript. DPS Websafe hijacks searches to malicious sites, while Stock Informer exposes users to an old XSS flaw (CVE-2020-28707). Researchers Yuanjing Guo and Tommy Dong stress caution even with trusted sources, as broad permissions enable unchecked surveillance.

LayerX exposed 16 coordinated "ChatGPT Mods" extensions—downloaded about 900 times—that pose as productivity boosters like voice downloaders and prompt managers. These inject scripts into chatgpt.com to capture session tokens, granting attackers full account access to conversations, metadata, and code. Natalie Zargarov notes this leverages AI tools' high privileges, turning trusted brands into deception vectors amid booming enterprise AI adoption.

Compounding risks, the "Stanley" malware-as-a-service toolkit, sold on Russian forums for $2,000-$6,000, generates note-taking extensions that overlay phishing iframes on bank sites while faking legitimate URLs. Premium buyers get Chrome Store approval guarantees and C2 panels for victim management; it vanished January 27, 2025, post-exposure but may rebrand. Varonis' Daniel Kelley warns browsers are now prime endpoints in BYOD and remote setups.

Users must audit extensions for mismatched features, excessive permissions, and vague disclosures, and remove suspect ones immediately via Chrome settings (a sketch for automating part of this check follows below). Limit installs to verified needs, favoring official apps over third-party tweaks. As e-commerce and AI extensions multiply, proactive vigilance helps thwart financial sabotage and data breaches in the browser.
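Part of that audit can be automated, since every installed extension ships a manifest.json listing its permissions. Below is a hedged Python sketch that inventories extension manifests from a local Chrome profile and flags broad permissions; the profile path (Linux default here) and the "risky" permission list are assumptions to adapt.

```python
# List installed Chrome extensions whose manifests request broad permissions.
import json
from pathlib import Path

# Default profile location on Linux; adjust for macOS/Windows profiles.
PROFILE = Path.home() / ".config/google-chrome/Default/Extensions"
RISKY = {"cookies", "webRequest", "clipboardRead", "scripting", "tabs", "<all_urls>"}

def audit() -> None:
    for manifest_path in PROFILE.glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        perms = {p for p in manifest.get("permissions", [])
                 + manifest.get("host_permissions", []) if isinstance(p, str)}
        # Flag named risky permissions plus wildcard host patterns like "*://*/*".
        hits = (perms & RISKY) | {p for p in perms if p.endswith("://*/*")}
        if hits:
            ext_id = manifest_path.parent.parent.name
            name = manifest.get("name", "?")  # may be a "__MSG_...__" placeholder
            print(f"{ext_id} ({name}): review permissions {sorted(hits)}")

if __name__ == "__main__":
    audit()
```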

CISA Issues New Guidance on Managing Insider Cybersecurity Risks

 



The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.

In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.

To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.

According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.

Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.

Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.

CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.

By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.

Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.

The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases are a testament to how poor internal controls and misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.

While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.

SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware, amplified by limited availability of crucial AI chips, has lifted SK hynix into a stronger market position. Though rooted in memory production, the company is now pushing further, launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect sustained component shortages as much as strategy, with the company's momentum shaped more by supply constraints than by a strategic pivot. 

The company will launch the new division, known as “AI Company” (AI Co.), early next year, with operations set to begin in February. The offshoot aims to play a central role in the AI data center landscape, as customers increasingly prefer complete packages blending infrastructure, software, and support over standalone hardware. According to SK hynix, such changes open doors that traditional component sales alone could not. 

Though little is known so far, AI Co. plans, according to statements given to The Register, to offer industry-specific AI solutions backed by dedicated data center infrastructure. Initially, the focus will be on software that improves how AI models run on hardware; over time, investment may extend into broader data center areas. Reports also indicate that funding external ventures and new technology, and turning prototypes into market-ready products, may form a core part of its evolving strategy. 

SK hynix is setting aside about $10 billion for the new venture, with an interim leadership team and governing committee expected to be announced next month. As part of the shift, the California-based SSD unit Solidigm will be reorganized: the existing Solidigm entity becomes AI Co., while its SSD production moves into a newly created company named Solidigm Inc. 

The AI server industry is increasingly turning to custom chips over general-purpose ones. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027 and exceed fifteen million units annually by 2028, overtaking data center GPUs in volume shipped. While ASICs can carry high upfront prices, their running costs tend to be lower than premium graphics processors, and demand is driven largely by inference workloads that favor efficiency-focused designs. Broadcom is positioned near the front, projected to account for roughly six in ten units shipped in 2027. 

A wider shortage of memory chips keeps lifting SK hynix. According to IDC analysts, demand now clearly exceeds available supply because manufacturers are directing more output toward server and graphics processors instead of phones and laptops. As a result, prices throughout the sector have climbed, directly boosting the firm's earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%. In the last quarter alone, revenue surged 66% year over year to ₩32.8 trillion ($22.9 billion). 

Suppliers such as ASML are seeing gains too, thanks to rising demand for semiconductor production equipment. The photolithography specialist's latest quarterly results showed revenue of €9.7 billion (roughly $11.6 billion), and forecasts point to a sharp rise in orders for its high-end EUV tools this year. Performance remains strong across key segments despite broader market shifts. 

Still, experts caution that the memory chip shortage could hurt buyers, as devices like computers and phones may become more expensive. PC shipments are predicted to fall this year as supplies tighten and costs climb.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk's AI chatbot, following claims of misuse involving fabricated adult imagery. Reuters notes this as the first such restriction on the tool worldwide, and the move reflects growing unease across borders about technology being used to aid harm, driven less by policy papers than by real-time consequences playing out online.

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts constitute grave cyber offenses demanding urgent regulatory attention. Temporary restrictions followed in Indonesia after Antara News highlighted risks tied to AI-generated explicit material. 

The move was driven by the protection of women, children, and communities, and aimed at reducing psychological and societal harm. Officials pointed out that fake but realistic intimate imagery counts as digital abuse, according to statements by Hafid: such fabricated visuals, though synthetic, still trigger real consequences for victims, and the state insists that artificial does not mean harmless. Following the concerns over Grok's functionality, official notices were sent to the company demanding explanations of its development process and the harms observed. 

Because of potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on adoption of rigorous filtering systems, according to Hafid. Compliance with national regulations and adherence to responsible artificial intelligence practices now shape the outcome. 

Only after these steps are demonstrated will the service be permitted to continue operating. Last week, Musk and xAI issued a warning that improper use of the chatbot for unlawful acts might lead to legal action. On X, Musk stated that individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly. 

A post he re-shared from one follower implied that fault rests more with the people creating fakes than with the system hosting them. The debate has spread beyond Indonesia's borders, reaching American lawmakers: three US senators wrote to both Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their letters framed the request around existing rules prohibiting sexually charged imagery produced without consent. 

What concerned them most was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. AI tools tied to misuse, such as deepfakes made without consent, now face sharper government reactions, and Indonesia's move is part of this rising trend. Though once slow to act, officials increasingly treat such technology as a risk needing strong intervention. 

A shift is visible: responses that were once hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one, where pressure builds not just from the incidents themselves but from how widely they spread before being challenged.