
Ivanti Issues Emergency Fixes After Attackers Exploit Critical Flaws in Mobile Management Software




Ivanti has released urgent security updates for two serious vulnerabilities in its Endpoint Manager Mobile (EPMM) platform that were already being abused by attackers before the flaws became public. EPMM is widely used by enterprises to manage and secure mobile devices, which makes exposed servers a high-risk entry point into corporate networks.

The two weaknesses, identified as CVE-2026-1281 and CVE-2026-1340, allow attackers to remotely run commands on vulnerable servers without authentication. Both flaws were assigned near-maximum severity scores because they can give attackers deep control over affected systems. Ivanti confirmed that a small number of customers had already been compromised at the time the issues were disclosed.

This incident reflects a broader pattern of severe security failures affecting enterprise technology vendors in recent years. Similar high-impact vulnerabilities have previously forced organizations to urgently patch network security and access control products. The repeated targeting of these platforms shows that attackers focus on systems that provide centralized control over devices and identities.

Ivanti stated that only on-premises EPMM deployments are affected. Its cloud-based mobile management services, other endpoint management products, and environments using Ivanti cloud services with Sentry are not impacted by these flaws.

If attackers exploit these vulnerabilities, they can move within internal networks, change system settings, grant themselves administrative privileges, and access stored information. The exposed data may include basic personal details of administrators and device users, along with device-related information such as phone numbers and location data, depending on how the system is configured.

Ivanti has not provided specific indicators of compromise because only a limited number of confirmed cases are known. However, the company published technical analysis to support investigations. Security teams are advised to review web server logs for unusual requests, particularly those containing command-like input. Exploitation attempts may appear as abnormal activity involving internal application distribution or Android file transfer functions, sometimes producing error responses instead of successful ones. Requests sent to error pages using unexpected methods or parameters should be treated as highly suspicious.
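As a starting point, that log review can be sketched as a small scanner. The patterns below are illustrative assumptions for a generic access-log format, not Ivanti's published indicators; real hunting should follow the vendor's technical analysis.

```python
import re

# Illustrative heuristics only (assumed patterns, not vendor indicators):
# shell metacharacters or common download binaries in a request suggest
# command-like input; POST/PUT against an error page is unexpected.
SUSPICIOUS_PARAM = re.compile(r"[;|`]|%24%7B|\b(wget|curl|bash|nc)\b", re.IGNORECASE)
ERROR_PAGE = re.compile(r"/error|/4\d\d\.jsp", re.IGNORECASE)

def flag_log_line(line: str) -> list[str]:
    """Return the reasons a single access-log line looks suspicious."""
    reasons = []
    if SUSPICIOUS_PARAM.search(line):
        reasons.append("command-like input in request")
    if ERROR_PAGE.search(line) and re.search(r'"(POST|PUT)', line):
        reasons.append("unexpected method against error page")
    return reasons

def scan(log_lines):
    """Pair each suspicious line with its list of reasons."""
    return [(line, r) for line in log_lines if (r := flag_log_line(line))]
```

Tuning the patterns to the actual EPMM log format, and narrowing them to the application distribution and Android file transfer endpoints mentioned above, would reduce noise considerably.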

Previous investigations show attackers often maintain access by placing or modifying web shell files on application error pages. Security teams should also watch for unexpected application archive files being added to servers, as these may be used to create remote connections back to attackers. Because EPMM does not normally initiate outbound network traffic, any such activity in firewall logs should be treated as a strong warning sign.
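Because EPMM should initiate no outbound traffic in normal operation, firewall logs give a cheap detection signal. The column names and host list in this sketch are assumptions for illustration; adapt them to your firewall's export format and your own server inventory.

```python
import csv
import io

# Hypothetical EPMM server addresses; replace with your own inventory.
EPMM_HOSTS = {"10.0.5.20", "10.0.5.21"}

def outbound_from_epmm(csv_text: str) -> list:
    """Flag outbound flows originating from EPMM servers. Any hit is a
    strong warning sign, since the product does not normally initiate
    outbound network traffic."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows
            if row["direction"] == "outbound" and row["src"] in EPMM_HOSTS]
```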

Ivanti advises organizations that detect compromise to restore systems from clean backups or rebuild affected servers before applying updates. Attempting to manually clean infected systems is not recommended. Because these flaws were exploited before patches were released, organizations that had vulnerable EPMM servers exposed to the internet at the time of disclosure should treat those systems as compromised and initiate full incident response procedures rather than relying on patching alone. 

CRIL Uncovers ShadowHS: Fileless Linux Post-Exploitation Framework Built for Stealthy Long-Term Access

 

Cyble Research & Intelligence Labs (CRIL) has uncovered ShadowHS, a Linux post-exploitation toolkit that operates entirely in system memory and is built for covert persistence after an initial breach. Instead of dropping binaries on disk, it runs filelessly, helping it bypass standard security checks and leaving minimal forensic traces. ShadowHS relies on a weaponized version of hackshell, enabling attackers to maintain long-term remote control through interactive sessions. This fileless approach makes detection harder because many traditional tools focus on scanning stored files rather than memory-resident activity.

CRIL found that ShadowHS is delivered using an encrypted shell loader that deploys a heavily modified hackshell component. During execution, the loader reconstructs the payload in memory using AES-256-CBC decryption, along with Perl byte skipping routines and gzip decompression. After rebuilding, the payload is executed via /proc//fd/ with a spoofed argv[0], a method designed to avoid leaving artifacts on disk and evade signature-based detection tools. 
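Defenders can hunt for this style of memory-resident execution by comparing what a process claims to be with its on-disk image. A minimal sketch, assuming a Linux host with a readable /proc; the heuristics are illustrative and will produce false positives on some legitimate software:

```python
import os

def looks_spoofed(exe_target: str, argv0: str) -> bool:
    """Heuristic: the process image was deleted after launch (common in
    fileless execution via /proc file descriptors) or argv[0] does not
    match the name of the on-disk binary."""
    if exe_target.endswith(" (deleted)"):
        return True
    exe_name = os.path.basename(exe_target)
    argv_name = os.path.basename(argv0.split()[0]) if argv0 else ""
    return bool(argv_name) and argv_name != exe_name

def scan_proc() -> list:
    """Walk /proc on a live Linux host and report suspicious PIDs."""
    hits = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            exe = os.readlink(f"/proc/{pid}/exe")
            with open(f"/proc/{pid}/cmdline", "rb") as fh:
                argv0 = fh.read().split(b"\0")[0].decode(errors="replace")
        except OSError:
            continue  # process exited or access denied
        if looks_spoofed(exe, argv0):
            hits.append(pid)
    return hits
```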

Once active, ShadowHS begins with reconnaissance, mapping system defenses and identifying installed security tools. It checks for evidence of prior compromise and keeps background activity intentionally low, allowing operators to selectively activate functions such as credential theft, lateral movement, privilege escalation, cryptomining, and covert data exfiltration. CRIL noted that this behavior reflects disciplined operator tradecraft rather than opportunistic attacks. 

ShadowHS also performs extensive fingerprinting for commercial endpoint tools such as CrowdStrike, Tanium, Sophos, and Microsoft Defender, as well as monitoring agents tied to cloud platforms and industrial control environments. While runtime activity appears restrained, CRIL emphasized the framework contains a wider set of dormant capabilities that can be triggered when needed. 

A key feature highlighted by CRIL is ShadowHS’s stealthy data exfiltration method. Instead of using standard network channels, it leverages user-space tunneling over GSocket, replacing rsync’s default transport to move data through firewalls and restrictive environments. Researchers observed two variants: one using DBus-based tunneling and another using netcat-style GSocket tunnels, both designed to preserve file metadata such as timestamps, permissions, and partial transfer state. 

The framework also includes dormant modules for memory dumping to steal credentials, SSH-based lateral movement and brute-force scanning, and privilege escalation using kernel exploits. Cryptomining support is included through tools such as XMRig, GMiner, and lolMiner. ShadowHS further contains anti-competition routines to detect and terminate rival malware like Rondo and Kinsing, as well as credential-stealing backdoors such as Ebury, while checking kernel integrity and loaded modules to assess whether the host is already compromised or under surveillance.

CRIL concluded that ShadowHS highlights growing challenges in securing Linux environments against fileless threats. Since these attacks avoid disk artifacts, traditional antivirus and file-based detection fall short. Effective defense requires monitoring process behavior, kernel telemetry, and memory-resident activity, focusing on live system behavior rather than static indicators.

Malicious Chrome Extensions Hijack Affiliate Links and Steal ChatGPT Tokens

 

Cybersecurity researchers have uncovered an alarming surge in malicious Google Chrome extensions that hijack affiliate links, steal sensitive data, and siphon OpenAI ChatGPT authentication tokens. These deceptive add-ons, masquerading as handy shopping aids and AI enhancers, infiltrate the Chrome Web Store to exploit user trust. Disguised tools like Amazon Ads Blocker from "10Xprofit" promise ad-free browsing but secretly swap creators' affiliate tags with the developer's own, robbing influencers of commissions across Amazon, AliExpress, Best Buy, Shein, Shopify, and Walmart.

Socket Security identified 29 such extensions in this cluster, uploaded as recently as January 19, 2026, which scan product URLs without user interaction to inject tags like "10xprofit-20." They also scrape product details to attacker servers at "app.10xprofit[.]io" and deploy fake "LIMITED TIME DEAL" countdowns on AliExpress pages to spur impulse buys. Misleading store listings claim mere "small commissions" from coupons, violating policies that demand clear disclosures, user consent for injections, and single-purpose designs.
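Mechanically, the affiliate hijack is just a query-string rewrite. The sketch below illustrates the technique and is not the extensions' actual code; `tag` is Amazon's affiliate parameter, and the example value follows the campaign described above.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def swap_affiliate_tag(url: str, new_tag: str) -> str:
    """Replace (or add) the `tag` affiliate parameter on a product URL,
    which is how these extensions silently credit the attacker's account
    instead of the creator's."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["tag"] = new_tag  # e.g. "10xprofit-20" in the observed campaign
    return urlunsplit(parts._replace(query=urlencode(query)))
```

A content script with broad host permissions can apply such a rewrite to every product link on a page without any visible change for the user, which is why the behavior goes unnoticed.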

Broadcom's Symantec separately flagged four data-thieving extensions with over 100,000 installs, including Good Tab, which relays clipboard access to "api.office123456[.]com," and Children Protection, which harvests cookies, injects ads, and executes remote JavaScript. DPS Websafe hijacks searches to malicious sites, while Stock Informer exposes users to an old XSS flaw (CVE-2020-28707). Researchers Yuanjing Guo and Tommy Dong stress caution even with trusted sources, as broad permissions enable unchecked surveillance.

LayerX exposed 16 coordinated "ChatGPT Mods" extensions—downloaded about 900 times—that pose as productivity boosters like voice downloaders and prompt managers. These inject scripts into chatgpt.com to capture session tokens, granting attackers full account access to conversations, metadata, and code. Natalie Zargarov notes this leverages AI tools' high privileges, turning trusted brands into deception vectors amid booming enterprise AI adoption.

Compounding the risks, the "Stanley" malware-as-a-service toolkit, sold on Russian forums for $2,000-$6,000, generates note-taking extensions that overlay phishing iframes on bank sites while faking legitimate URLs. Premium buyers get Chrome Web Store approval guarantees and C2 panels for victim management. The toolkit vanished on January 27, 2025, after its exposure, but may rebrand. Varonis' Daniel Kelley warns that browsers are now prime endpoints in BYOD and remote setups.

Users should audit extensions for mismatched features, excessive permissions, and vague disclosures, and immediately remove suspect extensions via Chrome settings. Limit installs to verified needs, favoring official apps over third-party tweaks. As e-commerce and AI extensions multiply, proactive vigilance helps thwart financial sabotage and data breaches in this evolving browser battlefield.
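One way to act on that advice is to scan installed extension manifests for broad permissions. A sketch assuming Chrome on Linux with the default profile path; both the path and the permission watchlist are assumptions to adjust for your own OS and threat model:

```python
import json
from pathlib import Path

# Assumed default profile location for Chrome on Linux.
EXT_DIR = Path.home() / ".config/google-chrome/Default/Extensions"

# Permissions that enable broad surveillance or content injection.
BROAD = {"<all_urls>", "tabs", "cookies", "webRequest", "clipboardRead", "scripting"}

def risky_permissions(manifest: dict) -> set:
    """Return the subset of declared permissions that grant broad access."""
    declared = set(manifest.get("permissions", []))
    declared |= set(manifest.get("host_permissions", []))
    return declared & BROAD

def audit(ext_dir: Path = EXT_DIR) -> dict:
    """Map each installed extension's name to its risky permissions."""
    findings = {}
    for manifest_path in ext_dir.glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        risky = risky_permissions(manifest)
        if risky:
            findings[manifest.get("name", manifest_path.parent.name)] = risky
    return findings
```

Flagged permissions are not proof of malice, but an extension whose stated purpose does not plausibly require them deserves removal.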

CISA Issues New Guidance on Managing Insider Cybersecurity Risks

 



The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.

In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.

To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.

According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.

Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.

Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.

CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.

By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.

Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.

The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases illustrate how poor internal controls and misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.

While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.

SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware, amplified by limited availability of crucial AI chips, has lifted SK hynix into a stronger market position. Though rooted in memory production, the company is now pushing further, launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect investor confidence, fueled by sustained component shortages; the growth owes more to supply constraints and timing than to a strategic pivot.

Early next year, the business will launch a division known as “AI Company” (AI Co.), with operations set to begin in February. The offshoot aims to play a central role in the AI data center landscape, positioning itself alongside major contributors. As demand shifts toward bundled options, clients increasingly prefer complete packages blending infrastructure, software, and support over isolated hardware. According to SK hynix, such changes open doors previously unexplored through traditional component sales alone.

Details remain scarce, but according to statements given to The Register, AI Co. plans to deliver industry-specific AI tools backed by dedicated data center infrastructure. Initially, attention will focus on software that refines how artificial intelligence runs on hardware; over time, investment may broaden into other areas of data center computing. Alongside funding external ventures and novel technology, reports indicate that turning prototypes into market-ready offerings may form a core piece of its evolving strategy.

SK hynix is setting aside about $10 billion for the new venture, and next month should bring news of an interim leadership group and governing committee. The California-based SSD unit Solidigm will be reorganized rather than kept intact: the existing Solidigm entity becomes AI Co., while SSD production moves into a newly created company named Solidigm Inc.

The AI server industry is increasingly leaning toward tailored chips (ASICs) instead of generic ones. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027, and annual units sold might pass fifteen million by 2028, enough to overtake the current volume leader, data center GPUs. While initial prices for ASICs sometimes run high, their running cost tends to stay low compared to premium graphics processors, and the inference workloads that commonly drive demand favor such efficiency-focused designs. Broadcom stands positioned near the front, expected to hold roughly six out of every ten units delivered in 2027.

A wider shortage of memory chips continues to lift SK hynix. According to IDC analysts, demand now clearly exceeds available supply because manufacturers are directing more output toward server and graphics processing units instead of phones and laptops. As a result, prices throughout the sector have climbed, directly boosting the firm's earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%, and in the last quarter alone income surged 66% compared to the same period the previous year, hitting ₩32.8 trillion ($22.9 billion).

Suppliers such as ASML are seeing gains too, thanks to rising demand for semiconductor production equipment. Known mainly for photolithography machines, the company reported €9.7 billion in revenue (roughly $11.6 billion) in its latest quarter, and forecasts suggest a sharp rise in orders for its high-end EUV tools during the current year. Despite broader market shifts, performance remains strong across key segments.

Still, experts point out that the memory chip shortage may hurt buyers, as devices like computers and phones could become more expensive. Predictions indicate that computer shipments might drop during the current year because supplies are tight and costs are climbing.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk’s AI creation, following claims of misuse involving fabricated adult imagery. After reports of manipulated visuals surfaced, authorities acted quickly; Reuters notes this as a world-first restriction on the tool. The move reflects growing unease, echoed across borders, about technology aiding harm, a reaction driven less by policy papers than by real-time consequences caught online.

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts fall under grave cyber offenses demanding urgent regulatory attention. Temporary restrictions appeared in Indonesia after Antara News highlighted risks tied to AI-made explicit material.

Officials said the move was driven by the protection of women, children, and communities, and aimed at reducing mental and societal damage. According to statements by Hafid, fake but realistic intimate imagery counts as digital abuse: although synthetic, such fabricated visuals still trigger actual consequences for victims. The state insists that artificial does not mean harmless, and that impact matters more than origin. Following concerns over Grok's functionality, authorities issued official notices demanding explanations of its development process and the harms observed.

Citing these risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. According to Hafid, whether the service remains accessible locally hinges on the adoption of rigorous filtering systems, compliance with national regulations, and adherence to responsible artificial intelligence practices.

Only after these steps are demonstrated will the service be permitted to continue operating. Last week, Musk and xAI issued a warning that improper use of the chatbot for unlawful acts might lead to legal action; on X, Musk stated clearly that individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly.

A post he re-shared from one follower implied that fault rests more with the people creating fakes than with the system hosting them. The debate has spread beyond Indonesia, reaching American lawmakers: three US senators wrote to both Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their correspondence framed the request around existing rules prohibiting sexually charged imagery produced without consent.

What concerned them most was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. AI tools tied to misuse, such as deepfakes made without consent, now face sharper government reactions, and Indonesia's move is part of this rising trend. Though once slow to act, officials increasingly treat such technology as a risk needing strong intervention.

A shift is visible: responses that were hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from incidents themselves, but how widely they spread before being challenged.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links.

While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting.

Acronis reports that the malware targets WhatsApp contact lists, sending harmful messages automatically and spreading fast with no need for constant hacker input. Notably, even though the main Astaroth component sticks with Delphi and the setup script remains in Visual Basic, analysts spotted a fresh worm-style feature built completely in Python. The mix of languages shows how cyber attackers now build adaptable tools by blending code types for distinct jobs, a variety that supports stealthier, more responsive attack systems.

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware like Maverick and a version of Casbaneiro. Messaging apps now appear more appealing to attackers than classic email phishing. Later that year, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to previous patterns. This operation focused heavily on individuals in Brazil using WhatsApp, distributing the Astaroth malware through deceptive channels.

Nearly all infected machines - over 95 percent - were situated within Brazilian territory, though isolated instances appeared across the U.S. and Austria. Running uninterrupted from early autumn 2025, the method leaned on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with past reports. Messages on WhatsApp now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 

The software also logs its own performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Embedded reporting tools spotted by Acronis give attackers a constant stream of operational insight.

Microsoft BitLocker Encryption Raises Privacy Questions After FBI Key Disclosure Case

 


Microsoft’s BitLocker encryption, long viewed as a safeguard for Windows users’ data, is under renewed scrutiny after reports revealed the company provided law enforcement with encryption keys in a criminal investigation.

The case, detailed in a government filing [PDF], alleges that individuals in Guam illegally claimed pandemic-related unemployment benefits. According to Forbes, this marks the first publicly documented instance of Microsoft handing over BitLocker recovery keys to law enforcement.

BitLocker is a built-in Windows security feature designed to encrypt data stored on devices. It operates through two configurations: Device Encryption, which offers a simplified setup, and BitLocker Drive Encryption, a more advanced option with greater control.

In both configurations, Microsoft generally stores BitLocker recovery keys on its servers when encryption is activated using a Microsoft account. As the company explains in its documentation, "If you use a Microsoft account, the BitLocker recovery key is typically attached to it, and you can access the recovery key online."

A similar approach applies to organizational devices. Microsoft notes, "If you're using a device that's managed by your work or school, the BitLocker recovery key is typically backed up and managed by your organization's IT department."

Users are not required to rely on Microsoft for key storage. Alternatives include saving the recovery key to a USB drive, storing it as a local file, or printing it. However, many customers opt for Microsoft’s cloud-based storage because it allows easy recovery if access is lost. This convenience, though, effectively places Microsoft in control of data access and reduces the user’s exclusive ownership of encryption keys.

Apple provides a comparable encryption solution through FileVault, paired with iCloud. Apple offers two protection levels: Standard Data Protection and Advanced Data Protection for iCloud.

Under Standard Data Protection, Apple retains the encryption keys for most iCloud data, excluding certain sensitive categories such as passwords and keychain data. With Advanced Data Protection enabled, Apple holds keys only for iCloud Mail, Contacts, and Calendar. Both Apple and Microsoft comply with lawful government requests, but neither can disclose encryption keys they do not possess.

Apple explicitly addresses this in its law enforcement guidelines [PDF]: "All iCloud content data stored by Apple is additionally encrypted at the location of the server. For data Apple can decrypt, Apple retains the encryption keys in its US data centers. Apple does not receive or retain encryption keys for [a] customer's end-to-end encrypted data."

This differs from BitLocker’s default behavior, where Microsoft may retain access to a customer’s encryption keys if the user enables cloud backup during setup.

Microsoft states that it does not share its own encryption keys with governments, but it stops short of extending that guarantee to customer-managed keys. In its law enforcement guidance, the company says, "We do not provide any government with our encryption keys or the ability to break our encryption." It further adds, "In most cases, our default is for Microsoft to securely store our customers' encryption keys. Even our largest enterprise customers usually prefer we keep their keys to prevent accidental loss or theft. However, in many circumstances we also offer the option for consumers or enterprises to keep their own keys, in which case Microsoft does not maintain copies."

Microsoft’s latest Government Requests for Customer Data Report, covering July 2024 through December 2024, shows the company received 128 law enforcement requests globally, including 77 from US agencies. Only four requests during that period—three from Brazil and one from Canada—resulted in content disclosure.

After the article was published, a Microsoft spokesperson clarified, “With BitLocker, customers can choose to store their encryption keys locally, in a location inaccessible to Microsoft, or in Microsoft’s cloud. We recognize that some customers prefer Microsoft’s cloud storage so we can help recover their encryption key if needed. While key recovery offers convenience, it also carries a risk of unwanted access, so Microsoft believes customers are in the best position to decide whether to use key escrow and how to manage their keys.”

Privacy advocates argue that this design reflects Microsoft’s priorities. As Erica Portnoy, senior staff technologist at the Electronic Frontier Foundation, stated in an email to The Register, "Microsoft is making a tradeoff here between privacy and recoverability. At a guess, I'd say that's because they're more focused on the business use case, where loss of data is much worse than Microsoft or governments getting access to that data. But by making that choice, they make their product less suitable for individuals and organizations with higher privacy needs. It's a clear message to activist organizations and law firms that Microsoft is not building their products for you."