

Russia Blocks WhatsApp, Pushes State Surveillance App


Russia has effectively erased WhatsApp from its internet, impacting up to 100 million users in a bold move by regulator Roskomnadzor. On Wednesday, the app was removed from the national directory, severing access without prior slowdown warnings, as reported by the Financial Times and Gizmodo. WhatsApp condemned this as an attempt to force users onto a "state-owned surveillance app," highlighting the isolation of millions from secure communication. 

This crackdown escalates Russia's long-running battle against foreign messaging services amid its push for digital sovereignty. Restrictions began in August 2025 with blocks on voice and video calls, citing WhatsApp's failure to aid fraud and terrorism probes. Courts fined the Meta-owned app repeatedly for not removing banned content or opening a local office; by December, speeds dropped 70%, but full removal came after ongoing non-compliance. Telegram faced similar cuts this week, leaving Russians scrambling.

Enter Max, VK's 2025-launched "superapp" modeled on China's WeChat, now aggressively promoted as the national alternative. Preinstalled on devices and endorsed by celebrities and educators, it offers chats, video calls, file sharing up to 4GB, payments via Russia's Faster Payment System, and government services like digital IDs and e-signatures. Unlike WhatsApp's encryption, Max mandates activity sharing with authorities and lacks apparent privacy safeguards, per The Insider. 

The Kremlin justifies the ban as protecting citizens from scams and terrorism while achieving tech independence under sanctions. Spokesman Dmitry Peskov cited Meta's refusal to follow Russian law, though WhatsApp could return via compliance talks. Critics see the block as a new level of speech suppression, extending the post-2022 Ukraine invasion censorship that Amnesty International has called "unprecedented." Yet past efforts, like the failed 2018 Telegram block, exposed the limits of the regime's reach.

Users are turning to VPNs or rivals, but Max's rise could cement state surveillance in daily life. This mirrors global trends—France pushes local apps, and Meta faces U.S. spying claims—but Russia's unencrypted alternative raises alarms for privacy. As Putin eyes indefinite rule, such controls signal deepening authoritarianism, forcing 100 million into monitored chats.

Group-IB Warns Supply Chain Attacks Are Becoming a Self-Reinforcing Cybercrime Ecosystem


Cybercrime groups are reshaping supply chain intrusions into sprawling, interconnected campaigns that spin off data leaks, stolen credentials, and ransomware in relentless loops, according to fresh research from Group-IB. The firm's latest trend report traces how standalone hacks have evolved: today's attacks follow playbooks designed to ripple through corporate ecosystems and trigger chains of follow-on break-ins.

Instead of hitting one company for a quick payout, attackers now target suppliers, service providers, and widely used software components, gaining trusted access to many victims at once. The cases highlighted in recent reports, including the Shai-Hulud npm worm, the Salesloft breach, and the corrupted OpenClaw package, all show how a single upstream compromise spreads rapidly once a shared platform is hit.
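
Defenders can get ahead of install-time worms like Shai-Hulud by auditing dependencies for lifecycle scripts that execute automatically during installation. A minimal sketch in Python, assuming npm's standard lifecycle hook names; the flagging logic and function name are illustrative:

```python
import json
from pathlib import Path

# Lifecycle hooks that run automatically at install time, the channel
# install-time worms typically abuse.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_hooks(manifest_path):
    """Return the lifecycle scripts in a package.json that execute on install.

    A non-empty result is not proof of compromise (many legitimate packages
    compile native code in postinstall), but it marks the dependency for
    manual review.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}
```

Run over every `node_modules/*/package.json` after an install, this gives a quick shortlist of dependencies that execute code on your build machines.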

Modern supply chain attacks unfold in linked phases, Group-IB says. A campaign might begin with a tainted open-source component that spreads malicious code while quietly harvesting credentials. Attackers then combine phishing with OAuth token abuse to seize user identities, opening doors to cloud services and development pipelines. Breached data feeds each step, supplying access keys, corporate relationships, and the situational awareness needed to move laterally across systems. Ransomware comes last, sometimes paired with extortion built on intelligence gathered in earlier stages. Each step enables the next, creating loops the researchers describe as a self-sustaining attack ecosystem.

Group-IB expects artificial intelligence to accelerate this shift. AI-powered tooling can scan vendor networks, software pipelines, and browser extension stores for flaws almost instantly, letting attackers find gaps at a speed human analysts cannot match.

The firm also anticipates declining reliance on classic malware in favor of identity-centric tactics. Rather than deploying obviously malicious software, attackers impersonate authorized personnel and slip into routine operational processes, staying hidden longer while gradually reaching connected environments. Platforms that handle sensitive functions such as human resources, customer data, enterprise planning, or outsourced IT support draw particular interest from threat actors.

A compromise at that level opens doors not just to one company but potentially to hundreds connected through shared services, multiplying the consequences far beyond the initial point of failure. Incidents like Salesloft and the March 2025 breach tied to Oracle illustrate the shift: rather than seeking quick payouts, attackers first collect OAuth credentials and exploit misconfigured third-party connections to move inward.

Once inside client systems, fresh opportunities open up: data theft follows naturally, trusted communication chains become tools for disguise, infected updates spread quietly through established channels, and fraud grows without drawing early attention. Fault lines in digital trust now shape modern cyber threats, according to Dmitry Volkov, who leads Group-IB. What unfolds are not one-off breaches but ripple effects across systems, and because outside providers act as open doors, companies should treat them as part of their own risk landscape.

Rather than reacting after the fact, organizations should model supply chain risk early, continuously scan their software dependencies, and maintain visibility into how information moves; without that visibility, gaps stay hidden until they are exploited. With supply chain breaches becoming routine operations, protecting trust across users, partners, and code dependencies has shifted from a backup measure to a core part of security planning.

What once seemed secondary now shapes the foundation. Trust must hold where systems connect, because failure at one point pulls down many. Security teams can no longer treat third-party relationships as external risks; they are built-in conditions, and when components depend on each other, weakness spreads fast. The report frames the shift clearly: resilience lives not just in tools but in verified connections. The goal is not adding layers; it is strengthening what already ties everything together.

Darktrace Flags Surge in Phishing as Identity-Based Attacks Redefine 2025 Threat Landscape


More than 32 million high-confidence phishing emails were identified in 2025, signaling a sharp rise in identity-focused cyberattacks, according to new findings from Darktrace.

The cybersecurity firm analyzed incidents across its global customer network, revealing a year marked by growing automation, overlapping attack techniques, and faster execution by threat actors.

Among the total phishing volume, over 8.2 million emails specifically targeted high-profile individuals and executives, representing more than a quarter of all attempts observed. Additionally, 1.6 million phishing messages were traced to newly registered domains, while 1.2 million leveraged malicious QR codes to lure victims.

The report found that 70% of phishing emails bypassed DMARC authentication checks. Spear-phishing accounted for 41% of attacks, and 38% featured new social engineering strategies. Roughly one-third of the phishing emails exceeded 1,000 characters in length, indicating increasingly sophisticated messaging tactics.
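
Several of the indicators Darktrace quantifies (newly registered sender domains, authentication failures, QR codes, unusually long bodies) lend themselves to simple triage scoring. A sketch with hypothetical weights and cutoffs; a production filter would learn these from labeled mail rather than hard-code them:

```python
from dataclasses import dataclass

@dataclass
class EmailFeatures:
    domain_age_days: int  # age of the sender's registered domain
    dmarc_pass: bool      # outcome of DMARC evaluation
    has_qr_code: bool     # QR code detected in body or attachments
    body_length: int      # characters in the message body

def phishing_score(f: EmailFeatures) -> int:
    """Combine report-inspired signals into a triage score (weights are
    illustrative). Note the report found most phishing got past DMARC,
    so a passing check alone should never clear a message."""
    score = 0
    if f.domain_age_days < 30:  # 1.6M messages came from newly registered domains
        score += 3
    if not f.dmarc_pass:        # authentication failure is still a strong signal
        score += 2
    if f.has_qr_code:           # 1.2M messages used malicious QR codes
        score += 2
    if f.body_length > 1000:    # long, elaborate messages correlated with spear-phishing
        score += 1
    return score  # e.g. quarantine at >= 4, flag for review at >= 2
```

The thresholds in the comments are assumptions for the sketch, not values from the Darktrace report.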

Identity Compromise Emerges as Primary Breach Method

The analysis underscores a major shift in cyber intrusion tactics: identity compromise has surpassed vulnerability exploitation as the leading initial access method. Although Common Vulnerabilities and Exposures (CVEs) rose approximately 20% year-over-year, many exploits were deployed even before vulnerabilities were publicly disclosed.

"Identity has become the attacker's skeleton key. Instead of forcing their way through a firewall, adversaries are logging in with stolen credentials, hijacked tokens and abused permissions, then moving laterally under the cover of legitimacy," commented Shane Barney, CISO at Keeper Security.

"When identity controls are fragmented or overly permissive, attackers don't need novel exploits. They just need access that looks routine."

In the Americas, nearly 70% of reported incidents involved SaaS and Microsoft 365 account takeovers. The manufacturing sector accounted for 17% of documented cases and represented 29% of ransomware incidents in the region. Overall, 47% of global security events tracked in 2025 originated from the Americas.

Regional data further illustrates varying levels of digital resilience and geopolitical pressure.

In Latin America, 44% of incidents stemmed from malware spreading after phishing or credential theft. The education sector was most affected, accounting for 18% of cases. Brazil, Mexico, and Colombia recorded the highest activity levels over the past three years. Across Europe, 58% of security incidents were linked to cloud and email compromise, while 42% were tied to network-based attacks. Africa reported a 60% year-over-year spike in ransomware incidents, with 76% of compromises categorized as network-driven.

In Asia-Pacific and Japan, 84% of organizations indicated that AI-driven threats are already affecting them. However, only 42% said they have formal governance policies in place for safe AI usage.

"Identity is no longer about perimeter-based defense. The rise in AI-based agents and the massively accelerating threat landscape has rendered that approach inadequate, and prompted a shift towards identity as the critical element to enterprise security," SailPoint CEO, Mark McClain, said.

"This report's findings demonstrate that there is now a need for real-time, intelligent, and dynamic identity security, built to govern and secure not just 'who,' or in the case of AI agents, 'what,' has access to the enterprise, but what data they can access and what they are able to do once inside."

Google Observes Threat Actors Deploying AI During Live Network Breaches


As artificial intelligence has become a staple of modern organizations, it has transformed how they analyze data, automate decisions, and defend their digital perimeters, moving from experimental labs into the operational bloodstream. But as these systems are woven deeper into company infrastructure, the technology itself is becoming both a strategic asset and an attractive target for attackers.

Adversaries seeking leverage are now studying, imitating, and in some cases quietly manipulating the same models used to draft code, triage alerts, and streamline workflows. As Fast Company points out, this dual reality is redefining cyber risk, putting AI at the heart of both defense strategy and offensive innovation. 

Insights from Google Cloud's AI Threat Tracker indicate that this shift is accelerating rapidly. The report describes a significant rise in model extraction, or "distillation," attempts, in which attackers systematically query proprietary AI systems to approximate their underlying capabilities without ever breaching a network in the traditional sense.

Google Threat Intelligence observes that state-aligned and financially motivated actors affiliated with China, Iran, North Korea, and Russia are integrating artificial intelligence tools into nearly every stage of the intrusion lifecycle. 

A growing number of these campaigns include automated reconnaissance, vulnerability mapping, and highly tailored social engineering, which can be carried out with minimal direct human intervention and are increasingly modular, scalable, and effective. 

Consistent with these findings, a newly released assessment by the Google Threat Intelligence Group indicates that a more operational phase of the threat landscape has begun. The analysis warns that adversaries no longer treat artificial intelligence as a peripheral experiment; they are embedding it directly into live attack workflows.

In particular, the targeting and misuse of Gemini models is highlighted, reflecting a broader trend in which commercially available generative systems are systematically evaluated, stressed, and sometimes incorporated into malicious toolchains. 

Researchers documented instances in which active malware strains called Gemini directly at runtime through its application programming interface. Rather than hard-coding every functional component into the malware binary, operators dynamically requested task-specific source code from the model as the intrusion progressed.

The HONESTCUE malware family, for example, issued structured prompts to obtain C# code snippets that it then executed within its attack chain. By externalizing portions of its logic, the malware reduced its static footprint and complicated detection strategies that rely on signature matching or behavioral heuristics.

Further, the report describes sustained model extraction, or distillation, attacks, in which threat actors generated large volumes of carefully sequenced queries to map response patterns and approximate the model's internal decision boundaries.

The objective is to replicate aspects of a proprietary model's performance through iterative analysis, training substitute systems without bearing the full cost and effort of developing a large-scale model from scratch.

A Google representative reported that multiple campaigns marked by abnormal prompt velocity and structured probing aimed at harvesting Gemini's underlying capabilities have been identified and disrupted. This underscores the need for safeguards that protect not only data from exfiltration but the model's intelligence itself.
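
One coarse defense against the probing described here is rate analysis per API credential. A sketch of a sliding-window prompt-velocity check; the window size and limit are illustrative, not values from Google's systems:

```python
from collections import defaultdict, deque

class PromptVelocityMonitor:
    """Flag API credentials whose query rate exceeds a budget.

    High, sustained prompt velocity is one rough signal of the structured
    probing used in model extraction ("distillation") attempts. Real
    defenses would also examine query content and sequencing.
    """

    def __init__(self, window_seconds=60, max_prompts=100):
        self.window = window_seconds
        self.max_prompts = max_prompts
        self.events = defaultdict(deque)  # api_key -> recent timestamps

    def record(self, api_key, timestamp):
        """Record one prompt; return True if the key is over its budget."""
        q = self.events[api_key]
        q.append(timestamp)
        # Evict timestamps that have slid out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_prompts
```

A deployment would feed this from API gateway logs and route flagged keys to throttling or review.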

Parallel intelligence from CrowdStrike strengthens the assessment that AI integration is materially accelerating the tempo of modern intrusions. Its investigators report adversaries executing large language models in real time on compromised hosts to generate single-line commands for reconnaissance, credential harvesting, and data staging, effectively shifting tactical decision-making to on-demand AI systems.

The firm's metrics show this operational acceleration directly: in 2025, the average eCrime "breakout time," the interval between initial access and lateral movement toward high-value assets, dropped to 29 minutes, with the fastest observed transition occurring in just 27 seconds.

CrowdStrike documented the LAMEHUG malware using an external LLM via the Hugging Face API to generate dynamic commands for enumerating hardware profiles, processes, services, network configurations, and Active Directory domain data from minimal embedded prompts. By outsourcing reconnaissance logic to a model, operators reduced the need for pre-compiled modules and could adapt rapidly without modifying the underlying binary.

This architectural choice lets a single threat actor pivot interactively, issuing contextualized instructions that respond to the environment in real time. The technology sector remains a sustained focus, given its concentration of privileged access paths and its systemic significance across the supply chain.

In addition, CrowdStrike noted that artificial intelligence is extending across multiple phases of the intrusion lifecycle. Incidents involving fake CAPTCHA lures grew 563 percent in 2025 compared with 2024, pointing to generative systems in social engineering. Moderately resourced groups such as Punk Spider have been observed using Gemini and DeepSeek to develop scripts that extract credentials from backup archives, terminate defensive services, and erase forensic evidence.

AI-assisted scripting narrows the capability gap between mid-tier criminal operators and highly trained red teams, enabling coordinated chains that combine identity abuse, backup compromise, and domain escalation within a single operation.

Separately, adversaries distributed malicious npm packages that instructed AI command-line tools to generate commands for exfiltrating authentication material and cryptoassets. Incident responders found more than 90 environments executing this adversary-developed AI workflow, a sign that threat actors are delegating core post-exploitation functions to intelligent agents inside enterprise networks. State-aligned groups are adopting model-driven approaches as well.

The Russian-linked collective FANCY BEAR deployed LAMEHUG against Ukrainian government entities, embedding prompts that instructed the model to copy Office and PDF documents, gather domain intelligence, and stage system data into text files for exfiltration.

The campaign illustrates how quickly reconnaissance, targeting, and staging can be automated once a model is incorporated into an intrusion toolchain, even though LLM-enabled malware has not yet proven more effective than traditional tools. Underground forums reflect the same operational shift: by 2025, references to ChatGPT outnumbered any other model by a significant margin, a development attributed less to technical preference than to the platform's widespread recognition and accessibility.

In the near term, AI is likely to act as a force multiplier, reducing operational friction, compressing timelines, and reshaping expectations about attacker speed and adaptability.

Furthermore, Google announced that it worked with industry partners to dismantle infrastructure belonging to a suspected China-nexus espionage actor tracked as UNC2814, underscoring the convergence of cloud platforms and covert command infrastructure.

According to findings published by Google Threat Intelligence Group and Mandiant, the group compromised approximately 53 organizations across 42 countries, with additional suspected intrusions in 20 more. The actor has reportedly maintained long-term access to international government entities and global telecommunications providers across Africa, Asia, and the Americas since at least 2017.

Investigators observed the group using API calls to legitimate software-as-a-service applications as a command-and-control strategy, deliberately blending malicious traffic with routine cloud communication. The operation relied on a C-based backdoor dubbed GRIDTIDE, which abuses the Google Sheets API for covert communication.

The malware implements a polling mechanism built on spreadsheet cells: it retrieves attacker instructions from cell A1 and writes execution status codes back to it, a pair of adjacent cells carries bidirectional data, including command output and staged files for exfiltration, and another cell stores the compromised host's system metadata. The design enables remote tasking and data transfer while concealing C2 exchanges in otherwise benign API activity.
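
Polling-based C2 like GRIDTIDE's tends to leave a statistical fingerprint: requests fire on a fixed timer, so inter-arrival jitter is abnormally low compared with human- or event-driven traffic. A hedged sketch of that beaconing heuristic; the 10% jitter cutoff is an assumption, not an established threshold:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1, min_events=10):
    """Return True if request times look like fixed-interval polling.

    Computes the coefficient of variation of inter-arrival gaps; timer-
    driven beacons show very low relative jitter. A real detector would
    also account for deliberate jitter added by the implant and for
    legitimate schedulers that poll on timers.
    """
    if len(timestamps) < min_events:
        return False  # too few requests to judge
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(gaps)
    if avg == 0:
        return False
    return pstdev(gaps) / avg < max_jitter_ratio
```

Fed with per-client request timestamps from cloud API audit logs, this can surface hosts polling a single spreadsheet endpoint on a metronome.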

Although GRIDTIDE was identified in multiple environments, researchers could not definitively determine whether every intrusion used the same payload. Initial access vectors are still under investigation, although UNC2814 has historically exploited vulnerable web servers and edge devices.

Post-compromise activity included lateral movement over SSH using service accounts, extensive use of living-off-the-land binaries for reconnaissance and privilege escalation, and persistence through an embedded systemd service deployed at /etc/systemd/system/xapt.service, which launched a fresh malware instance from /usr/sbin/xapt.
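
Hunting for persistence of this kind can start with a sweep of systemd unit files whose launch binaries fall outside an approved set. A rough sketch; the allowlist approach is illustrative, and real triage would also verify package ownership, file hashes, and unit creation times:

```python
from pathlib import Path

def suspicious_units(unit_dirs, allowlist=frozenset()):
    """List systemd services whose ExecStart binary is not allowlisted.

    A coarse hunting aid for implants like the reported xapt.service.
    unit_dirs would typically be ["/etc/systemd/system"].
    """
    findings = []
    for d in unit_dirs:
        for unit in sorted(Path(d).glob("*.service")):
            for line in unit.read_text(errors="ignore").splitlines():
                if line.strip().startswith("ExecStart="):
                    parts = line.split("=", 1)[1].strip().split()
                    if parts and parts[0] not in allowlist:
                        findings.append((unit.name, parts[0]))
    return findings
```

On a fleet, the allowlist can be generated from a known-good golden image so only drift from the baseline is reported.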

The campaign also deployed SoftEther VPN Bridge, a tool previously associated with multiple China-linked threat clusters, to create outbound encrypted tunnels to external infrastructure.

Based on forensic analysis, GRIDTIDE appears to have been selectively deployed on endpoints containing personally identifiable information in order to obtain intelligence on specific individuals or entities. Google reported that no confirmed evidence of data exfiltration occurred during the observed activity window. 

Google's remediation measures included terminating attacker-controlled Google Cloud projects, disabling UNC2814 infrastructure, revoking access to compromised accounts, and blocking the misuse of the Google Sheets API endpoints used for C2 operations.

Affected organizations received official notification, and confirmed victims were given direct incident response support, in what the company described as one of the most extensive and strategic campaigns it has encountered in recent years.

Taken together, these disclosures indicate that as AI models, APIs, and service accounts become more integrated into enterprise workflows, they will need to be governed with the same rigor as privileged infrastructure. Security leaders should treat these assets as high-value, with strict access controls, anomaly detection, and continuous logging.

Threat hunting programs should monitor for abnormal prompt velocity, unusual API polling patterns, and model-driven command execution. As part of this effort, organizations should evaluate identity hygiene, restrict outbound connectivity from sensitive workloads, and harden the edge systems that serve as attackers' initial points of entry.

Cloud-native telemetry, behavioral analytics, and zero-trust segmentation can contain adversaries who blend malicious traffic with legitimate SaaS communications. Defensive strategies must therefore evolve in parallel with the operationalization of AI across reconnaissance, lateral movement, and persistence, with particular focus on model security, supply chain integrity, and rapid, coordinated response.

A clear lesson has emerged: Artificial intelligence is no longer peripheral to cyber security risk, but has become integral to both the threat model and the defense architecture designed to counteract it.

Is Spyware Secretly Hiding on Your Phone? How to Detect It, Remove It, and Prevent It

If your phone has started behaving in ways you cannot explain, such as draining power unusually fast, heating up during minimal use, crashing, or displaying unfamiliar apps, it may be more than a routine technical fault. In some cases, these irregularities signal the presence of spyware, a type of malicious software designed to quietly monitor users and extract personal information.

Spyware typically enters smartphones through deceptive mobile applications, phishing emails, malicious attachments, fraudulent text messages, manipulated social media links, or unauthorized physical access. These programs are often disguised as legitimate utilities or helpful tools. Once installed, they operate discreetly in the background, avoiding obvious detection.

Depending on the variant, spyware can log incoming and outgoing calls, capture SMS and MMS messages, monitor conversations on platforms such as Facebook and WhatsApp, and intercept Voice over IP communications. Some strains are capable of taking screenshots, activating cameras or microphones, tracking location through GPS, copying clipboard data, recording keystrokes, and harvesting login credentials or cryptocurrency wallet details. The stolen information is transmitted to external servers controlled by unknown operators.

Not all spyware functions the same way. Some applications focus on aggressive advertising tactics, overwhelming users with pop-ups, altering browser settings, and collecting browsing data for revenue generation. Broader mobile surveillance tools extract system-level data and financial credentials, often distributed through mass phishing campaigns. More intrusive software, frequently described as stalkerware, is designed to monitor specific individuals and has been widely associated with domestic abuse cases. At the highest level, intricately designed commercial surveillance platforms such as Pegasus have been deployed in targeted operations, although these tools are costly and rarely directed at the general public.

Applications marketed as parental supervision or employee productivity tools also require caution. While such software may have legitimate oversight purposes, its monitoring capabilities mirror those of spyware if misused or installed without informed consent.

Identifying spyware can be difficult because it is engineered to remain hidden. However, several warning indicators may appear. These include sudden battery drain, overheating, sluggish performance, unexplained crashes, random restarts, increased mobile data consumption, distorted calls, persistent pop-up advertisements, modified search engine settings, unfamiliar applications, difficulty shutting down the device, or unexpected subscription charges. Receiving suspicious messages that prompt downloads or permission changes may also signal targeting attempts. If a device has been out of your possession and returns with altered settings, tampering should be considered.
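
Several of these warning signs, notably data consumption, can be checked numerically against the device's own history. A sketch that flags a day's mobile data usage when it deviates sharply from the baseline; the three-sigma threshold and seven-day minimum are arbitrary illustrations:

```python
from statistics import mean, pstdev

def data_usage_alert(daily_mb, threshold_sigmas=3.0):
    """Flag the most recent day's usage if it is an outlier vs. history.

    daily_mb is a chronological list of daily data totals in megabytes,
    with the day under test last. Spyware exfiltration is only one of
    many causes of a spike, so this is a prompt for review, not a verdict.
    """
    history, today = daily_mb[:-1], daily_mb[-1]
    if len(history) < 7:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return today > mu  # any rise over a perfectly flat baseline stands out
    return (today - mu) / sigma > threshold_sigmas
```

Most platforms expose per-day data counters in settings, so the inputs are easy to collect by hand or via a monitoring app.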

On Android devices, reviewing whether installation from unofficial sources has been enabled is critical, as this setting allows apps outside the Google Play Store to be installed. Users should also inspect special app access and administrative permissions for unfamiliar entries. Malicious programs often disguise themselves with neutral names such as system utilities. Although iPhones are generally more resistant without jailbreaking or exploited vulnerabilities, they are not immune. Failing to install firmware updates increases exposure to known security flaws.

If spyware is suspected, measured action is necessary. Begin by installing reputable mobile security software from verified vendors and running a comprehensive scan. Manually review installed applications and remove anything unfamiliar. Examine permission settings and revoke excessive access. On Android, restarting the device in Safe Mode temporarily disables third-party apps, which may assist in removal. Updating the operating system can also disrupt malicious processes. If the issue persists, a factory reset may be required. Important data should be securely backed up before proceeding, as this step erases all stored content. In rare instances, professional technical assistance or device replacement may be needed.

Long-term protection depends on consistent preventive practices. Maintain strict physical control over your phone and secure it with a strong password or biometric authentication. Configure automatic screen locking to reduce the risk of unauthorized access. Install operating system updates promptly, as they contain critical security patches. Download applications only from official app stores and review developer credibility, ratings, and permission requests carefully before installation. Enable built-in security scanners and avoid disabling system warnings. Regularly audit app permissions, especially for access to location, camera, microphone, contacts, and messages.

Remain cautious when interacting with links or attachments received through email, SMS, or social media, as phishing remains a primary delivery method for spyware. Avoid jailbreaking or rooting devices, since doing so weakens built-in protections and increases vulnerability. Activate multi-factor authentication on essential accounts such as email, banking, and cloud storage services, and monitor login activity for irregular access. Periodically review mobile data usage and billing statements for unexplained charges. Maintain encrypted backups so decisive action, including a factory reset, can be taken without permanent data loss.

No mobile device can be guaranteed completely immune from surveillance threats. However, informed digital habits, timely updates, disciplined permission management, and layered account security significantly reduce the likelihood of covert monitoring. In an era where smartphones store personal, financial, and professional data, vigilance remains the strongest defense.

Google Expands Privacy Tools With Automated ID Detection and Deepfake Image Removal


Google's approach to privacy has long relied on users reporting problems themselves, but automated tools are now taking a bigger role in spotting private details online. Rather than waiting for complaints, its systems proactively detect sensitive content, including artificial imagery, across search results. The shift speeds up removals and gives people better control over how their personal data surfaces on the platform without requiring them to act first.

What stands out in this update is a more capable "Results About You" feature. Using Google's vast web index, it searches public pages for a user's personal details. There is one condition: people must share some identifying information for matches to be found. After sign-up, automated scans run regularly, and alerts go out when fresh links exposing that person's data appear in search results.

One major upgrade lets the software spot government ID numbers on web pages, including driver's license numbers, passport data, and national identity numbers. Detection depends on permissions set in the user's profile and on self-submitted records. Driver's licenses require the full sequence to match, while passports and tax IDs need only a partial match. Once configured, the system reviews indexed material to flag possible leaks.
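
The full-versus-partial matching rule can be illustrated with a small sketch. Nothing here reflects Google's actual implementation; the suffix length, normalization, and function name are assumptions for the example:

```python
import re

def id_exposed(user_id_number, page_text, full_match_required=False, suffix_len=4):
    """Check whether a user-submitted ID number appears in page text.

    Loosely mirrors the described rule: some document types require the
    full number, others match on a partial sequence. Spaces and dashes
    are ignored so formatting differences do not hide a match.
    """
    def normalize(s):
        return re.sub(r"[\s-]", "", s)

    target = normalize(user_id_number)
    page = normalize(page_text)
    if full_match_required:
        return target in page
    # Partial rule: the trailing characters appearing contiguously.
    return target[-suffix_len:] in page
```

A short suffix match like this produces false positives on real pages, which is one reason a production system would combine it with surrounding context.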

Google cannot take content off outside sites, but it can remove links from its own search listings. Because being found online so often depends on search engines, delisting those entries can greatly limit exposure to identity theft, unwanted personal disclosures, or abuse.

The firm is also handling non-consensual intimate imagery differently: its revised policy now covers AI-made fakes. With manufactured images spreading fast, reports may cover real photos alongside altered ones, and several pictures can be submitted at once, helping people facing organized abuse move through the steps quicker.

A new reporting option sits behind the three-dot menu beside image results: clicking it lets people flag media showing them in sensitive situations, choose "Remove result," and confirm whether the pictures are authentic or made with artificial tools. Google says responses now come faster, especially when many images need attention, and the streamlined steps help handle high volumes without delays piling up.

The system also guards against recurring content. Once a removal is approved, ongoing scans look for related material during later indexing rounds, whether it involves personal details or visual files, and matches trigger action automatically. Duplicates are suppressed before they appear in results, with no repeated forms needed; each cycle works silently unless something flagged emerges.
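
The suppress-duplicates behavior amounts to remembering a fingerprint of removed content and checking later crawls against it. A minimal sketch using an exact hash; matching *altered* images, as the policy covers, would require perceptual hashing, which this deliberately does not attempt:

```python
import hashlib

class RemovalRegistry:
    """Remember fingerprints of removed content so identical copies found
    in later crawls can be suppressed without a new user request.

    Illustrative only: SHA-256 catches byte-identical duplicates; any
    re-encoding or crop defeats it.
    """

    def __init__(self):
        self._fingerprints = set()

    @staticmethod
    def _fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register_removal(self, content: bytes):
        self._fingerprints.add(self._fingerprint(content))

    def should_suppress(self, content: bytes) -> bool:
        return self._fingerprint(content) in self._fingerprints
```

In an indexing pipeline, the `should_suppress` check would run before a crawled item becomes eligible to appear in results.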

Even with these improvements, the tools have limits: they restrict what shows up in searches but leave the material live on the source sites. Still, since most people rely on Google to find content, taking links out of results tends to help, sometimes significantly.

ID number detection is available now. Faster image reporting will roll out to many regions shortly, with proactive scanning to follow, and Google expects the features to reach nearly every country by the end of the year, though timing may vary by location.
