
WhatsApp Reveals "Private Processing" Feature for Cloud-Based AI Features

WhatsApp claims even it cannot see private data during processing

WhatsApp has introduced ‘Private Processing,’ a new technology that lets users access advanced AI features by offloading tasks to privacy-preserving cloud servers without exposing their chats to Meta. Meta claims even it cannot see the messages while processing them. The system relies on encrypted cloud infrastructure and hardware-based isolation so that requests remain invisible to everyone, including Meta, while they are being processed.

About private processing

For those who decide to use Private Processing, the system first runs an anonymous verification step through the user’s WhatsApp client to confirm that the request comes from a legitimate user.

Meta claims this system keeps WhatsApp’s end-to-end encryption intact while offering AI features in chats. However, the feature currently applies only to select use cases and excludes Meta’s broader AI deployments, including those used in India’s public service systems.

Private Processing employs Trusted Execution Environments (TEEs) — secure virtual machines hosted on cloud infrastructure that keep AI requests hidden.

About the system

  • Encrypts user requests from the device to the TEE using end-to-end encryption
  • Restricts storage or logging of messages after processing
  • Provides logs and binary images for external verification and audits

WhatsApp builds AI amid wider privacy concerns

According to Meta, Private Processing is a response to privacy questions around AI and messaging. WhatsApp joins companies such as Apple that introduced confidential AI computing models in the past year. “To validate our implementation of these and other security principles, independent security researchers will be able to continuously verify our privacy and security architecture and its integrity,” Meta said.

The approach is similar to Apple’s Private Cloud Compute in terms of public transparency and stateless processing. Currently, however, WhatsApp is applying it only to select features. Apple, on the other hand, has declared plans to implement this model across all its AI tools, whereas WhatsApp has made no such commitment yet.

WhatsApp says, “Private Processing uses anonymous credentials to authenticate users over OHTTP. This way, Private Processing can authenticate users to the Private Processing system but remains unable to identify them.”
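The quoted design can be pictured with a small, purely illustrative sketch. Everything below is hypothetical: the token issuance, the relay call, and the data are stand-ins rather than WhatsApp's actual API, but it shows the shape of authenticating with an anonymous credential over an OHTTP-style relay.

```python
# Conceptual sketch only: illustrates anonymous-credential auth over an
# OHTTP-style relay, NOT WhatsApp's real implementation or API.
import hashlib
import os


def issue_anonymous_credential(account_secret: bytes) -> bytes:
    """Hypothetical issuance step: the client proves it holds a valid account
    and receives an unlinkable token (real systems use blind signatures)."""
    return hashlib.sha256(account_secret + os.urandom(16)).digest()


def send_via_relay(encapsulated_request: bytes, token: bytes) -> None:
    """Hypothetical OHTTP hop: the relay sees the client's IP but only an
    encrypted blob; the gateway sees the blob plus a valid token, but nothing
    that ties the request back to a specific account."""
    print(f"relay forwards {len(encapsulated_request)} bytes, token {token.hex()[:8]}")


token = issue_anonymous_credential(b"per-device secret")
send_via_relay(b"hpke-encrypted AI request bytes", token)
```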

Public Wary of AI-Powered Data Use by National Security Agencies, Study Finds

 

A new report released alongside the Centre for Emerging Technology and Security (CETaS) 2025 event sheds light on growing public unease around automated data processing in national security. Titled UK Public Attitudes to National Security Data Processing: Assessing Human and Machine Intrusion, the research reveals limited public awareness and rising concern over how surveillance technologies—especially AI—are shaping intelligence operations.

The study, conducted by CETaS in partnership with Savanta and Hopkins Van Mil, surveyed 3,554 adults and included insights from a 33-member citizens’ panel. While findings suggest that more people support than oppose data use by national security agencies, especially when it comes to sensitive datasets like medical records, significant concerns persist.

During a panel discussion, investigatory powers commissioner Brian Leveson, who chaired the session, addressed the implications of fast-paced technological change. “We are facing new and growing challenges,” he said. “Rapid technological developments, especially in AI [artificial intelligence], are transforming our public authorities.”

Leveson warned that AI is shifting how intelligence gathering and analysis is performed. “AI could soon underpin the investigatory cycle,” he noted. But the benefits also come with risks. “AI could enable investigations to cover far more individuals than was ever previously possible, which raises concerns about privacy, proportionality and collateral intrusion.”

The report shows a divide in public opinion based on how and by whom data is used. While people largely support the police and national agencies accessing personal data for security operations, that support drops when it comes to regional law enforcement. The public is particularly uncomfortable with personal data being shared with political parties or private companies.

Marion Oswald, co-author and senior visiting fellow at CETaS, emphasized the intrusive nature of data collection—automated or not. “Data collection without consent will always be intrusive, even if the subsequent analysis is automated and no one sees the data,” she said.

She pointed out that predictive data tools, in particular, face strong opposition. “Panel members, in particular, had concerns around accuracy and fairness, and wanted to see safeguards,” Oswald said, highlighting the demand for stronger oversight and regulation of technology in this space.

Despite efforts by national security bodies to enhance public engagement, the study found that a majority of respondents (61%) still feel they understand “slightly” or “not at all” what these agencies actually do. Only 7% claimed a strong understanding.

Rosamund Powell, research associate at CETaS and co-author of the report, said: “Previous studies have suggested that the public’s conceptions of national security are really influenced by some James Bond-style fictions.”

She added that transparency significantly affects public trust. “There’s more support for agencies analysing data in the public sphere like posts on social media compared to private data like messages or medical data.”

AI Now Writes Up to 30% of Microsoft’s Code, Says CEO Satya Nadella

 

Artificial intelligence is rapidly reshaping software development at major tech companies, with Microsoft CEO Satya Nadella revealing that between 20% and 30% of code in the company’s repositories is currently generated by AI tools. 

Speaking during a fireside chat with Meta CEO Mark Zuckerberg at Meta’s LlamaCon conference, Nadella shed light on how AI is becoming a core contributor to Microsoft’s development workflows. He noted that Microsoft is increasingly relying on AI not just for coding but also for quality assurance. 

“The agents we have for reviewing code; that usage has increased,” Nadella said, adding that the performance of AI-generated code differs depending on the programming language. While Python showed strong results, C++ remained a challenge. “C Sharp is pretty good but C++ is not that great. Python is fantastic,” he noted. 

When asked about the role of AI in Meta’s software development, Zuckerberg did not provide a specific figure but shared that the company is prioritizing AI-driven engineering to support the development of its Llama models. 

“Our bet is that probably half the development is done by AI as opposed to people and that will just kind of increase from there,” Zuckerberg said. 

Microsoft’s Chief Technology Officer Kevin Scott has previously projected that AI will be responsible for generating 95% of all code within the next five years. Speaking on the 20VC podcast, Scott emphasized that human developers will still play a vital role. 

“Very little is going to be — line by line — human-written code,” he said, but added that AI will “raise everyone’s level,” making it easier for non-experts to create functional software. The comments from two of tech’s biggest leaders point to a future where AI not only augments but significantly drives software creation, making development faster, more accessible, and increasingly automated.

Do Not Charge Your Phone at Public Stations, Experts Warn

For a long time, smartphones have had a built-in feature that protects us against unauthorized access over USB. On Android and iOS, a pop-up asks us to confirm access before a USB data connection is established and data can be transferred.

But this defense is not enough to protect against “juice-jacking” — a hacking technique that manipulates charging stations to install malicious code, steal data, or gain access to a device while it is plugged in. Cybersecurity researchers have discovered a serious flaw in this protection that hackers can exploit with ease.

Hackers using new technique to hack smartphones via USB

According to experts, hackers can now use a new method called “choice jacking” to get access to a smartphone confirmed without the user ever realizing it.

First, the compromised charging station presents itself as a USB keyboard when a phone is connected. Then, using USB Power Delivery, it performs a “USB PD Data Role Swap,” establishes a Bluetooth connection, triggers the file-transfer consent pop-up, and approves the permission itself while acting as a Bluetooth keyboard.

In this way the charging station bypasses the protection mechanism on the device that is meant to guard users against attacks via USB peripherals. The consequences can be serious: the attacker can gain access to all files and personal data stored on the smartphone and use them to compromise accounts.

Researchers at Graz University of Technology tested the technique on devices from many manufacturers, including Samsung, which together with Apple sells the most smartphones worldwide. All tested smartphones allowed data transfer for as long as the screen was unlocked.

No solution to this problem

Although smartphone manufacturers are aware of the problem, safeguards against juice-jacking remain insufficient. Only Google and Apple have implemented a fix, which requires users to enter their PIN or password before a connected device is authorized and data transfer can begin. Other manufacturers have yet to offer effective solutions that address the issue.

If your smartphone has USB debugging enabled, the risk is even greater: USB debugging lets an attacker reach the device via the Android Debug Bridge (ADB), deploy their own apps, run files, and generally operate with elevated privileges.
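If you want to check whether USB debugging is currently enabled on your own Android phone, a minimal sketch like the one below can query the setting, assuming the Android SDK's adb tool is installed and the phone is connected and has authorized your computer.

```python
# Minimal sketch: query Android's global "adb_enabled" setting over adb.
# Assumes adb is on PATH and the connected phone has authorized this computer.
import subprocess

result = subprocess.run(
    ["adb", "shell", "settings", "get", "global", "adb_enabled"],
    capture_output=True, text=True, check=True,
)
enabled = result.stdout.strip() == "1"
print("USB debugging is", "ENABLED - consider turning it off" if enabled else "disabled")
```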

How to be safe?

The easiest way for users to protect themselves from juice-jacking attacks is to never use a public USB charging station. Charging points in busy areas such as airports and malls should be avoided in particular, as they are the most attractive targets.

Users are advised to carry their power banks when traveling and always keep their smartphones updated.

Microsoft Launches Recall AI for Windows 11 Copilot+ PCs with Enhanced Privacy Measures

 

After months of delays stemming from privacy and security concerns, Microsoft has officially rolled out its Recall AI feature for users of Windows 11 Copilot+ PCs. The feature, which has now exited its beta phase, is included in the latest Windows update. Recall AI enables users to search their on-screen activity by automatically taking screenshots and storing them—along with any extracted text—in a locally encrypted and searchable database. This makes it easier for users to find and revisit previous interactions, such as documents, applications, or web pages, using natural language search. 
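To make the idea of a local, searchable screenshot index concrete, here is a toy sketch. It is not Microsoft's implementation, and the encryption layer is omitted for brevity; it simply stores extracted screenshot text in a SQLite full-text index and queries it, assuming the Python build's bundled SQLite includes the FTS5 module.

```python
# Toy illustration of a locally searchable index of OCR'd screenshot text.
# Not Recall's actual design; encryption is omitted for brevity.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE snapshots USING fts5(taken_at, app, ocr_text)")
db.executemany(
    "INSERT INTO snapshots VALUES (?, ?, ?)",
    [
        ("2025-05-01 10:03", "Edge", "quarterly budget spreadsheet draft"),
        ("2025-05-01 10:15", "Word", "trip itinerary for the Berlin conference"),
    ],
)

# Full-text lookup over the extracted text: "which window mentioned the budget?"
for taken_at, app in db.execute(
    "SELECT taken_at, app FROM snapshots WHERE snapshots MATCH ?", ("budget",)
):
    print(taken_at, app)
```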

Originally introduced in May 2024, Recall AI faced widespread criticism due to concerns around user privacy and the potential for misuse. Microsoft delayed its public launch several times, including a planned release in October 2024, to address these issues and gather feedback from Windows Insider testers. 

In its revised version, Microsoft has made Recall AI an opt-in tool with built-in privacy protections. All data remains on the user’s device, with no transmission to Microsoft servers or third parties. Features such as Windows Hello authentication, full local encryption, and user control over data storage have been added to reinforce security. Microsoft assures users they can completely remove the feature at any time, although temporary system files may persist briefly before being permanently deleted. 

For enterprise users with an active Microsoft 365 E3 subscription, the company offers advanced administrative controls. These allow IT departments to set access permissions and manage security policies related to the use of Recall AI in workplace environments. Alongside Recall AI, Microsoft has also launched two additional features tailored to Copilot+ PCs. 

The improved Windows search function now interprets user queries more contextually and processes them using the device’s neural processing unit for faster and smarter results. Meanwhile, the Click to Do feature provides context-sensitive shortcuts, making tasks like copying or summarising text and images more efficient. In separate developments, Microsoft continues to advance its position in quantum computing.

Earlier this year, the company unveiled Majorana 1, a quantum chip based on a novel Topological Core architecture. According to Microsoft, this breakthrough has the potential to significantly accelerate solutions to industrial-scale problems using quantum technology.

Microsoft Alerts Users About Password-Spraying Attack

Microsoft has warned users about a new password-spraying attack by the hacking group Storm-1977 that targets cloud users. The Microsoft Threat Intelligence team issued the warning after discovering that threat actors are abusing unsecured workload identities to access restricted resources.

According to Microsoft, “Container technology has become essential for modern application development and deployment. It's a critical component for over 90% of cloud-native organizations, facilitating swift, reliable, and flexible processes that drive digital transformation.” 

Hackers exploit containers-as-a-service

Research shows that 51% of such workload identities were completely inactive over the past year, which is why attackers are exploiting this attack surface. The report notes that the risk grows as the “adoption of containers-as-a-service among organizations rises,” and Microsoft says it continues to watch for unique security threats affecting “containerized environments.”

In the observed attack, the threat actor used a command-line tool called “AzureChecker” to download AES-encrypted data that, once decrypted, revealed the list of password-spray targets. To make things worse, the “threat actor then used the information from both files and posted the credentials to the target tenants for validation.”

The attack allowed the Storm-1977 hackers to leverage a guest account to create a resource group in a compromised subscription and spin up more than 200 containers that were used for crypto mining.
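Password spraying has a recognizable signature: a single source tries a few common passwords against many different accounts rather than hammering one account. The hedged sketch below shows that detection heuristic on made-up sign-in data; the event format and threshold are illustrative assumptions, not Microsoft's detection logic.

```python
# Illustrative detection heuristic for password spraying: flag source IPs
# whose failed sign-ins touch many *distinct* accounts. Sample data only.
from collections import defaultdict

failed_signins = [  # (source_ip, account)
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),
]

THRESHOLD = 3  # distinct accounts per IP before an alert is raised
accounts_per_ip = defaultdict(set)
for ip, account in failed_signins:
    accounts_per_ip[ip].add(account)

for ip, accounts in accounts_per_ip.items():
    if len(accounts) >= THRESHOLD:
        print(f"possible password spray from {ip}: {len(accounts)} accounts targeted")
```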

Mitigating password-spraying attacks

The most effective defense against password-spraying attacks is eliminating passwords altogether, for example by moving to passkeys, which many organizations are already doing.

Microsoft has suggested these steps to mitigate the issue:

  • Use strong authentication when exposing sensitive interfaces to the internet.
  • Use strong authentication methods for the Kubernetes API to stop attackers from gaining access to the cluster even when valid credentials such as kubeconfig are obtained.
  • Avoid using the read-only Kubelet endpoint on port 10255, which does not require authentication (a quick check for this port follows the list).
  • Scope Kubernetes role-based access controls for every user and service account to only the permissions that are required.
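Whether a node still exposes the unauthenticated read-only Kubelet port is easy to verify. The minimal sketch below simply tests whether TCP port 10255 accepts connections; the node address is a placeholder you would replace with your own.

```python
# Minimal sketch: report whether a node exposes the read-only Kubelet
# endpoint on TCP port 10255 (unauthenticated). Host below is a placeholder.
import socket

def kubelet_readonly_port_open(host, port=10255, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

node = "10.0.0.12"  # hypothetical node address
if kubelet_readonly_port_open(node):
    print(f"{node}: port 10255 is open - disable the read-only Kubelet endpoint")
else:
    print(f"{node}: port 10255 is closed")
```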

According to Microsoft, “Recent updates to Microsoft Defender for Cloud enhance its container security capabilities from development to runtime. Defender for Cloud now offers enhanced discovery, providing agentless visibility into Kubernetes environments, tracking containers, pods, and applications.” These updates upgrade security via continuous granular scanning. 

Now You Can Hire AI Tools Like Freelancers — Thanks to This Indian Startup

 



A tech startup based in Ahmedabad is changing how businesses use artificial intelligence. The company has launched a platform that allows users to hire AI tools the same way they hire freelancers — on demand and for specific tasks.

Over the past few years, companies everywhere have turned to AI to speed up their work, reduce costs, and make smarter decisions. But finding the right AI tool has become a tough task. With hundreds of platforms available online, most users — especially those without a technical background — don’t know where to start. Many tools are expensive, difficult to use, or don’t work as expected.

That’s where ActionAgents, a platform by ActionLabs.ai, comes in. The idea behind the platform began when the team noticed that many of their business clients kept asking which AI tool to use for particular needs. There was no clear or reliable place to compare different tools and test them first.

At first, they created a directory that listed a wide range of AI tools from different sectors. But it didn’t solve the full problem. Users still had to leave the site, sign up for external tools, and often pay for something that didn’t meet their expectations. This made it harder for small businesses and non-technical users to benefit from AI.

To solve this, the team launched ActionAgents in January. It is a single platform that brings various AI tools together and lets users access them directly. There’s no need to subscribe or download anything. Users can try out different AI agents and only pay when they use a service.

The platform currently offers over 50 AI-powered mini tools. These include tools for writing resumes and cover letters, checking job applications against hiring systems, generating business names, planning trips, finding gifts, building websites, and even analyzing WhatsApp chats.

In just two months, more than 3,000 people have signed up. Every day, about 80–100 new users join, and over 200 tasks are completed by the AI agents. What’s more impressive is that the startup has done all this without spending money on advertising. People from countries like India, the US, Canada, and those in Europe and the Middle East are using the platform.

The startup started with an investment of ₹15–20 lakh and is already seeing steady growth in users and revenue. Now, ActionAgents plans to reach 10,000 users in the next few months. Over the next two years, it aims to grow its user base to around 1 million.

The team also wants to open the platform to developers, allowing them to build their own AI tools and offer them on ActionAgents. This move could help more people build, sell, and earn from their own AI creations.


From a Small Home to a Big AI Dream

The person who started ActionAgents, Jay, didn’t come from a rich background. He grew up in Ahmedabad, where his family worked very hard to earn a living. His father drove a rickshaw and often worked extra hours to support them. His mother stitched clothes for a living and also taught other women how to sew, so they could earn money too.

Even though they didn’t have much money, Jay’s parents always believed that education was important. They wanted him to study in an English-medium school, even when relatives made fun of them for spending money on it. They hoped a good education would give him better chances in life.

That decision made a big difference. Today, Jay is building a powerful AI platform from scratch, without taking any money from investors. He started small, but now he’s working to make AI tools easy and affordable for everyone, whether they are tech-savvy or not.

He is not doing it alone. A young and talented team is helping him bring this idea to life. People like Jash Jasani, Dev Patel, Deepali, and many others are part of the ActionAgents team. Together, they are working on building smart solutions that can help businesses and individuals with simple tasks using AI.

Their goal is to change how people use technology in daily work by making it easier, quicker, and more helpful. From a small beginning, they are now working towards a big vision: to shape the future of how people work with the help of AI.

Threat Alert: Hackers Using AI and New Tech to Target Businesses

Hackers are exploiting the advantages of new tech and the availability of credentials, commercial tools, and other resources to launch advanced attacks faster, causing concerns among cybersecurity professionals. 

Global Threat Landscape Report 2025

The 2025 Global Threat Landscape Report by FortiGuard Labs highlights a “dramatic escalation in scale and advancement of cyberattacks,” driven by the rapid adoption of hostile technologies and of commercial malware and attacker toolkits.

According to the report, the data suggests cybercriminals are advancing faster than ever, “automating reconnaissance, compressing the time between vulnerability disclosure and exploitation, and scaling their operations through the industrialization of cybercrime.”

According to the researchers, hackers are exploiting all types of threat resources in a “systematic way” to disrupt traditional advantages enjoyed by defenders. This has put organizations on alert as they are implementing new defense measures and leveling up to mitigate these changing threats. 

Game changer AI

AI has become a key tool for hackers in launching phishing attacks which are highly effective and work as initial access vectors for more harmful attacks like identity theft or ransomware.

A range of new tools makes it simple for threat actors to set up a quick-and-dirty cybercrime business: the WormGPT and FraudGPT text generators; the DeepFaceLab and Faceswap deepfake tools; BlackmailerV3, an AI-driven extortion toolkit for customizing automated blackmail emails; and phishing-page kits such as Robin Banks and EvilProxy.

The report highlights that the growing cybercrime industry is running on “cheap and accessible wins.” With AI evolving, the bar has dropped for cybercriminals to access tactics and intelligence needed for cyberattacks “regardless of an adversary's technical knowledge.”

These tools also allow cybercriminals to build better and more convincing phishing threats and scale a cybercriminal enterprise faster, increasing their success rate. 

Attackers leveraging automated scanning

Attackers' automated scanning for vulnerable systems has reached “unprecedented levels” of billions of scans per month, roughly 36,000 scans every second, and the report notes a 16.7% year-over-year rise in active scanning. Because threat actors use automation to find exposed security loopholes affecting organizations, defenders have less time to patch vulnerable systems.

According to researchers, “Tools like SIPVicious and commercial scanning tools are weaponized to identify soft targets before patches can be applied, signaling a significant 'left-of-boom' shift in adversary strategy.”

TP-Link Outlines Effective Measures for Preventing Router Hacking

 


On March 5, Representative Raja Krishnamoorthi of Illinois held up a TP-Link Wi-Fi router before Congress in a rare display that underscored growing national security concerns. His stark warning — "Don't use this" — sounded an alarm about the risks of relying on foreign-made networking devices that may not have been adequately vetted.

Krishnamoorthi has been advocating for several months for a ban on the sale and distribution of TP-Link routers across the nation. His stance stems from an investigation indicating that these devices may have been involved in Chinese state-sponsored cyber intrusions in 2023. Apprehension over the matter is growing, and several federal agencies, including the Departments of Commerce, Defense, and Justice, have opened formal inquiries.

As federal agencies investigate the potential security risks associated with TP-Link's operations, one of the largest providers of consumer networking devices in the United States is currently being subjected to greater scrutiny. Though there is no doubt that the company is widely used in American households and businesses, there have been fears that regulators might take action against it over its alleged ties to mainland Chinese entities. 

The Wall Street Journal reported on the investigations in December. The U.S. Departments of Commerce, Defense, and Justice are said to be examining the matter, though no conclusive evidence of intentional misconduct has emerged. In light of these developments, TP-Link's American management has sought to clarify the company's organizational structure and operational independence.

Jeff Barney, President of TP-Link USA, told WIRED in a recent statement that the American division operates as a separate and autonomous entity. According to Barney, TP-Link USA is a U.S.-based company with no connection to TP-Link Technologies, its counterpart operating in mainland China.

He also emphasised that the company can demonstrate its operational and legal separation and is committed to complying with U.S. regulatory requirements. The increased scrutiny follows a bipartisan effort led by Krishnamoorthi and Representative John Moolenaar of Michigan. According to The Wall Street Journal, federal authorities are seriously considering banning TP-Link routers.

The two lawmakers are believed to have jointly submitted a formal request to the Department of Commerce in the summer of 2024, calling for immediate regulatory action over the potential national security implications. While federal investigations are ongoing, the episode has intensified the debate over the security of consumer networking devices and the broader consequences of relying on foreign technology infrastructure.

TP-Link has recently appointed Adam Robertson as its new head of cybersecurity, a strategic move that underscores the company's commitment to the safety of both consumers and enterprises. A 17-year industry veteran, Robertson has spent the past eight years in executive leadership roles at firms such as Reliance, Inc. and Incipio Group, and he is expected to play an important role in advancing the company's cybersecurity initiatives.

From his base at TP-Link's global headquarters in Irvine, California, he is responsible for overseeing TP-Link's security operations across a wide range of networking and smart home products. Company executives have expressed strong confidence in Robertson's ability to drive significant change within the organisation.

Jeff Barney described Robertson's appointment as a timely and strategic addition to the organisation, commenting that his technical execution and strategic planning skills align with TP-Link's long-term, innovation-centred goals. Under Robertson's leadership, the company expects to build a robust security culture and to set more stringent industry standards for product integrity and consumer protection.

Robertson, for his part, expressed enthusiasm about joining the organisation and contributing to its mission of advancing secure, accessible technology, committing to build on TP-Link's strong foundation in cybersecurity so that the brand remains a trusted name in the global technology industry. Meanwhile, a new security flaw, tracked as CVE-2023-1389 and potentially rated critical, has raised considerable concern within the cybersecurity community.

The vulnerability affects TP-Link's Archer AX21 router and stems from inadequate input validation in the device's web-based management interface. By leveraging this weakness, malicious actors can craft HTTP requests that execute arbitrary commands with root privileges. The Ballista botnet, a sophisticated and rapidly evolving threat, is currently exploiting the flaw.

By exploiting the vulnerability, the botnet can autonomously infect and propagate across vulnerable internet-facing devices and recruit them into large-scale Distributed Denial of Service (DDoS) attacks. According to cybersecurity analysts, router firmware versions prior to 1.1.4 Build 20230219 remain at risk of exploitation. The threat's ability to operate at such scale makes it especially alarming.

Popular with both consumers and businesses, the Archer AX21 has become a favoured target for threat actors. With several manufacturers in both the United States and Australia already affected, there is a pressing need for mitigation: experts stress immediate firmware updates and additional network security measures to prevent further compromise. Several earlier botnet operations have also exploited this vulnerability, heightening concerns about its ongoing abuse.

Multiple cybersecurity reports, including coverage by TechRadar Pro, have documented several threat actor groups exploiting this particular vulnerability, among them the notorious Mirai botnet, which has been operating for over 10 years. Activity around the flaw was observed in both 2023 and 2024, indicating that it continues to attract malicious operators.

Cato Networks researchers identified attacks in which the operators use a Bash script as a payload dropper to place the malware on a targeted system and initiate the compromise. During Cato's analysis, the botnet operators appeared to change their behaviour as the campaign progressed, moving to Tor-based domains, perhaps in response to growing attention from security professionals.

Once executed, the malware establishes a TLS-encrypted command-and-control (C2) channel over port 82. Through this channel, threat actors can remotely take full control of the compromised device, executing shell commands, performing remote code execution, and launching denial-of-service (DoS) attacks. The malware can also extract sensitive data from affected systems, adding an exfiltration component to its capabilities.

On attribution, Cato Networks said it was reasonably confident that the operators behind the Ballista botnet are based in Italy, citing IP addresses originating from the region and Italian-language strings embedded in the malware's binary. Those indicators also gave the campaign its name, "Ballista".

The botnet primarily targets critical industries, including manufacturing, healthcare, professional services, and technology, with most of its activity recorded in the United States, Australia, China, and Mexico. An estimated 6,000-plus internet-connected devices remain vulnerable, meaning the attack surface is still extensive and the threat persists.

Critical Infrastructure at Risk: Why OT-IT Integration is Key to Innovation and Cybersecurity

 

As cyberattacks grow more advanced, targeting the essential systems of modern life—from energy pipelines and manufacturing plants to airports and telecom networks—governments are increasing pressure on industries to fortify their digital and physical defenses.

A series of high-profile breaches, including the shutdown of Seattle’s port and airport and disruptions to emergency services in New York, have triggered calls for action. As early as 2020, agencies like the NSA and CISA urged critical infrastructure operators to tighten their cybersecurity frameworks.

Despite this, progress has been gradual. Many businesses remain hesitant due to perceived costs. However, experts argue that merging operational technology (OT)—which controls physical equipment—with information technology (IT)—which manages digital systems—offers both protection and growth potential.

This fusion not only enhances reliability and minimizes service interruptions, but also creates opportunities for innovation and revenue generation, as highlighted by experts in a recent conversation with CIO Upside.

“By integrating (Internet-of-Things) and OT systems, you gain visibility into processes that were previously opaque,” Sonu Shankar, chief product officer at Phosphorus, told CIO Upside. Well-managed systems are a “launchpad for innovation,” said Shankar, allowing enterprises to make use of raw operational data.

“This doesn’t just facilitate operational efficiencies — it would potentially generate new revenue streams born from integrated visibility,” Shankar added.

Understanding OT and Its Role

Operational technology refers to any hardware or system essential to a business’s core services—such as factory machinery, production lines, logistics hubs, and even connected office devices like smart printers.

Upgrading these legacy systems might seem overwhelming, particularly for industries reliant on outdated hardware. But OT-IT convergence doesn’t have to be expensive. In fact, several affordable and scalable solutions already exist.

Technologies such as network segmentation, zero trust architecture, and cloud-based OT-IT platforms provide robust protection and visibility:

  • Network segmentation breaks a primary network into smaller, isolated units—making it harder for unauthorized users to access critical systems.
  • Zero trust security continuously verifies users and devices, reducing the risks posed by human error or misconfigurations.
  • Cloud platforms offer centralized insights, historical logs, automated system upkeep, and AI-powered threat detection—making it easier to anticipate and prevent cyber threats.
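As a rough illustration of how segmentation and zero-trust checks combine, the sketch below re-verifies identity and device posture on every request and only then applies a per-segment allow-list. All names, tokens, and rules are hypothetical placeholders, not a specific product's API.

```python
# Conceptual sketch: zero-trust style per-request check plus a simple
# segmentation allow-list. Tokens, roles, and segments are made up.
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str
    device_patched: bool
    target_segment: str  # e.g. "ot-plc" or "it-office"

ALLOWED_SEGMENTS_BY_ROLE = {"maintenance": {"ot-plc"}, "office": {"it-office"}}

def verify_token(token):
    """Hypothetical identity check: returns the caller's role, or None."""
    return {"tok-maint": "maintenance", "tok-office": "office"}.get(token)

def authorize(req):
    role = verify_token(req.user_token)
    if role is None or not req.device_patched:
        return False  # unknown identity or risky device posture
    return req.target_segment in ALLOWED_SEGMENTS_BY_ROLE[role]  # segmentation rule

print(authorize(Request("tok-maint", True, "ot-plc")))   # True
print(authorize(Request("tok-office", True, "ot-plc")))  # False: wrong segment
```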

Fused OT-IT environments lay the groundwork for faster product development and better service delivery, said James McQuiggan, security awareness advocate at KnowBe4.

“When OT and IT systems can communicate effectively and securely across multiple platforms and teams, the development cycle is more efficient and potentially brings products or services to market faster,” he said. “For CIOs, they are no longer just supporting the business, but shaping what it will become.”

As digital threats escalate and customer expectations rise, the integration of OT and IT is no longer optional—it’s a strategic imperative for security, resilience, and long-term growth.

Understanding ACR on Smart TVs and the Reasons to Disable It

 


Almost all leading TV models of recent years are equipped with Automatic Content Recognition (ACR), an advanced tracking technology designed to analyse and monitor viewing habits. The system collects detailed information about the content being displayed on the screen, regardless of the source — whether it is a broadcast, a streaming platform, or an external device.

Once captured, this data is processed and evaluated on a centralised server. Television manufacturers use the insights to construct comprehensive user profiles and to better understand how individuals watch and what they prefer. The information is then used to deliver highly targeted advertising, tailored to align closely with viewers' interests.

It is important to realise, however, that while ACR can improve the user experience by offering tailored advertisements and recommendations, it also raises significant concerns about data privacy and the extent to which modern smart televisions can monitor users in real time. ACR is a sophisticated technology integrated into most modern smart televisions that detects and interprets the content presented on the screen with remarkable accuracy.

The technology takes audiovisual signals captured by the system, whether images, sounds, or both, and compares them with an extensive database of indexed media assets such as movies, television programs, commercials, and other forms of digital content. Working seamlessly in the background, ACR captures a wide range of behavioural data without any active involvement from the user.

The system tracks patterns such as how long a user watches, which channels they prefer, and how they use the TV most. This information proves immensely valuable to a wide range of stakeholders, including advertisers, content distributors, and device manufacturers. Using these insights, companies can better segment their audiences, deliver more targeted and relevant ads, and make better content recommendations.

Although ACR is often positioned as a personalisation tool, its data-driven capabilities raise critical concerns about personal privacy and informed consent. Users can opt out of ACR, but finding the right settings can be a challenge, since television manufacturers label the feature under different names, which makes deactivating it a confusing process.

Samsung identifies its OneClick capability as part of the Viewing Information Services menu. To deactivate this feature, navigate to: Settings > All Settings > Terms & Privacy > Privacy Choices > Terms & Conditions, Privacy Policies, then deselect the Viewing Information Services checkbox.

LG brands its ACR functionality as Live Plus. To turn this off, press the settings button on the remote control and follow the path: 
All Settings > General > System > Additional Settings, and then switch off the Live Plus option.

For Sony televisions operating with Samba Interactive TV, the ACR service can be disabled by going to: Settings > System Preferences > Samba Interactive TV, and selecting the Disable option. 

In the case of Roku TV, users can restrict ACR tracking by accessing: Settings > Privacy > Smart TV Experience, and toggling off Use Info from TV Inputs. 

On Android TV or Google TV devices, ACR-related data sharing can be limited by going to Settings > Privacy > Usage and Diagnostics, and disabling the corresponding toggle. 

For Amazon Fire TV, begin by navigating to: Settings > Preferences > Privacy Settings, and turning off both Device Usage Data and Collect App Usage Data. Then proceed to Preferences > Data Monitoring, and deactivate this setting as well. 

With VIZIO TVs, the ACR feature is labelled as Viewing Data.

To turn it off, go to: System > Reset & Admin > Viewing Data, and press OK to disable the function. Through these steps, users can gain greater control over their personal information and limit the extent to which smart TV platforms track their behaviour.

To identify media content in real time, Automatic Content Recognition (ACR) technology relies on advanced pattern-recognition algorithms. The system primarily uses two distinct methods to determine accurately what is being watched on a smart television: audio-based and visual-based recognition.

In audio-based ACR, a small sample of sound is recorded from the programming currently playing. These audio samples, including dialogue, ambient sounds, music scores, or recognisable jingles, are analysed and matched against a repository of reference audio tracks compiled by the system. By comparing the samples, the system can accurately identify the source and nature of the content being analysed.

Visual-based ACR, on the other hand, takes stills and images directly from the screen and compares them to an extensive collection of images and video clips stored in a database. By identifying specific visual markers, the system can quickly and precisely recognise a particular television show, movie, or commercial advertisement.

After a successful match has been established—whether through auditory or visual means—the ACR system collects the viewing data and transmits it to an external server managed by a manufacturer, an advertiser, or a streaming service provider. The collected information is then used to analyse content performance, display targeted advertisements, and improve the user experience.
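The matching step can be pictured with a toy fingerprinting sketch. It is not any vendor's real algorithm; the sample values and the reference library below are made up, and real systems fingerprint spectrogram features rather than raw samples.

```python
# Toy illustration of audio-fingerprint matching: hash short windows of a
# (already quantized) signal and look them up in a reference index.
import hashlib

def fingerprints(samples, window=4):
    """Hash successive fixed-size windows of small integer samples."""
    return {
        hashlib.md5(bytes(samples[i:i + window])).hexdigest()
        for i in range(0, len(samples) - window + 1, window)
    }

# Hypothetical reference library of known content, keyed by title.
reference = {
    "Ad: SodaCo jingle": fingerprints([1, 9, 9, 3, 7, 7, 2, 5]),
    "Show: Nightly News": fingerprints([4, 4, 6, 8, 2, 2, 1, 0]),
}

captured = fingerprints([1, 9, 9, 3, 7, 7, 2, 5])  # sample "heard" by the TV
best = max(reference, key=lambda title: len(reference[title] & captured))
print("best match:", best)
```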

The technology delivers highly tailored content efficiently, but it also raises significant concerns about the privacy and security of personal data. At the same time, Automatic Content Recognition represents an enormous advance in the way smart televisions interact with their end users, advertisers, and content distributors.

By monitoring the viewership of a particular event in real time and delivering detailed audience analytics, ACR has effectively integrated traditional broadcasting with the precision of digital media ecosystems. Consequently, this convergence enables more informed decision-making across the entire media value chain, from content optimisation to advertising targeting. 

As smart TVs continue to be adopted across the globe, there is growing awareness among consumers and industry stakeholders of the importance of understanding ACR technology. For advertisers and content providers, ACR is a powerful tool that offers an opportunity to make campaigns more efficient and engage viewers more effectively.

It also raises important questions regarding digital privacy, data transparency, and the ethical use of personal information. The continued development and implementation of ACR will have a pivotal influence on the future of television, and balancing technological innovation with responsible data governance will be crucial to ensuring that it contributes positively to the industry, its audiences, and the community it serves.

According to a report by The Markup, Automatic Content Recognition (ACR) technology can capture and analyse up to 7,200 visual frames per hour, roughly two images per second. This high-frequency data collection gives marketers and content platforms a level of surveillance that is valuable for both marketing and content production.

This capability lets marketers build a comprehensive profile of their prospects by correlating viewing habits with identifiable personal information, which can include IP addresses, email addresses, and even physical mailing addresses. These insights enable marketers to segment audiences and deliver content accordingly.

With real-time viewership patterns, advertisers can fine-tune their advertisements for their target audience and measure campaign effectiveness by tracking which advertisements led to consumer purchases. For content distributors, the approach helps optimise user engagement and maximise revenue; however, the associated data security and privacy risks are significant.

Without appropriate safeguards, sensitive personal data collected through ACR can be misused or accessed without authorisation, and in extreme cases exploited to steal identity information or compromise personal security. The covert nature of ACR is one of the most concerning aspects of the technology.

ACR usually operates silently in the background, without the user's explicit knowledge or consent. While it is possible to disable ACR, doing so is usually a cumbersome and often obscure process hidden within the television's user interface, and navigating the numerous menus and settings required to opt out can be time-consuming and frustrating.

Individuals who consider this level of tracking intrusive or ethically questionable may want to restrict ACR functionality, although doing so requires deliberate effort. The step-by-step instructions earlier in this article show how to disable the feature on several major smart TV brands.

AI-Powered Tools Now Facing Higher Risk of Cyberattacks

 



As artificial intelligence becomes more common in business settings, experts are warning that these tools could be the next major target for online criminals.

Some of the biggest software companies, like Microsoft and SAP, have recently started using AI systems that can handle office tasks such as finance and data management. But these digital programs also come with new security risks.


What Are These Digital Identities?

In today’s automated world, many apps and devices run tasks on their own. To do this, they use something called digital identities — known in tech terms as non-human identities, or NHIs. These are like virtual badges that allow machines to connect and work together without human help.

The problem is that every one of these digital identities could become a door for hackers to enter a company’s system.


Why Are They Being Ignored?

Modern businesses now rely on large numbers of these machine profiles. Because there are so many, they often go unnoticed during security checks. This makes them easy targets for cybercriminals.

A recent report found that nearly one out of every five companies had already dealt with a security problem involving one of these digital identities.


Unsafe Habits Increase the Risk

Many companies fail to change or update the credentials of these identities in a timely manner. This is a basic safety step that should be done often. However, studies show that more than 70% of these identities are left unchanged for long periods, which leaves them vulnerable to attacks.

Another issue is that nearly all organizations allow outside vendors to access their digital identities. When third parties are involved, there is a bigger chance that something could go wrong, especially if those vendors don’t have strong security systems of their own.

Experts say that keeping old login details in use while also giving access to outsiders creates serious weak spots in a company's defense.


What Needs to Be Done

As businesses begin using AI agents more widely, the number of digital identities is growing quickly. If they are not protected, hackers could use them to gain control over company data and systems.

Experts suggest that companies should treat these machine profiles just like human accounts. That means regularly updating passwords, limiting who has access, and monitoring their use closely.
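One simple, concrete form of that monitoring is flagging machine identities whose credentials have not been rotated recently. The sketch below is a hedged illustration; the inventory, the 90-day policy, and the identity names are all assumptions made for the example.

```python
# Hedged sketch: flag non-human identities with stale credentials.
# The inventory, names, and rotation policy below are illustrative only.
from datetime import date

MAX_AGE_DAYS = 90  # example rotation policy

nhi_inventory = [  # (identity name, last credential rotation)
    ("billing-api-service", date(2024, 2, 1)),
    ("warehouse-sync-bot", date(2025, 4, 20)),
    ("vendor-integration-key", date(2023, 11, 5)),
]

today = date(2025, 5, 10)  # fixed "today" so the example is reproducible
for name, rotated in nhi_inventory:
    age = (today - rotated).days
    if age > MAX_AGE_DAYS:
        print(f"{name}: credential is {age} days old - rotate it and review access")
```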

With the rise of AI in workplaces, keeping these tools safe is now more important than ever.


Building Smarter AI Through Targeted Training


 

In recent years, artificial intelligence and machine learning have been in high demand across a broad range of industries. As a consequence, the cost and complexity of building and maintaining these models have increased significantly. AI and machine learning systems are resource-intensive, requiring substantial computation and large datasets, and their complexity also makes them difficult to manage effectively.

As a result of this trend, professionals such as data engineers, machine learning engineers, and data scientists are increasingly tasked with finding ways to streamline models without compromising performance or accuracy. A key aspect of this process is determining which data inputs or features can be reduced or eliminated so that the model operates more efficiently.

AI model optimization is a systematic effort to improve a model's performance, accuracy, and efficiency so that it achieves superior results in real-world applications. Through a combination of technical strategies, engineering teams work to improve computational efficiency—reducing processing time, resource consumption, and infrastructure costs—while also enhancing the model's predictive precision and its adaptability to changing datasets.

Typical optimization tasks include fine-tuning hyperparameters, selecting the most relevant features, pruning redundant elements, and making advanced algorithmic adjustments to the model. Ultimately, the goal is a model that is not only accurate and responsive but also scalable, cost-effective, and efficient. Applied effectively, these optimization techniques help the model perform reliably in production environments and remain aligned with the overall objectives of the organization.

ChatGPT's memory feature, which is typically active by default, is designed to retain important details and user preferences so the system can provide more personalized, contextually accurate responses over time. Users who want to manage this functionality can navigate to the Settings menu and select Personalization, where they can check whether memory is active and remove specific saved interactions if needed.

It is therefore recommended that users periodically review the data stored by the memory feature to ensure its accuracy. In some cases, incorrect information may be retained, including inaccurate personal details or assumptions made during a previous conversation. For example, the system might incorrectly log information about a user’s family, or other aspects of their profile, based on conversational context.

The memory feature may also inadvertently store sensitive data, such as financial or account details and health-related queries, especially when users are trying to solve personal problems or experiment with the model. While the memory function contributes to improved response quality and continuity, it requires careful oversight from the user. Users are strongly advised to audit their saved data points routinely and delete anything they find inaccurate or overly sensitive. This practice helps keep stored data accurate and interactions more secure.

It is similar to periodically clearing your browser cache to maintain privacy and performance. "Training" ChatGPT, in the sense of customized usage, means providing specific contextual information to the AI so that its responses become more relevant and accurate for the individual. To guide the AI to behave and speak in a way that fits their needs, users can upload documents such as PDFs, company policies, or customer service transcripts.

This type of customization is particularly useful for business-related content and customer engagement workflows. For personal use, however, building a custom GPT is usually unnecessary: users can share relevant context directly within their prompts or attach files to their messages and achieve effective personalization that way.

For example, a user crafting a job application can upload their resume along with the job description, allowing the AI to create a cover letter that accurately represents their qualifications and aligns with the position's requirements. This type of user-level customization is very different from the traditional model training process, which involves processing large quantities of data and is performed mainly by OpenAI's engineering teams.

Additionally, ChatGPT users can extend its memory-driven personalization by explicitly telling it what details they wish to be remembered, such as a recent move to a new city or specific lifestyle preferences like dietary choices. Once stored, this information allows the AI to keep conversations consistent in the future. While these interactions enhance usability, they also call for thoughtful data sharing to protect privacy and accuracy, especially as ChatGPT's memory grows over time.

Optimizing an AI model is essential to improving both performance and resource efficiency. It involves refining various model elements to maximize prediction accuracy while minimizing computational demand: pruning unused parameters to streamline networks, applying quantization to reduce numerical precision and speed up processing, and using knowledge distillation, which transfers insights from complex models to simpler, faster ones.

Significant efficiency gains can also come from optimizing data pipelines, deploying high-performance algorithms, using hardware accelerators such as GPUs and TPUs, and employing compression techniques such as weight sharing and low-rank approximation. Balancing batch sizes likewise ensures optimal use of resources and keeps training stable.
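Knowledge distillation, mentioned above, is compact enough to sketch. The snippet below shows the standard distillation loss in PyTorch under the usual assumptions (a larger teacher and a smaller student producing logits over the same classes); it is an illustration of the technique, not a drop-in training recipe.

```python
# Sketch of the standard knowledge-distillation loss: the student matches the
# teacher's softened output distribution while still fitting the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between softened teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Tiny smoke test with random logits for a batch of 4 examples, 10 classes.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.tensor([1, 3, 0, 7])
print(distillation_loss(student, teacher, labels))
```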

Accuracy can be improved by curating clean, balanced datasets, fine-tuning hyperparameters with advanced search methods, increasing model complexity only with caution, and combining techniques such as cross-validation and feature engineering. Keeping long-term performance high requires not only building on pre-trained models but also retraining regularly to combat model drift. Applied strategically, these techniques enhance the scalability, cost-effectiveness, and reliability of AI systems across diverse applications.
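Hyperparameter search with cross-validation, one of the accuracy levers just listed, looks roughly like the following scikit-learn sketch. The dataset, the estimator, and the search space are arbitrary examples chosen only to keep the snippet self-contained.

```python
# Illustrative hyperparameter search with cross-validation (scikit-learn).
# Dataset, estimator, and parameter ranges are arbitrary example choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 5, 10],
    },
    n_iter=5,   # try 5 random combinations
    cv=5,       # 5-fold cross-validation for each candidate
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```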

With tailored optimization solutions from Oyelabs, organizations can unlock the full potential of their AI investments. As artificial intelligence continues to evolve rapidly, training and optimizing models strategically through data-driven optimization becomes increasingly important. Organizations can draw on advanced techniques, from feature selection and algorithm optimization to efficient data handling, to improve performance while controlling resource expenditure.

Professionals and teams that prioritize these improvements will be in a much better position to create AI systems that are not only faster and smarter but also more adaptable to real-world demands. By partnering with experts and focusing on how AI achieves value-driven outcomes, businesses can deepen their understanding of AI while improving scalability and long-term sustainability.

New Sec-Gemini v1 from Google Outperforms Cybersecurity Rivals

 


Google has released Sec-Gemini v1, a cutting-edge artificial intelligence model that integrates advanced language processing, real-time threat intelligence, and enhanced cybersecurity operations. Built on Google's proprietary Gemini large language model and combined with dynamic security data and tools, the system is designed to strengthen security operations. 

By combining sophisticated reasoning with real-time cybersecurity insights and tools, the model is highly capable at essential security functions such as threat detection, vulnerability assessment, and incident analysis. As part of its effort to support progress across the broader security landscape, Google is providing free access to Sec-Gemini v1 to select professionals, non-profit organizations, and academic institutions to promote a collaborative approach to security research. 

Sec-Gemini v1 stands out thanks to its integration with Google Threat Intelligence (GTI), the Open Source Vulnerabilities (OSV) database, and other key data sources. It outperforms peer models by at least 11% on the CTI-MCQ threat intelligence benchmark and by 10.5% on the CTI-Root Cause Mapping benchmark, which assesses a model's ability to analyze vulnerability descriptions and classify them using the CWE taxonomy.

One of its strongest features is its ability to accurately identify and describe the threat actors it encounters. Thanks to its connection to Mandiant Threat Intelligence, it can recognize Salt Typhoon as a known adversary. Independent benchmark results also back up the performance claims: Sec-Gemini v1 scored at least 11 per cent higher than comparable AI systems on CTI-MCQ, a key metric used to assess threat intelligence capabilities. 

It also achieved a 10.5 per cent edge over competitors on the CTI-Root Cause Mapping benchmark, which measures how effectively an AI model interprets vulnerability descriptions and classifies them according to the Common Weakness Enumeration framework, an industry standard. With this advancement, Google extends its position in AI-powered cybersecurity, giving organizations a powerful tool to detect, interpret, and respond to evolving threats more quickly and accurately. 

According to Google, Sec-Gemini v1 can perform complex cybersecurity tasks efficiently, including conducting in-depth incident investigations, analyzing emerging threats, and assessing the impact of known vulnerabilities. By combining contextual knowledge with technical insight, the model aims to accelerate decision-making and strengthen organizations' security postures. 

Though several technology giants are actively developing AI-powered cybersecurity solutions—such as Microsoft's Security Copilot, developed with OpenAI, and Amazon's GuardDuty, which utilizes machine learning to monitor cloud environments—Google appears to have carved out an advantage in this field through its Sec-Gemini v1 technology. 

A key reason for this edge is its deep integration with proprietary threat intelligence sources such as Google Threat Intelligence and Mandiant, alongside its strong performance on industry benchmarks. In an increasingly competitive field, these technical strengths make it a standout solution. And despite scepticism about the practical value of artificial intelligence in cybersecurity, often dismissed as little more than an enhanced assistant that still requires substantial human interaction, Google insists that Sec-Gemini v1 is fundamentally different from other AI models. 

The model is geared towards delivering highly contextual, actionable intelligence rather than simply summarizing alerts or making basic recommendations. This not only speeds up decision-making but also reduces the cognitive load on security analysts, allowing teams to respond to emerging threats more quickly and efficiently. At present, Sec-Gemini v1 is available exclusively as a research tool, with access granted only to a select set of professionals, academic institutions, and non-profit organizations that are willing to share their findings. 

Early use-case demonstrations and results suggest the model will make a significant contribution to the evolution of AI-driven threat defence, helping usher in more proactive cyber risk identification, contextualization, and mitigation through advanced language models. 

In real-world evaluations, the Google security team demonstrated Sec-Gemini v1's analytical capabilities by having it correctly identify Salt Typhoon, a recognized threat actor, and supply in-depth contextual information, including vulnerability details, potential exploitation techniques, and associated risk levels. This level of nuanced understanding is possible because Mandiant's threat intelligence provides a rich, real-time repository of threat data and adversary profiles. 

These integrations allow Sec-Gemini v1 to go beyond conventional pattern recognition, providing more timely threat analysis and faster, evidence-based decision-making. To foster collaboration and accelerate model refinement, Google has offered limited access to a carefully selected group of cybersecurity practitioners, academics, and non-profit organizations. 

Ahead of any broader commercial rollout, Google wants to gather feedback from these trusted users, both to make the model more reliable and scalable across different use cases and to ensure it is developed in a responsible, community-led manner. 

This precision and depth are made possible by the integration with Mandiant's threat intelligence, which enhances the model's understanding of evolving threat landscapes. Making Sec-Gemini v1 available free of charge to a select group of cybersecurity professionals, academic institutions, and non-profit organizations for research purposes is part of Google's commitment to responsible innovation and industry collaboration. 

The initiative is designed to gather feedback, validate use cases, and confirm the model's effectiveness across diverse environments before any broader deployment. Sec-Gemini v1 represents an important step forward in integrating artificial intelligence into cybersecurity, and Google's drive to advance the technology while ensuring its responsible development underscores the company's role as a pioneer in the field. 

Providing early, research-focused access not only fosters collaboration within the cybersecurity community but also ensures that Sec-Gemini v1 evolves in response to collective expertise and real-world feedback. Given its strong performance on industry benchmarks and its ability to detect and contextualize complex threats, the model may well reshape threat defence strategies in the future. 

Its advanced reasoning capabilities, coupled with cutting-edge threat intelligence, can accelerate decision-making, cut response times, and improve organizational security. However, while Sec-Gemini v1 shows great promise, it remains in the research phase ahead of wider commercial deployment. This phased approach allows the model to be refined carefully so that it meets the high standards required across different environments. 

For this reason, it is important that stakeholders such as cybersecurity experts, researchers, and industry professionals provide feedback during this first phase of development, so that the model's capabilities remain aligned with real-world scenarios and needs. Google's proactive engagement with the community underscores the importance of integrating AI into cybersecurity responsibly. 

This is not solely about advancing the technology, but also about establishing a collaborative framework that makes it possible to detect and respond to emerging cyber threats more effectively, more quickly, and more securely. What remains to be seen is how Sec-Gemini v1 evolves; it may turn out to be one of the most important tools for safeguarding critical systems and infrastructure around the globe.

Meta Launches New Llama 4 AI Models

 



Meta has introduced a fresh set of artificial intelligence models under the name Llama 4. This release includes three new versions: Scout, Maverick, and Behemoth. Each one has been designed to better understand and respond to a mix of text, images, and videos.

The reason behind this launch seems to be rising competition, especially from Chinese companies like DeepSeek. Their recent models have been doing so well that Meta rushed to improve its own tools to keep up.


Where You Can Access Llama 4

The Scout and Maverick models are now available online through Meta’s official site and other developer platforms like Hugging Face. However, Behemoth is still in the testing phase and hasn’t been released yet.

Meta has already added Llama 4 to its own digital assistant, which is built into apps like WhatsApp, Instagram, and Messenger in several countries. However, some special features are only available in the U.S. and only in English for now.


Who Can and Can’t Use It

Meta has placed some limits on who can access Llama 4. People and companies based in the European Union are not allowed to use or share these models, likely due to strict data rules in that region. Also, very large companies (those with over 700 million monthly users) must first get permission from Meta.


Smarter Design, Better Performance

Llama 4 is Meta’s first release using a new design method called "Mixture of Experts." This means the model can divide big tasks into smaller parts and assign each part to a different “expert” inside the system. This makes it faster and more efficient.

For example, the Maverick model has 400 billion total "parameters" (which basically measure how smart it is), but it only uses a small part of them at a time. Scout, the lighter model, is great for reading long documents or big sections of code and can run on a single high-powered computer chip. Maverick needs a more advanced system to function properly.
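
For readers curious how that routing works, here is a deliberately tiny, illustrative sketch of the mixture-of-experts idea in PyTorch. The sizes and the top-2 routing are invented for the example and are not Meta's Llama 4 implementation.

```python
# Toy mixture-of-experts layer: a router sends each token to its top-k experts,
# so only a fraction of the layer's parameters is used per input.
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.top_k = top_k

    def forward(self, x):                          # x: (tokens, dim)
        scores = self.router(x)                    # (tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Each token is processed only by its chosen experts.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)   # torch.Size([10, 64])
```

Because each token only passes through a couple of experts, most of the layer's parameters sit idle on any given request, which is how a model with a very large total parameter count can still run using only a fraction of them at a time.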


Behemoth: The Most Advanced One Yet

Behemoth, which is still being developed, will be the most powerful version. It will have a huge amount of learning data and is expected to perform better than many leading models in science and math-based tasks. But it will also need very strong computing systems to work.

One big change in this new version is how it handles sensitive topics. Previous models often avoided difficult questions. Now, Llama 4 is trained to give clearer, fairer answers on political or controversial issues. Meta says the goal is to make the AI more helpful to users, no matter what their views are.

The Rise of Cyber Warfare and Its Global Implications

 

In Western society, the likelihood of cyberattacks is arguably higher now than it has ever been. The National Cyber Security Centre (NCSC) advised UK organisations to strengthen their cyber security when Russia launched its attack on Ukraine in early 2022. In a similar vein, the FBI and Cybersecurity and Infrastructure Security Agency (CISA) issued warnings about increased risks to US companies. 

There is no doubt that during times of global transition and turmoil, cyber security becomes a battlefield in its own right, with both state and non-state actors increasingly turning to cyber-attacks to gain an advantage in combat. Furthermore, as technology advances and an increasing number of devices connect to the internet, the scope and sophistication of cyber-attacks have grown significantly. 

Cyber warfare can take numerous forms, such as breaking into enemy state computer systems, spreading malware, and executing denial-of-service assaults. If a cyber threat infiltrates the right systems, entire towns and cities may be shut off from information, services, and infrastructure that have become fundamental to our way of life, such as electricity, online banking systems, and the internet. 

The European Union Agency for Network and Information Security (ENISA) believes that cyber warfare poses a substantial and growing threat to vital infrastructure. Its research on the "Threat Landscape for Foreign Information Manipulation and Interference (FIMI)" states that key infrastructure, such as electricity and healthcare, is especially vulnerable to cyber-attacks during times of conflict or political tension.

In addition, cyber-attacks can disrupt banking systems, inflicting immediate economic loss and affecting individuals directly. According to the report, residents were a secondary target in more than half of the incidents analysed. Cyber-attacks are also especially effective at manipulating public perception, with effects ranging from inconvenience at the most basic level to, at the most serious, the loss of life. 

Risk to businesses 

War and military conflicts can foster a business environment susceptible to cyber-attacks, since enemies may seek to target firms or sectors deemed critical to a country's economy or infrastructure. They may also choose symbolic targets, like media outlets or high-profile businesses connected with a country. 

Furthermore, the use of cyber-attacks in war can produce a broad sense of instability and uncertainty, which adversaries can use to exploit vulnerabilities in firms' cyber defences.

Cyber-attacks on a company's computer systems, networks, and servers can cause delays and shutdowns, resulting in direct loss of productivity and money. However, they can also harm reputation, prompt regulatory action (including the imposition of fines), and result in the loss of customers. 

Prevention tips

To mitigate these risks, firms can take proactive actions to increase their cyber defences, such as self-critical auditing and third-party testing. Employees should also be trained to identify and respond to cyber risks. Furthermore, firms should conduct frequent security assessments to detect vulnerabilities and adopt mitigation techniques.