
New Sec-Gemini v1 from Google Outperforms Cybersecurity Rivals

 


Google has released Sec-Gemini v1, a cutting-edge artificial intelligence model that integrates advanced language processing, real-time threat intelligence, and enhanced cybersecurity operations. Built on Google's proprietary Gemini large language model and connected to dynamic security data and tools, the system is designed to strengthen security operations end to end.

Sec-Gemini v1 combines sophisticated reasoning with real-time cybersecurity insights and tools, making it highly capable at essential security functions such as threat detection, vulnerability assessment, and incident analysis. As part of its effort to support progress across the broader security landscape, Google is providing free access to select institutions, professionals, non-profit organizations, and academic institutions to promote a collaborative approach to security research.

Sec-Gemini v1 stands out because of its integration with Google Threat Intelligence (GTI), the Open Source Vulnerabilities (OSV) database, and other key data sources. It outperforms peer models by at least 11% on the CTI-MCQ threat intelligence benchmark and by 10.5% on the CTI-Root Cause Mapping benchmark, which uses the CWE taxonomy to assess a model's ability to analyze and classify vulnerabilities.

One of its strongest features is accurately identifying and describing the threat actors it encounters. Thanks to its connection to Mandiant Threat Intelligence, it can recognize Salt Typhoon as a known adversary. Independent benchmarks support the performance claims: according to Google, Sec-Gemini v1 scored at least 11 per cent higher than comparable AI systems on CTI-MCQ, a key metric used to assess threat intelligence capabilities.

Additionally, it achieved a 10.5 per cent edge over its competitors on the CTI-Root Cause Mapping benchmark, a test of how well an AI model interprets vulnerability descriptions and classifies them under the Common Weakness Enumeration (CWE) framework, an industry standard. Through this advancement, Google extends its leadership in AI-powered cybersecurity, providing organizations with a powerful tool to detect, interpret, and respond to evolving threats more quickly and accurately.
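To make concrete what a root-cause-mapping benchmark measures, here is a minimal, hedged sketch: a model maps free-text vulnerability descriptions to CWE identifiers, and accuracy is the fraction it gets right. The `classify` function, its keyword rules, and the sample descriptions below are illustrative stand-ins, not the actual CTI-Root Cause Mapping dataset or Sec-Gemini's API.

```python
# Toy sketch of CWE root-cause-mapping evaluation. The descriptions,
# labels, and classify() are illustrative stand-ins, not the real
# benchmark data or any Google model interface.

def classify(description: str) -> str:
    """Stand-in for a model mapping a vulnerability description
    to a CWE identifier via simple keyword rules."""
    rules = {
        "sql": "CWE-89",        # SQL injection
        "buffer": "CWE-120",    # classic buffer overflow
        "deserial": "CWE-502",  # unsafe deserialization
    }
    for keyword, cwe in rules.items():
        if keyword in description.lower():
            return cwe
    return "CWE-UNKNOWN"

def accuracy(samples: list[tuple[str, str]]) -> float:
    """Fraction of descriptions mapped to the correct CWE."""
    correct = sum(classify(desc) == cwe for desc, cwe in samples)
    return correct / len(samples)

samples = [
    ("Unsanitized input reaches a SQL query", "CWE-89"),
    ("Stack buffer overflow in packet parser", "CWE-120"),
    ("Unsafe deserialization of user data", "CWE-502"),
]
print(accuracy(samples))  # 1.0 on this toy set
```

A real benchmark replaces the keyword rules with an LLM call and uses curated CVE descriptions with expert-assigned CWE labels; the scoring loop stays the same.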

According to Google, Sec-Gemini v1 is capable of performing complex cybersecurity tasks efficiently: conducting in-depth investigations, analyzing emerging threats, assessing the impact of known vulnerabilities, and carrying out comprehensive incident investigations. By combining contextual knowledge with technical insights, the model aims to accelerate decision-making and strengthen organizational security postures.

Though several technology giants are actively developing AI-powered cybersecurity solutions—such as Microsoft's Security Copilot, developed with OpenAI, and Amazon's GuardDuty, which utilizes machine learning to monitor cloud environments—Google appears to have carved out an advantage in this field through its Sec-Gemini v1 technology. 

A key reason for this edge is the model's deep integration with proprietary threat intelligence sources such as Google Threat Intelligence and Mandiant, together with its remarkable performance on industry benchmarks. In an increasingly competitive field, these technical strengths make it a standout solution. Despite the scepticism surrounding the practical value of artificial intelligence in cybersecurity, often dismissed as little more than enhanced assistants that still require heavy human involvement, Google insists that Sec-Gemini v1 is fundamentally different.

The model is geared towards delivering highly contextual, actionable intelligence rather than simply summarizing alerts or making basic recommendations. It not only speeds up decision-making but also reduces the cognitive load on security analysts, allowing teams to respond to emerging threats more quickly and efficiently. At present, Sec-Gemini v1 is available exclusively as a research tool, with access granted only to a select set of professionals, academic institutions, and non-profit organizations willing to share their findings.

Early use-case demonstrations and results suggest the model will make a significant contribution to the evolution of AI-driven threat defence, opening a new era of proactive cyber risk identification, contextualization, and mitigation powered by advanced language models.

In real-world evaluations, Google's security team demonstrated Sec-Gemini v1's advanced analytical capabilities by correctly identifying Salt Typhoon, a recognized threat actor. The model also supplied in-depth contextual information, including vulnerability details, potential exploitation techniques, and associated risk levels. This level of nuanced understanding is possible because Mandiant's threat intelligence provides a rich repository of real-time threat data and adversary profiles.

These integrations allow Sec-Gemini v1 to go beyond conventional pattern recognition, enabling more timely threat analysis and faster, evidence-based decision-making. To foster collaboration and accelerate model refinement, Google has offered limited access to Sec-Gemini v1 to a carefully selected group of cybersecurity practitioners, academics, and non-profit organizations.

Ahead of a broader commercial rollout, Google wants to gather feedback from trusted users. This will not only ensure that the model is more reliable and capable of scaling across different use cases, but also that it is developed in a responsible, community-led manner.

This level of precision and depth is achieved through the model's integration with Mandiant's threat intelligence, which enhances its ability to understand evolving threat landscapes. Making Sec-Gemini v1 freely available to a select group of cybersecurity professionals, academic institutions, and nonprofit organizations for research is part of Google's commitment to responsible innovation and industry collaboration.

This initiative is designed to gather feedback, validate use cases, and confirm the model's effectiveness across diverse environments before broader deployment. Sec-Gemini v1 represents an important step forward in integrating artificial intelligence into cybersecurity, and Google's enthusiasm for advancing the technology while ensuring its responsible development underscores the company's role as a pioneer in the field.

Providing early, research-focused access not only fosters collaboration within the cybersecurity community but also ensures that Sec-Gemini v1 evolves in response to collective expertise and real-world feedback. Given its strong performance across industry benchmarks and its ability to detect and analyze complex threats, the model may well reshape threat defense strategies in the future.

Sec-Gemini v1 couples advanced reasoning with cutting-edge threat intelligence, which can accelerate decision-making, cut response times, and improve organizational security. For all its promise, however, it remains in the research phase, awaiting wider commercial deployment. This phased approach allows the model to be refined carefully and held to the high standards that diverse environments demand.

For this reason, it is very important that stakeholders, such as cybersecurity experts, researchers, and industry professionals, provide valuable feedback during the first phase of the model development process, to ensure that the model's capabilities are aligned with real-world scenarios and needs. This proactive stance by Google in engaging the community emphasizes the importance of integrating AI responsibly into cybersecurity. 

This is not solely about advancing the technology, but also about establishing a collaborative framework that makes it possible to detect and respond to emerging cyber threats more effectively, more quickly, and more securely. The real test will be how Sec-Gemini v1 evolves; it may prove to be one of the most important tools for safeguarding critical systems and infrastructure around the globe.

Meta Launches New Llama 4 AI Models

 



Meta has introduced a fresh set of artificial intelligence models under the name Llama 4. This release includes three new versions: Scout, Maverick, and Behemoth. Each one has been designed to better understand and respond to a mix of text, images, and videos.

The reason behind this launch seems to be rising competition, especially from Chinese companies like DeepSeek. Their recent models have been doing so well that Meta rushed to improve its own tools to keep up.


Where You Can Access Llama 4

The Scout and Maverick models are now available online through Meta’s official site and other developer platforms like Hugging Face. However, Behemoth is still in the testing phase and hasn’t been released yet.

Meta has already added Llama 4 to its own digital assistant, which is built into apps like WhatsApp, Instagram, and Messenger in several countries. However, some special features are only available in the U.S. and only in English for now.


Who Can and Can’t Use It

Meta has placed some limits on who can access Llama 4. People and companies based in the European Union are not allowed to use or share these models, likely due to strict data rules in that region. Also, very large companies (those with over 700 million monthly users) must first get permission from Meta.


Smarter Design, Better Performance

Llama 4 is Meta’s first release using a new design method called "Mixture of Experts." This means the model can divide big tasks into smaller parts and assign each part to a different “expert” inside the system. This makes it faster and more efficient.

For example, the Maverick model has 400 billion total "parameters" (a rough measure of the model's capacity), but it only uses a small fraction of them at a time. Scout, the lighter model, is great for reading long documents or big sections of code and can run on a single high-powered computer chip. Maverick needs a more advanced system to function properly.
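The "Mixture of Experts" idea described above can be sketched in a few lines. This is a deliberately tiny illustration, not Llama 4's actual architecture: the "experts" are stand-in functions and the router is a toy scoring rule, but it shows the key property that only the top-scoring experts run for a given input while the rest stay idle.

```python
# Toy Mixture-of-Experts: a router scores each input and only the
# top-k experts execute, so most of the model's parameters are
# untouched per input. Purely illustrative, not Llama 4's design.

def expert_a(x):
    return x * 2

def expert_b(x):
    return x + 10

def expert_c(x):
    return x ** 2

EXPERTS = [expert_a, expert_b, expert_c]

def router_scores(x):
    """Toy gating function: lower score = more relevant expert.
    Here each expert 'specializes' in inputs near 1, 10, or 100."""
    return [abs(x - 1), abs(x - 10), abs(x - 100)]

def moe_forward(x, top_k=1):
    """Run only the top_k most relevant experts and average them."""
    scores = router_scores(x)
    chosen = sorted(range(len(EXPERTS)), key=lambda i: scores[i])[:top_k]
    outputs = [EXPERTS[i](x) for i in chosen]
    return sum(outputs) / len(outputs)

print(moe_forward(2))   # router picks expert_a -> 2 * 2 = 4.0
print(moe_forward(50))  # router picks expert_b -> 50 + 10 = 60.0
```

In a real MoE transformer the experts are feed-forward subnetworks, the router is a learned layer, and routing happens per token; the efficiency win is the same: total parameter count is huge, but the compute per token only touches the selected experts.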


Behemoth: The Most Advanced One Yet

Behemoth, which is still being developed, will be the most powerful version. It will have a huge amount of learning data and is expected to perform better than many leading models in science and math-based tasks. But it will also need very strong computing systems to work.

One big change in this new version is how it handles sensitive topics. Previous models often avoided difficult questions. Now, Llama 4 is trained to give clearer, fairer answers on political or controversial issues. Meta says the goal is to make the AI more helpful to users, no matter what their views are.

The Rise of Cyber Warfare and Its Global Implications

 

In Western society, the likelihood of cyberattacks is arguably higher now than it has ever been. The National Cyber Security Centre (NCSC) advised UK organisations to strengthen their cyber security when Russia launched its attack on Ukraine in early 2022. In a similar vein, the FBI and Cybersecurity and Infrastructure Security Agency (CISA) issued warnings about increased risks to US companies. 

There is no doubt that during times of global transition and turmoil, cyber security becomes a battlefield in its own right, with both state and non-state actors increasingly turning to cyber-attacks to gain an advantage in combat. Furthermore, as technology advances and an increasing number of devices connect to the internet, the scope and sophistication of cyber-attacks have grown significantly. 

Cyber warfare can take numerous forms, such as breaking into enemy state computer systems, spreading malware, and executing denial-of-service assaults. If a cyber threat infiltrates the right systems, entire towns and cities may be shut off from information, services, and infrastructure that have become fundamental to our way of life, such as electricity, online banking systems, and the internet. 

The European Union Agency for Network and Information Security (ENISA) believes that cyber warfare poses a substantial and growing threat to vital infrastructure. Its research on the "Threat Landscape for Foreign Information Manipulation Interference (FIMI)" states that key infrastructure, such as electricity and healthcare, is especially vulnerable to cyber-attacks during times of conflict or political tension.

In addition, cyber-attacks can disrupt banking systems, inflicting immediate economic loss and affecting individuals. According to the report, residents were a secondary target in more than half of the incidents analysed. Cyber-attacks are especially effective at manipulating public perception, with consequences ranging from inconvenience at the most basic level to, at the most serious, the loss of life. 

Risk to businesses 

War and military conflicts can foster a business environment susceptible to cyber-attacks, since enemies may seek to target firms or sectors deemed critical to a country's economy or infrastructure. They may also choose symbolic targets, like media outlets or high-profile businesses connected with a country. 

Furthermore, the use of cyber-attacks in war can produce a broad sense of instability and uncertainty, which adversaries can use to exploit vulnerabilities in firms' cyber defences.

Cyber-attacks on a company's computer systems, networks, and servers can cause delays and shutdowns, resulting in direct loss of productivity and money. However, they can also harm reputation, prompt regulatory action (including the imposition of fines), and result in consumer loss. 

Prevention tips

To mitigate these risks, firms can take proactive actions to increase their cyber defences, such as self-critical auditing and third-party testing. Employees should also be trained to identify and respond to cyber risks. Furthermore, firms should conduct frequent security assessments to detect vulnerabilities and adopt mitigation techniques.

New WhatsApp Feature Allows Users to Control Media Auto-Saving

 


As part of WhatsApp's ongoing efforts to keep its users safe, a new feature will strengthen the confidentiality of chat histories. The enhancement belongs to a platform-wide initiative to increase privacy safeguards and give users more control over their messaging experience. The upcoming feature offers advanced settings that let individuals control how their conversations are stored, accessed, and used, providing deeper protection against unauthorized access to their communications. 

As WhatsApp refines its privacy architecture, it aims to meet users' evolving expectations about data security while strengthening their trust in the platform. This user-centric approach reflects WhatsApp's focus on keeping communication both seamless and secure in an increasingly digital world.

With this initiative, the platform is highlighting its evolving approach to data security and its goal of a user-friendly, secure messaging environment. Users will be able to customize how their chat data is handled within the app through a set of refined privacy controls, tailoring their privacy preferences to their communication needs rather than relying solely on default settings.

This approach minimizes the risk of unauthorized access and enhances transparency in how data is managed on the platform. These improvements align with a broader industry shift toward giving users more autonomy in protecting their digital interactions. By striking a strong balance between usability and robust privacy standards, WhatsApp continues to position itself as a leader in secure communication.

As social media becomes an increasingly integral part of our daily lives, WhatsApp continues to prioritize tools that build user trust and resilience. In the coming months, it plans to introduce a feature that will let users control how recipients handle their shared content. 

Until now, media files sent through the platform were automatically saved to the recipient's device. With this upcoming change, users will have the option of preventing others from automatically saving the media they send, making it easier to maintain privacy in both one-to-one and group conversations. The new functionality extends privacy protections similar to those of disappearing messages to regular chats and their associated media. 

Activating the feature also brings additional security precautions, such as a restriction on exporting complete chat histories from conversations where the setting is enabled. While the feature does not prevent individuals from forwarding individual messages, it sets stronger limits on sharing and archiving entire conversations. 

This privacy setting lets users limit the reach of their content while keeping the messaging experience as flexible as possible. Another notable aspect of the update is how it interacts with artificial intelligence: when the advanced privacy setting is enabled, participants in that conversation will not be able to use Meta AI features within the chat.

This inclusion signals an underlying commitment to stronger data protection and ethical AI integration. The feature is still in development, and WhatsApp is expected to refine and expand its capabilities ahead of the official release. Once released, it will remain optional, and users will be able to enable or disable it based on their personal preferences. 

Alongside its ongoing improvements to calling features, WhatsApp is reported to be preparing a privacy-focused tool that gives users more control over how their media is shared. The platform has traditionally defaulted to storing pictures and videos sent to users on their devices, and this default behaviour has raised ongoing concerns about data privacy, data protection, and device safety. 

WhatsApp has responded to this problem by allowing senders to decide whether the media they share can be saved by the recipient. Using this feature, WhatsApp introduces a new level of content ownership by giving the sender the ability to decide whether or not their message should be saved. The setting is presented in the chat interface as a toggle option, and functions similarly to the existing Disappearing Messages feature. 

In addition, WhatsApp has developed a system to limit the automatic storage of files shared during a typical conversation. By doing so, it hopes to reduce the risk of sensitive content being accidentally stored on unauthorized or poorly secured devices, or shared further without consent. In an era of growing data vulnerability, this additional control will be particularly useful for users who handle confidential, personal, or time-sensitive information. 

Currently in beta testing, this update is part of WhatsApp's broader strategy of rolling out user-centred privacy improvements in phases. Users enrolled in the beta program are the first to access the feature, which will expand to a wider audience within the next few weeks. WhatsApp encourages users to keep the app up to date to get early access to new privacy tools. 

To give users greater privacy, WhatsApp has developed an advanced chat protection tool that goes beyond controlling media downloads. This upcoming functionality is intended to give users a greater sense of control over how their conversations are managed, particularly around data retention and third-party access. 

By focusing on features that restrict how chats can be saved and exported, the platform aims to create an environment that is both safe and respectful for its users. An important part of this update is the restriction on exporting entire chat histories, which takes effect when users enable the feature. 

Once users activate this setting, recipients will not be able to export conversations that include messages from those users. This aims to prevent the wholesale sharing of private information and addresses concerns over unauthorized data transfers. Forwarding individual messages will still be allowed, but blocking full-conversation exports ensures that long-form chats remain confidential, particularly those containing sensitive or personal material. 

The feature also introduces an important limitation on artificial intelligence tools: while advanced chat privacy is enabled, neither the sender nor the recipient can interact with Meta AI within that conversation. This restriction reflects a broader shift toward cautious, intentional AI integration, keeping private interactions safe from automated processing or analysis. 

The feature is still under development and may be refined further before release. Once widely available, it will be offered as an opt-in setting, so users can enhance their privacy in whatever way they choose.

AI Powers Airbnb’s Code Migration, But Human Oversight Still Key, Say Tech Giants

 

In a bold demonstration of AI’s growing role in software development, Airbnb has successfully completed a large-scale code migration project using large language models (LLMs), dramatically reducing the timeline from an estimated 1.5 years to just six weeks. The project involved updating approximately 3,500 React component test files from Enzyme to the more modern React Testing Library (RTL). 

According to Airbnb software engineer Charles Covey-Brandt, the company’s AI-driven pipeline used a combination of automated validation steps and frontier LLMs to handle the bulk of the transformation. Impressively, 75% of the files were migrated within just four hours, thanks to robust automation and intelligent retries powered by dynamic prompt engineering with context-rich inputs of up to 100,000 tokens. 
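Airbnb has not published its pipeline code, but the loop described above (migrate with an LLM, validate, retry with enriched context, and escalate failures to humans) can be sketched roughly as follows. The helper names `migrate_with_llm` and `validate` are hypothetical stand-ins, not Airbnb's actual tooling.

```python
# Hypothetical sketch of an LLM test-migration loop with validation
# and retries, modeled on the pipeline described in the article.
# migrate_with_llm() and validate() are illustrative stand-ins.

def migrate_with_llm(source: str, feedback: str = "") -> str:
    """Stand-in for an LLM call that rewrites an Enzyme test to
    React Testing Library, optionally guided by prior errors.
    Here we fake it with a trivial string rewrite."""
    return source.replace("enzyme", "@testing-library/react")

def validate(code: str) -> list[str]:
    """Stand-in validation: a real pipeline would run lint, type
    checks, and the migrated test suite. Empty list = pass."""
    return ["enzyme import remains"] if "enzyme" in code else []

def migrate_file(source: str, max_retries: int = 3) -> tuple[str, bool]:
    """Retry the migration, feeding validation errors back into the
    next prompt; files that never pass are flagged for humans."""
    feedback = ""
    for _ in range(max_retries):
        candidate = migrate_with_llm(source, feedback)
        errors = validate(candidate)
        if not errors:
            return candidate, True       # automated success
        feedback = "\n".join(errors)     # enrich the next prompt
    return source, False                 # hand off to manual review

code, ok = migrate_file("import { mount } from 'enzyme';")
print(ok)  # True: the stand-in rewrite removes the enzyme import
```

The structure mirrors what the article reports: most files clear validation in an early pass, failures get retried with richer context, and a residue of stubborn files falls through to engineers.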

Despite this efficiency, about 900 files initially failed validation. Airbnb employed iterative tools and a status-tracking system to bring that number down to fewer than 100, which were finally resolved manually—underscoring the continued need for human intervention in such processes. Other tech giants echo this hybrid approach. Google, in a recent report, noted a 50% speed increase in migrating codebases using LLMs. 

One project converting ID types in the Google Ads system—originally estimated to take hundreds of engineering years—was largely automated, with 80% of code changes authored by AI. However, inaccuracies still required manual edits, prompting Google to invest further in AI-powered verification. Amazon Web Services also highlighted the importance of human-AI collaboration in code migration. 

Its research into modernizing Java code using Amazon Q revealed that developers value control and remain cautious of AI outputs. Participants emphasized their role as reviewers, citing concerns about incorrect or misleading changes. While AI is accelerating what were once laborious coding tasks, these case studies reveal that full autonomy remains out of reach. 

Engineers continue to act as crucial gatekeepers, validating and refining AI-generated code. For now, the future of code migration lies in intelligent partnerships—where LLMs do the heavy lifting and humans ensure precision.

BitcoinOS to Introduce Alpha Mainnet for Digital Ownership Platform

 

BitcoinOS and Sovryn founder Edan Yago is creating a mechanism to turn Bitcoin into a digital ownership platform. Growing up in South Africa and coming from a family of Holocaust survivors, Yago's early experiences sneaking gold coins out of the nation between the ages of nine and eleven influenced his opinion that having financial independence is crucial for both human dignity and survival. 

"Money is power, and power is freedom," Yago explains. "Controlling people's access to capital means controlling their freedom. That's why property rights are critical." This conviction drives his work on BitcoinOS, which seeks to establish a foundation for digital property rights independent of governments or companies. 

Yago sees technology as the fundamental cause of societal transformation. He argues that the Industrial Revolution made slavery economically unviable, not a sudden moral awakening. However, he warns that technology needs direction, referencing how the internet morphed from a promise of decentralisation to a system dominated by industry titans.

When Yago uncovered Bitcoin in 2011, he saw it as "the missing piece" of digital property rights. Bitcoin introduced a decentralised ledger for ownership records, while Ethereum added smart contracts for decentralised computing, but both have size and efficiency restrictions.

BitcoinOS addresses these issues with zero-knowledge proofs, which enable computations to be confirmed without running on every node. "Instead of putting everything on a blockchain, we only store the proof that a computation happened correctly," Yago tells me. This technique could allow Bitcoin to support numerous types of property, including real estate, stocks, digital identities, and other assets recorded in Bitcoin's global ledger.
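The core asymmetry behind "store the proof, not the computation" is that checking a proof can be far cheaper than redoing the work. A classic toy illustration, which is not zero-knowledge and certainly not BitcoinOS's protocol, is integer factoring: finding the factors is slow, but verifying a claimed factorization is a single multiplication. Real ZK proof systems generalize this verify-cheaply property to arbitrary computations (and additionally hide the witness).

```python
# Toy illustration: verification is much cheaper than computation.
# Not a zero-knowledge proof and not BitcoinOS's actual mechanism;
# it only demonstrates the prove-once / verify-cheaply asymmetry.

def prove_factorization(n: int) -> tuple[int, int]:
    """Expensive step (the 'computation'): find a nontrivial
    factor of n by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

def verify(n: int, proof: tuple[int, int]) -> bool:
    """Cheap step: any node checks the claimed factors with one
    multiplication instead of re-running the search."""
    p, q = proof
    return p * q == n and 1 < p < n

n = 62_615_533                 # = 7907 * 7919
proof = prove_factorization(n)
print(verify(n, proof))        # True
```

In a ZK rollup-style design, the analogous flow is: one prover executes a batch of transactions and emits a succinct proof, and every node verifies that proof instead of re-executing the batch.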

Yago characterises the cryptocurrency business as being in its "teenage years," but believes it will mature over the next decade. His vision goes beyond Bitcoin to embrace digital sovereignty and encryption as ways to better safeguard rights than traditional legal systems. 

BitcoinOS plans to launch its alpha mainnet in the coming months. Yago is optimistic about the project's potential: "We're creating property rights for the digital age." When you comprehend that, everything else comes into place." 

The quest for Bitcoin-based solutions coincides with increased institutional adoption. BlackRock, the world's largest asset manager, has recently launched its first Bitcoin exchange-traded product in Europe, now available on platforms in Paris, Amsterdam, and Frankfurt. This follows BlackRock's success in the United States, where it raised more than $50 billion for similar products.

DeepSeek Revives China's Tech Industry, Challenging Western Giants

 



DeepSeek's emergence has profoundly affected the global landscape for artificial intelligence (AI), with consequences reaching well beyond the initial media coverage. Its advancements are transforming industry dynamics and impacting valuations across AI-driven businesses, semiconductor manufacturing, data centres, and energy infrastructure. 


DeepSeek's R1 model is one of the defining milestones of the company's success: a breakthrough system that can rival leading Western artificial intelligence models while using significantly fewer resources. Against conventional assumptions of continued Western dominance in AI, R1 demonstrates China's growing capacity to compete at the highest levels of innovation. 

The R1 model is both efficient and sophisticated, ranking among the most scalable and cost-effective systems on the market. It is built on a Mixture of Experts (MoE) architecture, which optimizes resource allocation by activating only the relevant subnetworks, enhancing performance while reducing computational cost. 

DeepSeek's innovation places it at the forefront of a global AI race, challenging Western dominance and shaping industry trends, investment strategies, and geopolitical competition. Its impact spans industries from technology and finance to energy, and it signals a clear shift toward a decentralized AI ecosystem. 

DeepSeek's accomplishments mark a turning point in the development of artificial intelligence worldwide, underscoring that China can rival, and in some fields surpass, established technological leaders. Innovation is increasingly spread across multiple regions rather than concentrated in Western markets alone. 

As competition intensifies, the balance of power in artificial intelligence research, commercialization, and industrial applications is likely to shift. DeepSeek's emergence as a formidable AI competitor has triggered a wave of rapid innovation across China's technology industry. Since DeepSeek's alleged victory over OpenAI last January, leading Chinese companies have launched several AI solutions built on cost-effective models developed at a fraction of conventional costs. 

The surge in artificial intelligence development poses a direct challenge to OpenAI and Alphabet Inc.'s Google, as well as the broader Western AI ecosystem. In the past two weeks alone, major Chinese companies have unveiled at least ten significant AI products or upgrades. This rapid succession of advancements was not simply a reaction to DeepSeek's achievement but a concerted effort to set new standards for the global AI community. 

Baidu Inc. has launched a new product, Ernie X1, as a direct rival to DeepSeek's R1, while Alibaba Group Holding Ltd. has announced several enhancements to its artificial intelligence reasoning model. At the same time, Tencent Holdings Ltd. has revealed its strategic AI roadmap, presenting its own alternative to the R1 model, and Ant Group Co. has published research indicating that domestically produced chips can cut costs by up to 20 per cent. 

DeepSeek itself unveiled a new version of its model as the company continues to grow, while Meituan, widely recognized as the world's largest meal-delivery platform, has made significant investments in artificial intelligence. As China increasingly embraces open-source artificial intelligence development, established Western technology companies are being pressured to reassess their business strategies. 

In response to DeepSeek's success, OpenAI is reportedly considering a hybrid approach that may include open-sourcing certain technologies while contemplating substantial price increases for its most advanced artificial intelligence models. The widespread adoption of cost-effective AI solutions could also have profound effects on the semiconductor industry, potentially hurting Nvidia's profits. 

Analysts expect that as DeepSeek's economical AI model gains traction, adjustments to the valuations of leading AI chip manufacturers may become inevitable. Chinese artificial intelligence innovation is advancing at a rapid pace, underscoring a fundamental shift in the global technology landscape. Chinese firms are increasingly asserting themselves in artificial intelligence, while Western firms face mounting challenges in maintaining their lead. 

While the long-term consequences of this shift remain undefined, the competitive dynamic emerging within China's AI sector could reshape the future of artificial intelligence worldwide. DeepSeek's advancements in task distribution and processing have allowed it to introduce a highly cost-effective way to deploy artificial intelligence (AI). Through computational efficiency, the company developed its AI model for around $5.6 million, a substantial saving compared to the $100 million or more that Western competitors typically require to develop a comparable model. 

By introducing a resource-efficient and sustainable alternative to traditional artificial intelligence models, this breakthrough has the potential to redefine the economic landscape of AI. By minimizing its reliance on high-performance computing resources, DeepSeek can reduce costs: the model operates with fewer graphics processing unit (GPU) hours, resulting in a significant reduction in hardware and energy consumption. 

Although the United States continues to impose sanctions restricting China's access to advanced semiconductor technologies, DeepSeek has managed to overcome these obstacles through innovative technological solutions. This resilience demonstrates that artificial intelligence development can continue even in challenging regulatory and technological environments. DeepSeek's cost-effective approach also influences broader market trends beyond AI development. 

The move toward lower-cost computation contributed to a decline in the share price of Nvidia, one of the leading manufacturers of artificial intelligence chips. Because of this market adjustment, Apple was able to regain its position as the world's most valuable company by market capitalization. The impact of DeepSeek's innovations extends beyond financial markets: its AI model requires fewer computations and operates on less input data, so it does not rely on expensive hardware and large data centres to function. 

The result is not only lower infrastructure cost but also lower electricity consumption, making AI deployments more energy-efficient. As AI-driven industries continue to evolve, DeepSeek's model may catalyze a broader shift toward more sustainable, cost-effective AI solutions. The rapid advancement of technology in China goes far beyond participation in the DeepSeek trend: the largely open-source AI models developed by Chinese developers are collectively positioned as a concerted effort to set global benchmarks and gain a larger share of the international market. 

Even though it remains unclear whether these innovations will ultimately surpass the capabilities of their Western counterparts, they are exerting significant pressure on the business models of leading technology companies in the United States. OpenAI, for its part, is attempting to maintain a strategic balance: inspired by DeepSeek's open-source success, the company is contemplating releasing certain aspects of its technology as open-source software. 

At the same time, it may charge higher fees for its most advanced services and products. Several industry analysts, including Amr Awadallah, the founder and CEO of Vectara Inc., expect DeepSeek's cost-effective model to spread. If premium chip manufacturers such as Nvidia are adversely affected by this trend, they will likely face market valuation adjustments and shrinking profit margins.

Alibaba Launches Latest Open-source AI Model from Qwen Series for ‘Cost-effective AI agents’

Last week, Alibaba Cloud launched the latest AI model in its “Qwen series,” as large language model (LLM) competition in China continues to intensify following the launch of the famous DeepSeek AI.

The latest "Qwen2.5-Omni-7B" is a multimodal model- it can process inputs like audio/video, text, and images- while also creating real-time text and natural speech responses, Alibaba’s cloud website reports. It also said that the model can be used on edge devices such as smartphones, providing higher efficiency without giving up on performance. 

According to Alibaba, the “unique combination makes it the perfect foundation for developing agile, cost-effective AI agents that deliver tangible value, especially intelligent voice applications.” For instance, the AI can assist visually impaired individuals in navigating their environment via real-time audio description. 

The latest model is open-sourced on the platforms GitHub and Hugging Face, following a rising open-source trend in China after DeepSeek's breakthrough R1 model was released as open source. Open-source software is software whose source code is made freely available on the web for potential modification and redistribution. 

Alibaba claims it has open-sourced more than 200 generative AI models in recent years. Amid the attention on China's AI ambitions, intensified by DeepSeek's shoestring budget and strong capabilities, Alibaba and its generative AI competitors are also releasing new, cost-cutting models and services at an exceptional pace.

Last week, Chinese tech mammoth Baidu launched a new multimodal foundational model and its first reasoning-based model. Likewise, Alibaba introduced its updated Qwen 2.5 AI model in January and also launched a new variant of its AI assistant tool Quark this month. 

Alibaba has also made strong commitments to its AI plan: it recently announced a plan to invest $53 billion in its cloud computing and AI infrastructure over the next three years, surpassing its total spending in the space over the past decade. 

CNBC spoke with Kai Wang, a senior Asia equity analyst at Morningstar, who said that “large Chinese tech players such as Alibaba, which build data centers to meet the computing needs of AI in addition to building their own LLMs, are well positioned to benefit from China's post-DeepSeek AI boom.” According to CNBC, “Alibaba secured a major win for its AI business last month when it confirmed that the company was partnering with Apple to roll out AI integration for iPhones sold in China.”

Revolution or Hype? Meet the AI Agent That’s Automating Invoicing for Thousands

 



French startup Twin has introduced its very first AI-powered automation tool to help business owners who use Qonto. Qonto is a digital banking platform that offers financial services to companies across Europe. Many Qonto users spend hours each month gathering invoices from different sources and uploading them. Twin’s new tool does this job faster and with almost no effort from the user.

The tool is called Invoice Operator. It has been designed to save time by automatically finding and attaching invoices to the right transactions in a Qonto account. This means users no longer have to search for documents themselves or waste time uploading files manually.

Usually, companies use tools like Zapier or software like UiPath to automate tasks. These tools often need coding knowledge or work through complex scripts that break if a website changes. Twin uses a smarter method that copies how a person uses a web browser but with the help of artificial intelligence.

Here’s how Invoice Operator works: when a Qonto user starts the tool, it first checks which transactions are missing invoices. Then it opens a browser and prepares to visit the websites where invoices might be stored. If a login is required, the tool will stop and ask the user to enter their username and password. After logging in, the AI continues its job, finding the needed documents and uploading them to Qonto automatically.

This method is useful because businesses often use many different platforms to make purchases. It would be too difficult and time-consuming to write special instructions for each website. But Twin’s technology can handle thousands of services without needing extra scripts.
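As a rough illustration, the workflow described above can be sketched in plain Python. All names and data shapes here are hypothetical stand-ins; Twin's actual tool drives a real browser with an AI model, whereas this sketch reduces the fetch and upload steps to simple callbacks:

```python
# Hypothetical sketch of the Invoice Operator loop described above.
# Function names and data shapes are illustrative, not Twin's real API.

def find_missing_invoices(transactions):
    """Step 1: identify transactions that have no invoice attached."""
    return [t for t in transactions if t.get("invoice") is None]

def run_invoice_operator(transactions, fetch_invoice, attach_invoice):
    """For each transaction missing an invoice, fetch the document from
    the vendor's site (the AI-driven browser step, which may pause for a
    user login) and attach it to the matching Qonto transaction."""
    attached = 0
    for tx in find_missing_invoices(transactions):
        document = fetch_invoice(tx)
        if document is not None:
            attach_invoice(tx, document)
            attached += 1
    return attached

# Minimal usage with stubbed-out fetch and upload steps:
transactions = [
    {"id": 1, "vendor": "acme", "invoice": None},
    {"id": 2, "vendor": "globex", "invoice": "inv-2.pdf"},
    {"id": 3, "vendor": "acme", "invoice": None},
]
count = run_invoice_operator(
    transactions,
    fetch_invoice=lambda tx: f"inv-{tx['id']}.pdf",
    attach_invoice=lambda tx, doc: tx.update(invoice=doc),
)
print(count)  # number of invoices attached
```

The point of the design, as the article notes, is that one generic browser-driving loop replaces per-website scripts: only the fetch step varies, and the AI handles that variation.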

The tool is powered by an advanced AI model developed by OpenAI, which allows the software to operate a browser in the same way a person would. Twin was one of only a few companies allowed to test this AI model before it was released to the public.

What makes Twin’s tool even more helpful is that it’s very easy to use. Business owners don’t have to understand coding or set up anything complicated. Once logged in, the AI handles the process without further input. This makes it ideal for people who want results without dealing with technical steps.

In the long run, Twin believes its technology can be useful for many other tasks in different industries. For example, it could help online stores handle orders or assist customer support teams in finding information quickly. 

With this launch, Twin is showing how smart automation can reduce boring and repetitive work. The company hopes to bring its AI tools to more people and businesses in the near future.



Gmail Upgrade Announced by Google with Three Billion Users Affected

 


The Google team has officially announced a major update to Gmail that will enhance functionality, improve the user experience, and strengthen security. This update to one of the world's most widely used email platforms is expected to have a significant impact on individuals and businesses alike, providing a more seamless, efficient, and secure way to manage digital communications.

Gmail, launched in 2004, has consistently revolutionized the email industry with its extensive storage, advanced features, and intuitive interface. In recent years, it has expanded its capabilities by integrating with Google Drive, Google Chat, and Google Meet, strengthening its position within the larger Google Workspace ecosystem. 

The recent advancements reflect Google's commitment to innovation and leadership in digital communication technology, particularly as competitive pressures intensify in the email and productivity services sector. Privacy remains a crucial concern as the digital world continues to evolve, and Google has stressed its commitment to safeguarding user data and keeping user privacy paramount. 

In a statement, the company said the new tools can be managed through personalization settings, allowing users to tailor their experience to their preferences. 

Despite these assurances, industry experts suggest that users check their settings carefully to ensure their data is handled in a manner that aligns with their privacy expectations. Those seeking greater control over their personal information may find it prudent to disable AI training features. This measured approach is indicative of broader discussions about the trade-off between advanced functionality and data privacy, especially as competition from Microsoft and other major technology companies continues to intensify. 

AI-powered services increasingly analyze user data, raising concerns about privacy and data security. Chrome search histories, for example, offer highly personal insights into a person's search patterns and how those searches are phrased. Provided users grant permission, the integration of AI will allow the company to use this historical data to create a better user experience.

It is also important to remember, however, that this technology is not simply an executive-assistant tool, but a sophisticated platform operated by one of the largest digital marketing companies in the world. In the same vein, Microsoft's recent approach to integrating artificial intelligence with its services has stirred controversy over user consent and data access, leading users to exercise caution and remain vigilant.

According to PC World, Copilot AI, the company's software for analyzing files stored on OneDrive, is now automatically opted in. Users may not have been aware of this feature, introduced a few months ago, which previously required their consent before use. Although users have been assured of full control over their data, the transparency of such AI-driven access to cloud-stored files is being questioned, as is the extent of the data involved. Businesses, in particular, remain concerned about privacy issues.

Results from GlobalData (cited by Verdict) indicate that more than 75% of organizations are concerned about these risks, contributing to a slowdown in the adoption of artificial intelligence. The study also indicates that 59% of organizations lack confidence in integrating artificial intelligence into their operations, with only 21% reporting an extensive or very extensive deployment of artificial intelligence. 

Just as individual users struggle to keep up with the rapid evolution of artificial intelligence technologies, businesses are often unaware of the security and privacy threats these innovations pose. Industry experts therefore advise organizations to prioritize governance and control mechanisms before adopting AI-based solutions in order to maintain control over their data. Chief information security officers (CISOs) may need to adopt a more cautious approach to mitigate potential risks, such as restricting AI adoption until comprehensive safeguards have been implemented. 

AI-powered innovations are often presented as seamless and efficient tools, but they are supported by extensive frameworks for collecting and analyzing data. For these systems to work effectively, well-defined policies must be in place to protect sensitive data from exposure or misuse. As AI adoption continues to grow, the importance of stringent regulation and corporate oversight will only increase. 

Google's latest Gmail update has been introduced to improve the platform's usability, security, and efficiency for both individuals and businesses. The update includes AI-driven features, an improved interface, and enhanced search capabilities, which will streamline email management and strengthen defenses against cybersecurity threats. 

Through deeper Google Workspace integration, businesses will benefit from improved security measures that safeguard sensitive information while enabling teams to work more efficiently, collaborating more seamlessly while reducing cybersecurity risks. The improvements make Gmail a critical tool within corporate environments, enhancing productivity, communication, and teamwork, and confirm its reputation as a leading email and productivity tool. 

By optimizing the user experience, integrating intelligent automation, strengthening security protocols, and expanding collaborative features, the platform maintains its position as a leading digital communication platform. As the rollout progresses over the coming months, users can expect a more robust and secure email environment that keeps pace with the changing demands of today's digital interactions.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and spying. From facial recognition software to predictive crime prevention, customers are left wondering where to draw the line between safety and overreach as AI-driven systems become more and more integrated into daily life.

The same artificial intelligence (AI) tools that aid in spotting online threats, optimising security procedures, and stopping fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. Without clear regulations and transparency, AI risks undermining rather than defending basic rights. 

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven inventions going awry, raising serious questions. Clearview AI, a facial recognition business, amassed one of the largest facial recognition databases in the world by illegally scraping billions of photos from social media. Clearview's technology was employed by governments and law enforcement organisations across the globe, leading to legal challenges and regulatory action over mass surveillance. 

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly selected for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier guarantees of impartiality, the findings have fuelled calls for greater openness and oversight in government AI use. 

Regulations and consumer protection

Governments worldwide are moving to regulate the ethical use of AI, with a number of significant regulations having an immediate impact on consumers. The European Union's AI Act, scheduled to come into force in 2025, divides AI applications into risk categories. 

Strict rules will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The EU's commitment to responsible AI governance is further reinforced by the possibility of severe sanctions for non-compliant companies. 

In the United States, California's Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its erasure, and to opt out of data sales. This rule adds an important layer of privacy protection in an era when AI-powered data processing is becoming more common. 

The White House has recently introduced the AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a larger push for ethical AI development in policymaking.

The Future of Cloud Ownership Amid Deglobalization

 


Modern digital landscapes have become increasingly challenging for data management because of the rapid expansion of data volumes and sources. Organizations must navigate the complexities of storing vast amounts of data while ensuring seamless access for a variety of users, regardless of their location in the world. Efficient data management has become increasingly important due to the growing demand for real-time data availability and the need to maintain stringent security measures. 

Many enterprises are turning to cloud computing as a reliable solution to these challenges. Cloud-based systems offer the flexibility needed to accommodate diverse access needs while maintaining the integrity and security of the data. By leveraging cloud technologies, a business can streamline its operations, improve collaboration, and develop scalable data management strategies tailored to the needs of its customers.

Making the most of cloud services for complex business needs requires a comprehensive understanding of cloud data management principles. To maximize the benefits of cloud solutions, it is essential to stay current with industry best practices, adopt advanced security measures, and learn from successful implementations. In a world where organizations are constantly embracing digital transformation, the cloud remains one of the most effective ways to manage data while ensuring efficiency, security, and long-term sustainability. 

A comprehensive analysis of global trends shows a noticeable shift toward conservative governance and a retreat from globalization. Increasingly, nations are emphasizing self-reliance as a result of economic, security, and social concerns, concentrating on strengthening their domestic industries to reduce dependency on foreign entities. This transition reflects a wider trend toward economic nationalism, in which governments seek to preserve their interests by relying on local resources and capabilities. 

This shift raises significant challenges for data management and cloud computing, including infrastructure, security, and accessibility issues. Although the cloud is perceived by many as a borderless, abstract entity, it is fundamentally based on physical data centres strategically located across the globe. Organizations carefully select these data centres to enhance performance, decrease latency, and deliver seamless service by placing data closer to end users.

A key challenge for businesses and policymakers as deglobalization takes hold is balancing the need for efficiency with evolving regulatory and geopolitical constraints. Maintaining the integrity, security, and compliance of data in cloud environments requires effective cloud data governance: a framework of structured policies, regulations, and procedures overseeing data collection, storage, protection, and utilization across various cloud platforms.

By combining best practices with advanced technologies, organizations can ensure that the quality and security of their data are maintained regardless of where the information is physically stored or which cloud provider hosts it. In cloud data governance, the primary objective is to enhance data security by enforcing stringent access controls, encryption protocols, and continuous auditing measures. 

As cloud-based infrastructures distribute data across multiple locations, safeguarding sensitive data from unauthorized access, breaches, and cyber threats becomes progressively more important. Besides protecting organizational assets, strong security policies foster trust among customers and stakeholders. Regulatory compliance also remains a fundamental aspect of cloud data governance. For companies operating across many industries and jurisdictions, a wide range of laws apply, such as GDPR, CCPA, and HIPAA. A well-defined governance framework enables companies to navigate the complex world of global regulatory requirements, ensuring that all data management practices align with legal and industry-specific standards and avoiding legal and regulatory penalties. 

In this increasingly data-driven world, companies can reduce risks, avoid legal penalties, and enhance operational efficiency by integrating compliance strategies into cloud governance policies. An interconnected global economy traditionally relies on cross-border infrastructures for data storage and management. Nations typically store and access information through data centers located in other countries, minimizing latency and optimizing the flow of data.

Several companies in Europe, for example, might avoid servers based in Oceania due to performance restrictions and instead host data at intermediary locations in Southeast Asia to improve speed and reduce latency. However, recent geopolitical developments have begun to reshape how cloud data infrastructures are constructed. International conflicts, especially since the beginning of the war in Ukraine, have emphasized the vulnerability of businesses that rely on foreign cloud services. As a result of sanctions imposed by the United States and allied nations on Russia, businesses operating in the region have had to rethink their dependence on foreign data infrastructure.

Other nations took notice of the risks associated with relying on foreign digital infrastructure, raising pressing concerns. These uncertainties have brought the larger issue of data sovereignty to the fore: relying on cloud infrastructure in a foreign country not only exposes companies and governments to potential sanctions but also subjects their operations to varying data privacy and security requirements. Recognizing these risks, many nations have begun to prioritize self-reliance in data management, aiming to gain greater control over their digital assets. 

This shift toward localized cloud infrastructure protects national data from external influences, mitigates regulatory risks, and strengthens long-term digital resilience. The landscape of data management has shifted dramatically, from traditional, locally hosted storage solutions to more dynamic, scalable cloud-based frameworks. 

On-premises storage has long been the industry standard; however, advances in cloud technologies have produced alternatives that are more efficient, secure, and affordable. As organizations realize the benefits of cloud computing, conventional storage methods are gradually being replaced. Digital transformation is expected to drive an increasing number of enterprises to migrate their data management systems to the cloud over the next few years. 

This transition stems not only from technological developments but also from the strategic necessity of remaining competitive in a rapidly evolving business environment. In today's fast-paced business environment, data has become one of the most crucial assets for decision-making, operational efficiency, and innovation, underscoring the need for organizations to implement robust and scalable data management strategies. 

As industries continue to evolve, it is increasingly important that organizations maintain a well-structured and efficient data management framework to ensure long-term success. As the world becomes increasingly data-driven, companies that adopt cloud-based technology solutions will be better able to adapt to market shifts, enhance overall business agility, and leverage real-time analytics, strengthening their competitiveness.

AI Technology is Helping Criminal Groups Grow Stronger in Europe, Europol Warns

 



The European Union’s main police agency, Europol, has raised an alarm about how artificial intelligence (AI) is now being misused by criminal groups. According to their latest report, criminals are using AI to carry out serious crimes like drug dealing, human trafficking, online scams, money laundering, and cyberattacks.

This report is based on information gathered from police forces across all 27 European Union countries. Released every four years, it helps guide how the EU tackles organized crime. Europol’s chief, Catherine De Bolle, said cybercrime is growing more dangerous as criminals use advanced digital tools. She explained that AI is giving criminals more power, allowing them to launch precise and damaging attacks on people, companies, and even governments.

Some crimes, she noted, are not just about making money. In certain cases, these actions are also designed to cause unrest and weaken countries. The report explains that criminal groups are now working closely with some governments to secretly carry out harmful activities.

One growing concern is the rise in harmful online content, especially material involving children. AI is making it harder to track and identify those responsible because fake images and videos look very real. This is making the job of investigators much more challenging.

The report also highlights how criminals are now able to trick people using technology like voice imitation and deepfake videos. These tools allow scammers to pretend to be someone else, steal identities, and threaten people. Such methods make fraud, blackmail, and online theft harder to spot.

Another serious issue is that countries are now using criminal networks to launch cyberattacks against their rivals. Europol noted that many of these attacks are aimed at important services like hospitals or government departments. For example, a hospital in Poland was recently hit by a cyberattack that forced it to shut down for several hours. Officials said the use of AI made this attack more severe.

The report warns that new technology is speeding up illegal activities. Criminals can now carry out their plans faster, reach more people, and operate in more complex ways. Europol urged countries to act quickly to tackle this growing threat.

The European Commission is planning to introduce a new security policy soon. Magnus Brunner, the EU official in charge of internal affairs, said Europe needs to stay alert and improve safety measures. He also promised that Europol will get more staff and better resources in the coming years to fight these threats.

In the end, the report makes it clear that AI is making crime more dangerous and harder to stop. Stronger cooperation between countries and better cyber defenses will be necessary to protect people and maintain safety across Europe.

Over Half of Organizations Lack AI Cybersecurity Strategies, Mimecast Report Reveals


More than 55% of organizations have yet to implement dedicated strategies to counter AI-driven cyber threats, according to new research by Mimecast. The cybersecurity firm's latest State of Human Risk report, based on insights from 1,100 IT security professionals worldwide, highlights growing concerns over AI vulnerabilities, insider threats, and cybersecurity funding shortfalls.

The study reveals that 96% of organizations report improved risk management after adopting a formal cybersecurity strategy. However, security leaders face an increasingly complex threat landscape, with AI-powered attacks and insider risks posing significant challenges.

“Despite the complexity of challenges facing organisations—including increased insider risk, larger attack surfaces from collaboration tools, and sophisticated AI attacks—organisations are still too eager to simply throw point solutions at the problem,” said Mimecast’s human risk strategist VP, Masha Sedova. “With short-staffed IT and security teams and an unrelenting threat landscape, organisations must shift to a human-centric platform approach that connects the dots between employees and technology to keep the business secure.”

The report finds that 95% of organizations are leveraging AI for threat detection, endpoint security, and insider risk analysis. However, 81% express concerns over data leaks from generative AI (GenAI) tools. More than half lack structured strategies to combat AI-driven attacks, while 46% remain uncertain about their ability to defend against AI-powered phishing and deepfake threats.

Insider threats have surged by 43%, with 66% of IT leaders anticipating an increase in data loss from internal sources in the coming year. The report estimates that insider-driven data breaches, leaks, or theft cost an average of $13.9 million per incident. Additionally, 79% of organizations believe collaboration tools have heightened security risks, amplifying both intentional and accidental data breaches.

Despite 85% of organizations raising their cybersecurity budgets, 61% cite financial constraints as a barrier to addressing emerging threats and implementing AI-driven security solutions. The report underscores the need for increased investment in cybersecurity staffing, third-party security services, email security, and collaboration tool protection.

Although 87% of organizations conduct quarterly cybersecurity training, 33% of IT leaders remain concerned about employees mishandling email threats, while 27% cite security fatigue as a growing risk. Meanwhile, 95% of organizations expect email-based cyber threats to persist in 2025, as phishing attacks continue to exploit human vulnerabilities.

Collaboration tools are expanding attack surfaces, with 44% of organizations reporting a rise in cyber threats originating from these platforms. Some 61% believe a cyberattack involving collaboration tools could disrupt business operations in 2025, raising concerns over data integrity and compliance.

The report highlights a shift from traditional security awareness training to proactive Human Risk Management. Notably, just 8% of employees are responsible for 80% of security incidents. Organizations are increasingly turning to AI-driven monitoring and behavioral analytics to detect and mitigate threats early. Some 72% of security leaders see human-centric cybersecurity solutions as essential in the next five years, signaling a shift toward advanced threat detection and risk mitigation.
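The core idea behind that 8%/80% finding is concentration of risk: a small set of users accounts for most incidents, so analytics tools prioritize them for intervention. A minimal sketch of that logic (the user names and threshold are illustrative, not from the report):

```python
from collections import Counter

def top_risk_users(incident_log, share=0.8):
    """Return the smallest group of users who together account for
    at least `share` of all logged security incidents."""
    counts = Counter(incident_log)
    total = sum(counts.values())
    flagged, covered = [], 0
    for user, n in counts.most_common():  # highest incident count first
        if covered / total >= share:
            break
        flagged.append(user)
        covered += n
    return flagged

# Hypothetical incident log: each entry names the user behind one incident.
events = ["amy"] * 8 + ["bob"] * 6 + ["cara"] * 2 + ["dan", "eve"]
print(top_risk_users(events))  # → ['amy', 'bob', 'cara']
```

Real Human Risk Management platforms score many more signals (phishing-test failures, policy violations, anomalous logins), but the prioritization step works the same way: rank users by contribution to overall risk and focus training or controls on the top of the list.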

Hawcx Aims to Solve Passkey Challenges with Passwordless Authentication



Passwords remain a staple of online security, despite their vulnerabilities. According to Verizon, nearly one-third of all reported data breaches in the past decade resulted from stolen credentials, including some of the largest cyberattacks in history.  

In response, the tech industry has championed passkeys as a superior alternative to passwords. Over 15 billion accounts now support passkey technology, with major companies such as Amazon, Apple, Google, and Microsoft driving adoption.

However, widespread adoption remains sluggish due to concerns about portability and usability. Many users find passkeys cumbersome, particularly when managing access across multiple devices.

Cybersecurity startup Hawcx is addressing these passkey limitations with its innovative authentication technology. By eliminating key storage and transmission issues, Hawcx enhances security while improving usability.

Users often struggle with passkey setup and access across devices, leading to account lockouts and costly recovery—a significant challenge for businesses. As Dan Goodin of Ars Technica highlights, while passkeys offer enhanced security, their complexity can introduce operational inefficiencies at scale.

Hawcx, founded in 2023 by Riya Shanmugam (formerly of Adobe, Google, and New Relic), along with Selva Kumaraswamy and Ravi Ramaraju, offers a platform-agnostic solution. Developers can integrate its passwordless authentication by adding just five lines of code.

Unlike traditional passkeys, Hawcx does not store or transmit private keys. Instead, it cryptographically regenerates the private key each time a user logs in. Because the approach does not depend on dedicated secure hardware, it also works on older devices that lack the hardware support passkeys typically require.

“We are not reinventing the wheel fundamentally in most of the processes we have built,” Shanmugam told TechCrunch.

If a user switches devices, Hawcx’s system verifies authenticity before granting access, without storing additional private keys on the new device or in the cloud. This approach differs from standard passkeys, which require syncing private keys across devices or through cloud services.

“No one is challenging beyond the foundation,” Shanmugam said. “What we are challenging is the foundation itself. We are not building on top of what passkeys as a protocol provides. We are saying this protocol comes with an insane amount of limitations for users, enterprises, and developers, and we can make it better.”

Although Hawcx has filed patents, its technology has yet to be widely deployed or independently validated—factors that could influence industry trust. However, the company recently secured $3 million in pre-seed funding from Engineering Capital and Boldcap to accelerate development and market entry.

Shanmugam revealed that Hawcx is in talks with major banks and gaming companies for pilot programs set to launch in the coming weeks. These trials, expected to run for three to six months, will help refine the technology before broader implementation. Additionally, the startup is working with cryptography experts from Stanford University to validate its approach.

“As we are rolling out passkeys, the adoption is low. It’s clear to me that as good as passkeys are and they have solved the security problem, the usability problem still remains,” Tushar Phondge, director of consumer identity at ADP, told TechCrunch.

ADP plans to pilot Hawcx’s solution to assess its effectiveness in addressing passkey-related challenges, such as device dependency and system lockups.

Looking ahead, Hawcx aims to expand its authentication platform by integrating additional security services, including document verification, live video authentication, and background checks.