Here's How Google's Willow Chip Will Impact Startup Innovation in 2025

 

As technology advances at an unprecedented rate, Google's recent unveiling of Willow, its latest quantum computing chip, opens a new chapter for startups. Willow packs 105 qubits, roughly double the count of its predecessor, Sycamore, and can complete certain benchmark computations vastly faster than today's most powerful supercomputers. This milestone is set to ripple across numerous sectors, presenting startups with a rare opportunity to innovate and tackle complex problems.

One of Willow's most important implications is its ability to tackle complex problems that earlier hardware could not handle. Startups in industries such as logistics and pharmaceuticals can use quantum computing to speed up simulations and streamline operations. A drug-discovery startup, for example, could use Willow's computational power to simulate detailed chemical interactions, significantly cutting the time and expense of developing new therapies.

The combination of quantum computing and artificial intelligence could also lead to ground-breaking advances in AI model capabilities. Startups building AI-driven solutions can employ quantum algorithms to handle huge data sets more efficiently. This might mean faster model training and better predictive accuracy across a variety of applications, including personalised healthcare, where quantum-enhanced machine learning tools can analyse patient data for real-time insights and tailored treatments.

Cybersecurity challenges 

Willow's capabilities offer many benefits, but they also bring significant challenges, especially in cybersecurity. The processing power of quantum devices calls the security of existing encryption techniques into question, since widely used schemes may eventually be vulnerable to compromise. Startups that create quantum-resistant security protocols will be critical in meeting this growing demand, establishing themselves in a booming niche market.

Access and collaboration

Google’s advancements with the Willow chip might also democratize access to quantum computing. Startups may soon benefit from cloud-based quantum computing resources, eliminating the substantial capital investment required for hardware acquisition. This model could encourage collaborative ecosystems between startups, established tech firms, and academic institutions, fostering knowledge-sharing and accelerating innovation.

Quantum Computing Meets AI: A Lethal Combination

 

Quantum computers are getting closer to Q-day, the day when they will be able to crack existing encryption techniques, even as we assign more infrastructure functions to artificial intelligence (AI). This could jeopardise both the security of digital communications and the autonomous control systems that rely on AI and machine learning (ML) for decision-making.

As AI and quantum computing converge to produce remarkable new technologies, they will also combine to create new attack vectors, including quantum cryptanalysis.

How far off is this threat?

For major organisations and governments, the transition to post-quantum cryptography (PQC) will take at least ten years, if not much longer. Since the last major encryption standard upgrade, networks and data volumes have grown enormously, and that growth has enabled large language models (LLMs) and related specialised technologies.

While today's general-purpose models are intriguing and even enjoyable, sophisticated AI will be trained on expertly curated data to perform specialised tasks. Such systems will rapidly absorb existing research and knowledge, producing profound insights and innovations at an accelerating rate. This will complement, not replace, human brilliance, but there will be a disruptive phase for cybersecurity.

If a cryptographically relevant quantum computer arrives before PQC is fully deployed, the repercussions in the AI era are unknown. Ordinary hacking, data loss, and even disinformation on social media will bring back memories of the good old days before AI wielded by malicious actors became the main supplier of cyber carcinogens.

When AI models are hijacked, the consequences of feeding live AI-controlled systems tailored, maliciously crafted data will become a global concern. The debate in Silicon Valley and political circles is already raging over whether AI should be allowed to carry out catastrophic military operations. Regardless of today's concerns, this is undoubtedly where things are heading.

However, most networks and economic activity require explicit and urgent defensive action now. To withstand combined AI and quantum threats, critical infrastructure design and networks must advance swiftly and with significantly increased security. With so much at stake, and with combined AI-quantum attacks still unknown, one-size-fits-all upgrades to protocols and libraries such as TLS will not suffice.

Internet 1.0 was built on 1970s assumptions and limitations that predate modern cloud technology and its remarkable redundancy. The next version must be exponentially better, anticipating the unknown while assuming that our current security estimates are wrong. An AI version of Stuxnet should not surprise cybersecurity experts; the previous iteration showed warning signs years ago.

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is reportedly yielding diminishing returns, even as the company manages the expectations that come with its recent fundraising.

According to The Information, OpenAI's upcoming AI model, codenamed Orion, reached GPT-4's level of performance in staff testing after completing only 20% of its training.

However, the step from GPT-4 to the upcoming GPT-5 is expected to bring smaller quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” the report notes. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.
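To see why, it helps to sketch the shape of a typical training curve. The snippet below assumes a hypothetical power-law loss curve; the exponent and constants are invented for illustration and are not OpenAI's figures.

```python
# Hypothetical power-law training curve, purely for illustration:
# loss drops quickly early in training and flattens out later.
def loss(train_fraction: float) -> float:
    return 1.0 + 2.0 * train_fraction ** -0.3  # invented constants

early_gain = loss(0.01) - loss(0.20)   # improvement over the first 20%
late_gain = loss(0.20) - loss(1.00)    # improvement over the final 80%
print(f"loss reduction during first 20% of training: {early_gain:.2f}")
print(f"loss reduction during final 80% of training: {late_gain:.2f}")
```

Under this toy curve, most of the loss reduction happens early in training, which is the pattern the report describes.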

With this financial backing come higher investor expectations, as well as technical hurdles that confound typical AI scaling approaches. If early versions do not live up to expectations, OpenAI's future fundraising prospects may be less attractive. The limitations described in the report underscore a major difficulty for the entire AI industry: the dwindling supply of high-quality training data and the need to stay relevant in an increasingly competitive environment.

A research paper published in June (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. According to The Information, developers have "largely squeezed as much out of" the data that has driven the tremendous gains in AI witnessed in recent years. OpenAI is fundamentally rethinking its approach to AI development to meet these challenges.

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

HiddenLayer Unveils "ShadowLogic" Technique for Implanting Codeless Backdoors in AI Models

 

Manipulating an AI model's computational graph can allow attackers to insert codeless, persistent backdoors, reports AI security firm HiddenLayer. This vulnerability could lead to malicious use of machine learning (ML) models in a variety of applications, including supply chain attacks.

Known as ShadowLogic, the technique works by tampering with the computational graph structure of a model, triggering unwanted behavior in downstream tasks. This manipulation opens the door to potential security breaches across AI supply chains.

Traditional software backdoors give attackers unauthorized system access by bypassing security layers, and AI models can likewise be implanted with backdoors or manipulated to yield malicious outcomes. With conventional model backdoors, however, later changes to the model, such as retraining or fine-tuning, can disturb these hidden pathways.

HiddenLayer explains that using ShadowLogic enables threat actors to embed codeless backdoors that persist through model fine-tuning, allowing highly targeted and stealthy attacks.

Building on prior research showing that backdoors can be implemented during the training phase, HiddenLayer investigated how to inject a backdoor into a model's computational graph post-training. This bypasses the need for training phase vulnerabilities.

A computational graph is a mathematical blueprint that controls a neural network's operations. These graphs represent data inputs, mathematical functions, and learning parameters, guiding the model’s forward and backward propagation.

According to HiddenLayer, this graph acts like compiled code in a program, with specific instructions for the model. By manipulating the graph, attackers can override normal model logic, triggering predefined behavior when the model processes specific input, such as an image pixel, keyword, or phrase.
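To make the idea concrete, here is a minimal sketch of a post-training graph edit using the ONNX Python API. This is not HiddenLayer's actual ShadowLogic implementation: the file names are hypothetical, and the appended node simply adds a constant to the model's output rather than implementing a hidden trigger, but it shows how behaviour can change by rewriting the serialized graph alone, with no training code involved.

```python
# Toy illustration of editing a serialized computational graph after
# training. Assumes a local file "model.onnx" with a single output.
import onnx
from onnx import helper

model = onnx.load("model.onnx")
graph = model.graph

# Inspect the graph: each node is an operation wired to named tensors.
for node in graph.node[:5]:
    print(node.op_type, list(node.input), list(node.output))

# Append an extra operation that post-processes the model's output.
# A real backdoor would make this conditional on a trigger pattern;
# here we just add a constant bias to show that graph-level edits
# alter behaviour without touching any training or inference code.
bias = helper.make_tensor("bd_bias", onnx.TensorProto.FLOAT, [1], [10.0])
graph.initializer.append(bias)

old_output = graph.output[0].name
new_node = helper.make_node(
    "Add", inputs=[old_output, "bd_bias"], outputs=["bd_output"], name="bd_add"
)
graph.node.append(new_node)
graph.output[0].name = "bd_output"  # the graph now returns the edited tensor

onnx.save(model, "model_edited.onnx")
```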

ShadowLogic leverages the wide range of operations supported by computational graphs to embed triggers, which could include checksum-based activations or even entirely new models hidden within the original one. HiddenLayer demonstrated this method on models like ResNet, YOLO, and Phi-3 Mini.

These compromised models behave normally but respond differently when presented with specific triggers. They could, for example, fail to detect objects or generate controlled responses, demonstrating the subtlety and potential danger of ShadowLogic backdoors.

Such backdoors introduce new vulnerabilities in AI models that do not rely on traditional code exploits. Embedded within the model’s structure, these backdoors are harder to detect and can be injected across a variety of graph-based architectures.

ShadowLogic is format-agnostic and can be applied to any model that uses graph-based architectures, regardless of the domain, including autonomous navigation, financial predictions, healthcare diagnostics, and cybersecurity.

HiddenLayer warns that no AI system is safe from this type of attack, whether it involves simple classifiers or advanced large language models (LLMs), expanding the range of potential targets.

Meta Unveils its First Open AI Model That Can Process Images

 

Meta has released new versions of its renowned open source AI model Llama, including small and medium-sized models capable of running workloads on edge and mobile devices. 

Llama 3.2 models were showcased at the company's annual Meta Connect event. They support multilingual text generation as well as vision applications such as image recognition.

“This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” stated Mark Zuckerberg, CEO of Meta.

Llama 3.2 builds on the huge open-source model Llama 3.1, which was released in late July. That model was the largest open-source AI model released to date, with 405 billion parameters (parameters are the adjustable variables within an AI model that help it learn patterns from data). A model's parameter count is a rough indicator of its capacity to interpret and generate human-like text.

The new Llama models presented at Meta Connect 2024 are significantly smaller. Meta explained that it chose to develop smaller models because not all researchers have the computational resources and expertise needed to run a model as large as Llama 3.1.

In terms of performance, Meta says its new Llama 3.2 models compete with industry-leading systems from Anthropic and OpenAI. The 3B model exceeds Google's Gemma 2 2.6B and Microsoft's Phi 3.5-mini on tasks such as instruction following and content summarisation. The 90B version, the largest of the new models, surpasses both Claude 3 Haiku and GPT-4o mini on a variety of benchmarks, including the widely used MMLU evaluation.

How to access Llama 3.2 models 

The new Llama 3.2 models are open source, so anyone can download and use them to power AI applications. The models can be downloaded straight from llama.com or Hugging Face, a popular open-source repository platform. Llama 3.2 models are also available through a number of cloud providers, including Google Cloud, AWS, Nvidia, Microsoft Azure, and Groq, among others.
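As a quick sketch of what getting started might look like, the snippet below loads a small Llama 3.2 model through the Hugging Face transformers library. The repository ID shown and the gated-access steps (accepting Meta's license on the model card and logging in with a Hugging Face token) are assumptions to verify before running.

```python
# Minimal sketch: download a small Llama 3.2 model from Hugging Face and
# generate text. Assumes the license has been accepted for this repo and
# the machine is authenticated (e.g. `huggingface-cli login`).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed repository id
)

prompt = "In one sentence, what does 'multimodal' mean for an AI model?"
result = generator(prompt, max_new_tokens=60, do_sample=False)
print(result[0]["generated_text"])
```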

According to figures published in early September, demand for Meta's Llama models from cloud customers increased tenfold between January and July, and it is expected to rise further with the new 3.2 line. Meta partner Together AI is providing free access to the vision version of Llama 3.2 11B on its platform until the end of the year.

Vipul Ved Prakash, founder and CEO of Together AI, stated that the new multimodal models will drive the adoption of open-source AI among developers and organisations. 

“We’re thrilled to partner with Meta to offer developers free access to the Llama 3.2 vision model and to be one of the first API providers for Llama Stack,” Prakash noted. “With Together AI's support for Llama models and Llama Stack, developers and enterprises can experiment, build, and scale multimodal applications with the best performance, accuracy, and cost.”

Data Poisoning: The Hidden Threat to AI Models



As artificial intelligence and machine learning continue to develop at a rapid pace, a new form of attack is emerging that can quietly undermine the systems we use today: data poisoning. This type of attack involves tampering with the data used to train AI models so that they malfunction, often undetectably. The issue came to light recently when JFrog, a software management company, uncovered more than 100 malicious models on Hugging Face, the popular AI repository.

What is Data Poisoning?

Data poisoning is an attack on AI models that works by corrupting the data used to train them, with the aim of making the model produce wrong predictions or decisions. Unlike traditional hacking, it does not require direct access to the target system: the attacker manipulates input data either before or after the AI model is deployed, which makes the attack very difficult to detect.

One variant happens at the training phase, when an attacker manages to inject malicious data into a model's training set. Another happens post-deployment, when poisoned data is fed to a live model so that it yields wrong outputs. Both kinds of attack are hard to detect and damage the AI system over the long run.
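A toy illustration of the training-phase variant is sketched below using scikit-learn on synthetic data; the model choice, the 20% label-flip rate, and every other number are arbitrary assumptions for demonstration, not a description of a real attack.

```python
# Toy demonstration of training-time poisoning via label flipping.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# An attacker flips the labels of 20% of the training rows.
rng = np.random.default_rng(0)
poisoned_y = y_tr.copy()
idx = rng.choice(len(poisoned_y), size=len(poisoned_y) // 5, replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, poisoned_y)
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```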

In its research, JFrog found a number of suspicious models uploaded to Hugging Face, a community platform where users share AI models. These contained encoded malicious code, which the researchers believe may have been embedded by attackers, potentially operating from the KREOnet research network in Korea. Most worrying of all, these malicious models evaded detection by masquerading as benign.

This is a serious threat because many AI systems today draw on vast amounts of data from many sources, including the open internet. If attackers manage to alter the data used to train a model, the consequences could range from misleading results to large-scale cyberattacks.

Why It's Hard to Detect

One of the major challenges with data poisoning is that AI models are built on enormous data sets, so researchers rarely know exactly what has gone into a model. That lack of visibility gives attackers room to sneak poisoned data in without being caught.

It gets worse: AI systems that continuously scrape the web to update themselves could end up poisoning their own training data. This raises the alarming possibility of an AI system's gradual breakdown, or "degenerative model collapse."

The Consequences of Ignoring the Threat

Left unmitigated, data poisoning could allow attackers to plant stealthy backdoors in AI software, enabling them to carry out malicious actions or make an AI system behave in unexpected ways. In practice, that could mean running malicious code, enabling phishing, or rigging AI predictions for nefarious ends.

The cybersecurity industry must treat this as a serious threat as dependence grows on interconnected generative AI systems and LLMs. Failing to do so would leave the entire digital ecosystem exposed.

How to Defend Against Data Poisoning

Protecting AI models against data poisoning calls for vigilance throughout the AI development cycle. Experts say organisations should train models only on data from sources they can trust. The Open Web Application Security Project (OWASP) has published a list of best practices for avoiding data poisoning, including frequent checks for biases and anomalies in the training data.
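As a sketch of what such a check might look like, the snippet below uses scikit-learn's IsolationForest to flag anomalous rows in a synthetic training set before they reach the model; the contamination rate and feature values are placeholder assumptions, not recommended settings.

```python
# Flag anomalous training rows for review before training.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(1000, 8))    # expected training data
outliers = rng.normal(6, 1, size=(20, 8))    # suspicious additions
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=42).fit(X)
flags = detector.predict(X)                  # -1 marks likely anomalies
print("rows flagged for review:", int((flags == -1).sum()))
```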

Other recommendations include running multiple AI algorithms that cross-check each other's results to spot inconsistencies. If a model starts producing strange results, fallback mechanisms should be in place to limit the harm.
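One simple way to realise that cross-checking idea, sketched here with two scikit-learn classifiers on synthetic data, is to flag any input on which independently trained models disagree and route it for human review.

```python
# Cross-check two independently trained models and flag disagreements.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1500, n_features=12, random_state=1)
model_a = LogisticRegression(max_iter=1000).fit(X, y)
model_b = GradientBoostingClassifier(random_state=1).fit(X, y)

new_inputs = X[:200]                  # stand-in for live incoming data
disagree = model_a.predict(new_inputs) != model_b.predict(new_inputs)
print("inputs flagged for review:", int(disagree.sum()))
```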

Defences also include simulated data poisoning attacks, run by cybersecurity teams to test their AI systems' robustness. While it is hard to build an AI system that is 100% secure, frequent validation of predictive outputs goes a long way towards detecting and preventing poisoning.

Creating a Secure Future for AI

As AI keeps evolving, trust in these systems has to be earned. That will only happen when the entire AI ecosystem, including its supply chains, is brought within the cybersecurity framework. Monitoring AI systems' inputs and outputs for unusual or irregular behaviour is one way to get there, helping organisations build more robust and trustworthy AI models.

Ultimately, the future of AI depends on our ability to keep pace with emerging threats like data poisoning. Businesses that take proactive steps to secure their AI systems today protect themselves against one of the most serious challenges facing the digital world.

The bottom line is that AI security is not just about algorithms; it's about the integrity of the data powering those algorithms.

Adopting a Connected Mindset: A Strategic Imperative for National Security

 

In today's rapidly advancing technological landscape, connectivity goes beyond being just a buzzword—it has become a strategic necessity for both businesses and national defense. As security threats grow more sophisticated, an integrated approach that combines technology, strategic planning, and human expertise is essential. Embracing a connected mindset is crucial for national security, and here's how it can be effectively implemented.

What is a Connected Mindset?

A connected mindset involves understanding that security is not an isolated function but a comprehensive effort that spans multiple domains and disciplines. It requires seamless collaboration between government, private industry, and academia to address security challenges. This approach recognizes that modern threats are interconnected and complex, necessitating a comprehensive response.

Over the past few decades, security threats have evolved significantly. While traditional threats like military aggression still exist, newer challenges such as cyber threats, economic espionage, and misinformation have emerged. Cybersecurity has become a major concern as both state and non-state actors develop new methods to exploit vulnerabilities in digital infrastructure. Attacks on critical systems can disrupt essential services, leading to widespread chaos and posing risks to public safety. The recent rise in ransomware attacks on healthcare, financial sectors, and government entities underscores the need for a comprehensive approach to these challenges.

The Central Role of Technology

At the core of the connected mindset is technology. Advances in artificial intelligence (AI), machine learning, and big data analytics provide valuable tools for detecting and countering threats. However, these technologies need to be part of a broader strategy that includes human insight and collaborative efforts. AI can process large datasets to identify patterns and anomalies indicating potential threats, while machine learning algorithms can predict vulnerabilities and suggest proactive measures. Big data analytics enable real-time insights into emerging risks, facilitating faster and more effective responses.

Despite the critical role of technology, human expertise remains indispensable. Cybersecurity professionals, intelligence analysts, and policymakers must collaborate to interpret data, evaluate risks, and devise strategies. Public-private partnerships are vital for fostering this cooperation, as the private sector often possesses cutting-edge technology and expertise, while the government has access to critical intelligence and regulatory frameworks. Together, they can build a more resilient security framework.

To implement a connected mindset effectively, consider the following steps:
  • Promote Continuous Education and Training: Regular training programs are essential to keep professionals up-to-date with the latest threats and technologies. Cybersecurity certifications, workshops, and simulations can help enhance skills and preparedness.
  • Encourage Information Sharing: Establishing robust platforms and protocols for information sharing between public and private sectors can enhance threat detection and response times. Shared information must be timely, accurate, and actionable.
  • Invest in Advanced Technology: Governments and organizations should invest in AI, machine learning, and advanced cybersecurity tools to stay ahead of evolving threats, ensuring real-time threat analysis capabilities.
  • Foster Cross-Sector Collaboration: Cultivating a culture of collaboration is crucial. Regular meetings, joint exercises, and shared initiatives can build stronger partnerships and trust.
  • Develop Supportive Policies: Policies and regulations should encourage a connected mindset by promoting collaboration and innovation while protecting data privacy and supporting effective threat detection.
A connected mindset is not just a strategic advantage—it is essential for national security. As threats evolve, adopting a holistic approach that integrates technology, human insight, and cross-sector collaboration is crucial. By fostering this mindset, we can create a more resilient and secure future capable of addressing the complexities of modern security challenges. In a world where physical and digital threats increasingly overlap, a connected mindset paves the way for enhanced national security and a safer global community.

How AI and Machine Learning Are Revolutionizing Cybersecurity

 

The landscape of cybersecurity has drastically evolved over the past decade, driven by increasingly sophisticated and costly cyberattacks. As more businesses shift online, they face growing threats, creating a higher demand for innovative cybersecurity solutions. The rise of AI and machine learning is reshaping the cybersecurity industry, offering powerful tools to combat these modern challenges. 

AI and machine learning, once seen as futuristic technologies, are now integral to cybersecurity. By processing vast amounts of data and identifying patterns at incredible speeds, these technologies surpass human capabilities, providing a new level of protection. Traditional cybersecurity methods relied heavily on human expertise and signature-based detection, which were effective in the past. However, with the increasing complexity of cybercrime, AI offers a significant advantage by enabling faster and more accurate threat detection and response. Machine learning is the engine driving AI-powered cybersecurity solutions. 

By feeding large datasets into algorithms, machine learning models can uncover hidden patterns and predict potential threats. This ability allows AI to detect unknown risks and anticipate future attacks, significantly enhancing the effectiveness of cybersecurity measures. AI-powered systems can mimic human thought processes to some extent, enabling them to learn from experience, adapt to new challenges, and make real-time decisions. These systems can block malicious traffic, quarantine files, and even take independent actions to counteract threats, all without human intervention. By analyzing vast amounts of data rapidly, AI can identify patterns and predict potential cyberattacks. This proactive approach allows security teams to defend against threats before they escalate, reducing the risk of damage. 
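As a minimal sketch of that pattern, the snippet below trains a classifier on synthetic, labelled "traffic" features and scores held-out connections; the two features and all numbers are stand-ins chosen for illustration, not real telemetry.

```python
# Learn to separate benign from malicious connections on toy features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
benign = np.column_stack([rng.normal(500, 100, 5000),    # bytes per flow
                          rng.normal(0.2, 0.05, 5000)])  # failed-login rate
attack = np.column_stack([rng.normal(5000, 800, 300),
                          rng.normal(0.7, 0.1, 300)])
X = np.vstack([benign, attack])
y = np.array([0] * len(benign) + [1] * len(attack))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)
model = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_tr, y_tr)
print("held-out detection accuracy:", model.score(X_te, y_te))
```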

Additionally, AI can automate incident response, acting swiftly to detect breaches and contain damage, often faster than any human could. AI also plays a crucial role in hunting down zero-day threats, which are previously unknown vulnerabilities that attackers can exploit before they are patched. By analyzing data for anomalies, AI can identify these vulnerabilities early, allowing security teams to address them before they are exploited. 

Moreover, AI enhances cloud security by analyzing data to detect threats and vulnerabilities, ensuring that businesses can safely transition to cloud-based systems. The integration of AI in various cybersecurity tools, such as Security Orchestration, Automation, and Response (SOAR) platforms and endpoint protection solutions, is a testament to its potential. With AI’s ability to detect and respond to threats faster and more accurately than ever before, the future of cybersecurity looks promising.