
AI System 'Optimise' Could Help GPs Identify High-Risk Heart Patients


Artificial intelligence (AI) is proving to be a game-changer in healthcare by helping general practitioners (GPs) identify patients who are most at risk of developing conditions that could lead to severe heart problems. Researchers at the University of Leeds have contributed to training an AI system called Optimise, which analyzed the health records of more than two million people. The AI was designed to detect undiagnosed conditions and identify individuals who had not received appropriate medications to help reduce their risk of heart-related issues. 

From the two million health records it scanned, Optimise identified over 400,000 people at high risk for serious conditions such as heart failure, stroke, and diabetes. This group represented 74% of patients who ultimately died from heart-related complications, underscoring the critical need for early detection and timely medical intervention. In a pilot study involving 82 high-risk patients, the AI found that one in five individuals had undiagnosed moderate to high-risk chronic kidney disease. 

Moreover, more than half of the patients with high blood pressure were prescribed new medications to better manage their risk of heart problems. Dr. Ramesh Nadarajah, a health data research fellow from the University of Leeds, noted that deaths related to heart conditions are often caused by a constellation of factors. According to him, Optimise leverages readily available data to generate insights that could assist healthcare professionals in delivering more effective and timely care to their patients. Early intervention is often more cost-effective than treating advanced diseases, making the use of AI a valuable tool for both improving patient outcomes and optimizing healthcare resources. 

The study’s findings suggest that using AI in this way could allow doctors to treat patients earlier, potentially reducing the strain on the NHS. Researchers plan to carry out a larger clinical trial to further test the system’s capabilities. The results were presented at the European Society of Cardiology Congress in London. Professor Bryan Williams pointed out that a quarter of all deaths in the UK are due to heart and circulatory diseases. This innovative study harnesses the power of evolving AI technology to detect a range of conditions that contribute to these diseases, offering a promising new direction in medical care.

Navigating AI and GenAI: Balancing Opportunities, Risks, and Organizational Readiness


The rapid integration of AI and GenAI technologies within organizations has created a complex landscape, filled with both promising opportunities and significant challenges. While the potential benefits of these technologies are evident, many companies find themselves struggling with AI literacy, cautious adoption practices, and the risks associated with immature implementation. This has led to notable disruptions, particularly in the realm of security, where data threats, deepfakes, and AI misuse are becoming increasingly prevalent. 

A recent survey revealed that 16% of organizations have experienced disruptions directly linked to insufficient AI maturity. Despite recognizing the potential of AI, system administrators face significant gaps in education and organizational readiness, leading to mixed results. While AI adoption has progressed, the knowledge needed to leverage it effectively remains inadequate. This knowledge gap has decreased only slightly, with 60% of system administrators admitting to a lack of understanding of AI’s practical applications. Security risks associated with GenAI are particularly urgent, especially those related to data. 

With the increased use of AI, enterprises have reported a surge in proprietary source code being shared within GenAI applications, accounting for 46% of all documented data policy violations. This raises serious concerns about the protection of sensitive information in a rapidly evolving digital landscape. In a troubling trend, concerns about job security have led some cybersecurity teams to hide security incidents. The most alarming AI threats include GenAI model prompt hacking, data poisoning, and ransomware as a service. Additionally, 41% of respondents believe GenAI holds the most promise for addressing cyber alert fatigue, highlighting the potential for AI to both enhance and challenge security practices. 

The rapid growth of AI has also put immense pressure on CISOs, who must adapt to new security risks. A significant portion of security leaders express a lack of confidence in their workforce’s ability to identify AI-driven cyberattacks. The overwhelming majority of CISOs have admitted that the rise of AI has made them reconsider their future in the role, underscoring the need for updated policies and regulations to secure organizational systems effectively. Meanwhile, employees have increasingly breached company rules regarding GenAI use, further complicating the security landscape. 

Despite the cautious optimism surrounding AI, there is a growing concern that AI might ultimately benefit malicious actors more than the organizations trying to defend against them. As AI tools continue to evolve, organizations must navigate the fine line between innovation and security, ensuring that the integration of AI and GenAI technologies does not expose them to greater risks.

NIST Introduces ARIA Program to Enhance AI Safety and Reliability


The National Institute of Standards and Technology (NIST) has announced a new program called Assessing Risks and Impacts of AI (ARIA), aimed at better understanding the capabilities and impacts of artificial intelligence. ARIA is designed to help organizations and individuals assess whether AI technologies are valid, reliable, safe, secure, private, and fair in real-world applications. 

This initiative follows several recent announcements from NIST, including developments related to the Executive Order on trustworthy AI and the U.S. AI Safety Institute's strategic vision and international safety network. The ARIA program, along with other efforts supporting Commerce’s responsibilities under President Biden’s Executive Order on AI, demonstrates NIST and the U.S. AI Safety Institute’s commitment to minimizing AI risks while maximizing its benefits. 

The ARIA program addresses real-world needs as the use of AI technology grows. This initiative will support the U.S. AI Safety Institute, expand NIST’s collaboration with the research community, and establish reliable methods for testing and evaluating AI in practical settings. The program will consider AI systems beyond theoretical models, assessing their functionality in realistic scenarios where people interact with the technology under regular use conditions. This approach provides a broader, more comprehensive view of the effects of these technologies. The program also helps operationalize the recommendations of NIST’s AI Risk Management Framework, which calls for both quantitative and qualitative techniques for analyzing and monitoring AI risks and impacts.

ARIA will further develop methodologies and metrics to measure how well AI systems function safely within societal contexts. By focusing on real-world applications, ARIA aims to ensure that AI technologies can be trusted to perform reliably and ethically outside of controlled environments. The findings from the ARIA program will support and inform NIST’s collective efforts, including those through the U.S. AI Safety Institute, to establish a foundation for safe, secure, and trustworthy AI systems. This initiative is expected to play a crucial role in ensuring AI technologies are thoroughly evaluated, considering not only their technical performance but also their broader societal impacts. 

The ARIA program represents a significant step forward in AI oversight, reflecting a proactive approach to addressing the challenges and opportunities presented by advanced AI systems. As AI continues to integrate into various aspects of daily life, the insights gained from ARIA will be instrumental in shaping policies and practices that safeguard public interests while promoting innovation.

Are The New AI PCs Worth The Hype?


In recent years, the realm of computing has witnessed a remarkable transformation with the rise of AI-powered PCs. These cutting-edge machines are not just your ordinary computers; they are equipped with advanced artificial intelligence capabilities that are revolutionizing the way we work, learn, and interact with technology. From enhancing productivity to unlocking new creative possibilities, AI PCs are rapidly gaining popularity and reshaping the digital landscape. 

AI PCs, also known as artificial intelligence-powered personal computers, are a new breed of computing devices that integrate AI technology directly into the hardware and software architecture. Unlike traditional PCs, which rely solely on the processing power of the CPU and GPU, AI PCs leverage specialized AI accelerators, neural processing units (NPUs), and machine learning algorithms to deliver unparalleled performance and efficiency. 

One of the key features of AI PCs is their ability to adapt and learn from user behavior over time. By analyzing patterns in user interactions, preferences, and workflow, these intelligent machines can optimize performance, automate repetitive tasks, and personalize user experiences. Whether it's streamlining workflow in professional settings or enhancing gaming experiences for enthusiasts, AI PCs are designed to cater to diverse user needs and preferences. One of the most significant advantages of AI PCs is their ability to handle complex computational tasks with unprecedented speed and accuracy. 

From natural language processing and image recognition to data analysis and predictive modeling, AI-powered algorithms enable these machines to tackle tasks that were once considered beyond the capabilities of traditional computing systems. This opens up a world of possibilities for industries ranging from healthcare and finance to manufacturing and entertainment, where AI-driven insights and automation are driving innovation and efficiency. 

Moreover, AI PCs are empowering users to unleash their creativity and explore new frontiers in digital content creation. With advanced AI-powered tools and software applications, users can generate realistic graphics, compose music, edit videos, and design immersive virtual environments with ease. Whether you're a professional artist, filmmaker, musician, or aspiring creator, AI PCs provide the tools and resources to bring your ideas to life in ways that were previously unimaginable. 

Another key aspect of AI PCs is their role in facilitating seamless integration with emerging technologies such as augmented reality (AR) and virtual reality (VR). By harnessing the power of AI to optimize performance and enhance user experiences, these machines are driving the adoption of immersive technologies across various industries. From immersive gaming experiences to interactive training simulations and virtual collaboration platforms, AI PCs are laying the foundation for the next generation of digital experiences. 

AI PCs represent a paradigm shift in computing that promises to redefine the way we interact with technology and unleash new possibilities for innovation and creativity. With their advanced AI capabilities, these intelligent machines are poised to drive significant advancements across industries and empower users to achieve new levels of productivity, efficiency, and creativity. As the adoption of AI PCs continues to grow, we can expect to see a future where intelligent computing becomes the new norm, transforming the way we live, work, and connect with the world around us.

UK Government’s New AI System to Monitor Bank Accounts

The UK’s Department for Work and Pensions (DWP) is gearing up to deploy an advanced AI system aimed at detecting fraud and overpayments in social security benefits. The system will scrutinise millions of bank accounts, including those receiving state pensions and Universal Credit. This move comes as part of a broader effort to crack down on individuals either mistakenly or intentionally receiving excessive benefits.

Despite the government's intentions to curb fraudulent activities, the proposed measures have sparked significant backlash. More than 40 organisations, including Age UK and Disability Rights UK, have voiced their concerns, labelling the initiative as "a step too far." These groups argue that the planned mass surveillance of bank accounts poses serious threats to privacy, data protection, and equality.

Under the proposed Data Protection and Digital Information Bill, banks would be mandated to monitor accounts and flag any suspicious activities indicative of fraud. However, critics contend that such measures could set a troubling precedent for intrusive financial surveillance, affecting around 40% of the population who rely on state benefits. Furthermore, these powers extend to scrutinising accounts linked to benefit claims, such as those of partners, parents, and landlords.

In response to the mounting criticism, the DWP emphasised that the new system does not grant them direct access to individuals' bank accounts or allow monitoring of spending habits. Nevertheless, concerns persist regarding the broad scope of the surveillance, which would entail algorithmic scanning of bank and third-party accounts without prior suspicion of fraudulent behaviour.

The joint letter from advocacy groups highlights the disproportionate nature of the proposed powers and their potential impact on privacy rights. They argue that the sweeping surveillance measures could infringe upon individual liberties and exacerbate existing inequalities within the welfare system.

As the debate rages on, stakeholders are calling for greater transparency and safeguards to prevent misuse of the AI-powered monitoring system. Advocates stress the need for a balanced approach that addresses fraud while upholding fundamental rights to privacy and data protection.

While the DWP asserts that the measures are necessary to combat fraud, critics argue that they represent a disproportionate intrusion into individuals' financial privacy. As the discourse takes shape, the situation underscores the importance of striking a balance between combating fraud and safeguarding civil liberties in the digital sphere.


Hays Research Reveals Increasing AI Adoption in Scottish Workplaces


Artificial intelligence (AI) tool adoption in Scottish companies has significantly increased, according to a new survey by recruitment firm Hays. The study, which is based on a poll with almost 15,000 replies from professionals and employers—including 886 from Scotland—shows a significant rise in the percentage of companies using AI in their operations over the previous six months, from 26% to 32%.

Mixed Attitudes Toward the Impact of AI on Jobs

Despite the upsurge in AI technology, the study reveals that professionals have differing opinions on how AI will affect their jobs. Although 80% of Scottish professionals do not currently use AI in their work, 21% think that AI tools will improve their ability to do their jobs. Interestingly, over the past six months, the percentage of professionals expecting a negative impact has dropped from 12% to 6%.

However, the study does indicate concern among employees, with 61% believing that their companies are not doing enough to prepare them for the expanding use of AI in the workplace. This trend raises questions about the workforce's readiness to adopt and take full advantage of AI technologies. Justin Black, a Hays business director focused on technology, stresses the value of giving people enough training opportunities to advance their skills and become proficient with new technologies.

Barriers to AI Adoption 

One of the notable challenges impeding the mass adoption of AI is the reluctance of enterprises to expose their data and intellectual property to AI systems, citing concerns linked to compliance with the General Data Protection Regulation (GDPR). This reluctance is also influenced by concerns about trust. According to Black, the demand for AI capabilities has outpaced the supply of skilled professionals in the sector, highlighting a skills deficit in the AI space.

Businesses are wary of the possible dangers of exposing sensitive data and intellectual property to AI systems, and professionals' scepticism about the security and dependability of those systems adds to the trust issues.

The study suggests that, given AI's growing role as a crucial element of Scottish workplaces, employers should prioritize tackling skills shortages, encouraging employee readiness, and improving communication about AI integration. By doing so, businesses can ease GDPR and trust concerns while fostering an atmosphere in which employees can take full advantage of AI technology's benefits.

Here's How Quantum Computing Can Help Safeguard the Future of AI Systems


Artificial intelligence algorithms are rapidly entering our daily lives. Machine learning already is, or soon will be, the foundation of many systems that demand high levels of security, including robotics, autonomous vehicles, banking, facial recognition, and military targeting software.

This poses a crucial question: how resistant are these machine learning algorithms to adversarial attacks?

Security experts believe that incorporating quantum computing into machine learning models may produce new algorithms that are highly resistant to adversarial attacks.

The risks of data manipulation attacks

For certain tasks, machine learning algorithms can be extremely precise and effective; they are particularly helpful for classifying and locating visual features. But they are also quite susceptible to data manipulation attacks, which can pose serious security risks.

Data manipulation attacks, which involve very subtle alterations to image data, can be conducted in various ways. An attack may be mounted by introducing erroneous data into the dataset used to train an algorithm, causing it to learn incorrect associations. Manipulated data can also be introduced during the testing phase (after training is complete), in situations where the AI system continues to train its underlying algorithms while in use.
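To make the training-time case concrete, here is a minimal sketch of a label-flipping poisoning attack. The synthetic dataset, logistic regression model, and 30% poisoning rate are illustrative assumptions, not details from the research discussed here.

```python
# Minimal label-flipping poisoning sketch (illustrative assumptions:
# synthetic data, logistic regression, 30% of training labels flipped).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate, rng):
    """Flip a fraction of the training labels so the model learns from corrupted data."""
    labels = labels.copy()
    n_poison = int(rate * len(labels))
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    labels[idx] = 1 - labels[idx]  # binary labels: swap 0 <-> 1
    return labels

rng = np.random.default_rng(0)
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_labels = poison_labels(y_train, rate=0.3, rng=rng)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_labels)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Note that the code that trains the model is untouched; only the data is corrupted, which is precisely what makes this kind of attack difficult to detect.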

Such attacks can even be mounted from the physical world. Someone might apply a sticker to a stop sign to trick a self-driving car's artificial intelligence into reading it as a speed limit sign, or soldiers on the front lines might wear clothing that makes them appear to AI-based drones as natural terrain features. In any case, data manipulation attacks can have serious repercussions.

For instance, a self-driving car that relies on a compromised machine learning algorithm may conclude that there are no people on the road when, in reality, there are.
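The test-time counterpart can be sketched just as simply. The toy example below perturbs a single input just enough to push it across a linear classifier's decision boundary; the dataset, model, and perturbation rule are illustrative assumptions, not a reconstruction of any real-world attack.

```python
# Minimal evasion-attack sketch on a linear classifier: a small, targeted
# perturbation flips the prediction (an FGSM-like step for a linear model).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w = model.coef_[0]                    # weights defining the decision boundary
f = model.decision_function([x])[0]   # signed score of the original input

# Move every feature slightly against the sign of the score, scaled just
# enough to cross the decision boundary.
eps = 1.1 * abs(f) / np.abs(w).sum()
x_adv = x - np.sign(f) * eps * np.sign(w)

print("original prediction: ", model.predict([x])[0])
print("perturbed prediction:", model.predict([x_adv])[0])
print("per-feature perturbation:", eps)
```

Because each feature changes by only a small amount, the perturbed input can look essentially unchanged to a human while the model's output flips, which is the same principle as the stop-sign sticker.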

What role quantum computing can play 

In this article, we discuss how integrating quantum computing with machine learning could produce secure algorithms known as quantum machine learning models. These algorithms would be carefully designed to exploit unique quantum properties so that they detect patterns in image data that are difficult to manipulate, yielding resilient models that are secure against even strong attacks. Furthermore, they would not require the expensive "adversarial training" currently used to teach algorithms to fend off such attacks. Quantum machine learning may also offer faster algorithmic training and higher feature accuracy.

So how would it function?

The smallest unit of data that modern classical computers can handle is the "bit", stored and processed as a binary digit. Traditional computers, which adhere to the principles of classical physics, represent bits as 0s and 1s. Quantum computing, on the other hand, follows the rules of quantum physics. Quantum computers use quantum bits, or qubits, to store and process information, and a qubit can be 0, 1, or both 0 and 1 simultaneously.
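In standard quantum notation (a textbook illustration rather than something from the original article), a qubit's state is written as a superposition of the two basis states:

```latex
\[
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|²; a classical bit is simply the special case in which one of the two amplitudes is zero.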

A quantum system is in a superposition state when it exists in several states simultaneously, and quantum computers make it possible to create algorithms that take advantage of this property. Although employing quantum computing to protect machine learning models has tremendous potential advantages, it could also have drawbacks.

On the one hand, quantum machine learning models will offer vital security for a wide range of sensitive applications. Quantum computers, on the other hand, might be utilised to develop powerful adversarial attacks capable of readily misleading even the most advanced traditional machine learning models. Moving forward, we'll need to think carefully about the best ways to defend our systems; an attacker with early quantum computers would pose a substantial security risk. 

Obstacles to overcome

Due to constraints in the present generation of quantum processors, current research suggests that practical quantum machine learning is still a few years away.

Today's quantum computers are relatively small (fewer than 500 qubits) and have substantial error rates. Errors can occur for a variety of reasons, including imperfect qubit manufacture, flaws in control circuitry, or information loss (referred to as "quantum decoherence") caused by interaction with the environment.

Nonetheless, considerable progress in quantum hardware and software has been made in recent years. According to recent quantum hardware roadmaps, quantum devices built in the coming years are expected to include hundreds to thousands of qubits. 

These devices should be able to run sophisticated quantum machine learning models to help secure a wide range of sectors that rely on machine learning and AI tools. Governments and the commercial sector alike are increasing their investments in quantum technology around the world. 

This month, the Australian government unveiled the National Quantum Strategy, which aims to expand the country's quantum sector and commercialise quantum technology. According to the CSIRO, Australia's quantum sector could be worth A$2.2 billion by 2030.