Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label AI technology. Show all posts

Creating a Strong Cybersecurity Culture: The Key to Business Resilience

 

In today’s fast-paced digital environment, businesses face an increasing risk of cyber threats. Establishing a strong cybersecurity culture is essential to protecting sensitive information, maintaining operations, and fostering trust with clients. Companies that prioritize cybersecurity awareness empower employees to play an active role in safeguarding data, creating a safer and more resilient business ecosystem. 

A cybersecurity-aware culture is about more than just protecting networks and systems; it’s about ensuring that every employee understands their role in preventing cyberattacks. The responsibility for data security has moved beyond IT departments to involve everyone in the organization. Even with robust technology, a single mistake—such as clicking a phishing link—can lead to severe consequences. Therefore, educating employees about potential threats and how to mitigate them is crucial. 

As technology becomes increasingly integrated into business operations, security measures must evolve to address emerging risks. The importance of cybersecurity awareness cannot be overstated. Just as you wouldn’t leave your home unsecured, companies must ensure their employees recognize the value of safeguarding corporate information. Awareness training helps employees understand that protecting company data also protects their personal digital presence. This dual benefit motivates individuals to remain vigilant, both professionally and personally. Regular cybersecurity training programs, designed to address threats like phishing, malware, and weak passwords, are critical. Studies show that such initiatives significantly reduce the likelihood of successful attacks. 

In addition to training, consistent reminders throughout the year help reinforce cybersecurity principles. Simulated phishing exercises, for instance, teach employees to identify suspicious emails by looking for odd sender addresses, unusual keywords, or errors in grammar. Encouraging the use of strong passwords and organizing workshops to discuss evolving threats also contribute to a secure environment. Organizations that adopt these practices often see measurable improvements in their overall cybersecurity posture. Artificial intelligence (AI) has emerged as a powerful tool for cybersecurity, offering faster and more accurate threat detection. 

However, integrating AI into a security strategy requires careful consideration. AI systems must be managed effectively to avoid introducing new vulnerabilities. Furthermore, while AI excels at monitoring and detection, foundational cybersecurity knowledge among employees remains essential. A well-trained workforce can address risks independently, ensuring that AI complements human efforts rather than replacing them. Beyond internal protections, cybersecurity also plays a vital role in maintaining customer trust. Clients want to know their data is secure, and any breach can severely harm a company’s reputation. 

For example, a recent incident involving CrowdStrike revealed how technical glitches can escalate into major phishing attacks, eroding client confidence. Establishing a clear response strategy and fostering a culture of accountability help organizations manage such crises effectively. 

A robust cybersecurity culture is essential for modern businesses. By equipping employees with the tools and knowledge to identify and respond to threats, organizations not only strengthen their defenses but also enhance trust with customers. This proactive approach is key to navigating today’s complex digital landscape with confidence and resilience.

Reboot Revolution Protecting iPhone Users

 


Researchers at the University of Michigan (UMI) believe that Apple's new iPhone software has a novel security feature. It presents that the feature may automatically reboot the phone if it has been unlocked for 72 hours without being unlocked. 

As 404 Media reported later, a new technology called "inactivity reboot" was introduced in iOS 18.1, which forces devices to restart if their inactivity continues for more than a given period.  Aside from the Inactivity Reboot feature, Apple continues to enhance its security framework with additional features as part of its ongoing security enhancements. Stolen Data Protection is one of the features introduced in iOS 17.3. It allows the device to be protected against theft by requiring biometric authentication (Face ID or Touch ID) before allowing it to change key settings. 

There are various methods to ensure that a stolen device is unable to be reconfigured easily, including this extra layer of security. With the upcoming iOS 18.2 update, Apple intends to take advantage of a feature called Stolen Data Protection, which is set to be turned off by default to avoid confusing users. However, Apple plans to encourage users to enable it when setting up their devices or after a factory reset to maintain an optimal user experience. 

As a result, users will be able to have more control over the way their personal information is protected. Apple has quietly introduced a new feature to its latest iPhone update that makes it even harder for anyone to unlock a device without consent—whether they are thieves or law enforcement officers. With this inactivity reboot feature, Apple has made unlocking even more difficult for anyone. When an iPhone has been asleep or in lock mode for an extended period, a new feature is introduced with iOS 18.1 will automatically reboot it in addition to turning it off. 

A common problem with iPhones is that once they have been rebooted, they become more difficult to crack since either a passcode or biometric signature is required to unlock them. According to the terms of the agreement, the primary objective of this measure is to prevent thieves (or police officers) from hacking into smartphones and potentially accessing data on them. There is a new "inactivity reboot" feature included in iOS 18 that, according to experts who spoke to 404 Media, will restart the device after approximately four days of dormancy if no activity is made.

A confirmation of this statement was provided by Magnet Forensics' Christopher Vance in a law enforcement group chat as described in Magnet Forensics' Christopher Vance, who wrote that iOS 18.1 has a timer which runs out after a set amount of time, and the device then reboots, moving from an AFU (After First Unlock) state to a BFU (Before First Unlock) state at the end of this timer. According to 404 Media, it seems that the issue was discovered after officers from the Detroit Police Department found the feature while investigating a crime scene in Detroit, Michigan.

When officers were working on iPhones for forensic purposes in the course of their investigation, they noticed that they automatically rebooted themselves frequently, which made it more difficult for them to unlock and access the devices. As soon as the devices were disconnected from a cellular network for some time, the working theory was that the phones would reboot when they were no longer connected to the network.  

However, there are actually much simpler explanations that can be provided for this situation. The feature, which AppleInsider refers to as an inactivity reboot, is not based on the current network connection or the state of the battery on the phone, which are factors that may affect the reboot timer. The reboot typically occurs after a certain amount of time has elapsed -- somewhere around 96 hours in most cases.  Essentially, the function of this timer is identical to the Mac's hibernation mode, which is intended to put the computer to sleep as a precaution in case there is a power outage or the battery is suddenly discharged. 

During the BFU state of the iPhone, all data on the iPhone belongs to the user and is fully encrypted, and is nearly impossible for anyone to access, except a person who knows the user's passcode to be able to get into the device. However, when the phone is in a state known as "AFU", certain data can be extracted by some device forensic tools, even if the phone is locked, since it is unencrypted and is thus easier to access and extract.  

According to Tihmstar, an iPhone security researcher on TechCrunch, the iPhones in these two states are also known as "hot" devices or "cold" devices depending on their temperature.  As a result, Tihmstar was making a point to emphasize that the majority of forensic firms are focusing on "hot" devices in an AFU state as they can verify that the user entered the correct passcode in the iPhone's secure enclave at some point. A "cold" device, on the other hand, is considerably more difficult to compromise because its memory can not be easily accessed once the device restarts, so there is no easy way to compromise it.

The law enforcement community has consistently opposed and argued against new technology that Apple has implemented to enhance security, arguing that this is making their job more difficult. According to reports, in 2016, the FBI filed a lawsuit against Apple in an attempt to force the company to install a backdoor that would enable it to open a phone owned by a mass shooter. Azimuth Security, an Australian startup, ultimately assisted the FBI in gaining access to the phone through hacking. 

These developments highlight Apple’s ongoing commitment to prioritizing user privacy and data security, even as such measures draw criticism from law enforcement agencies. By introducing features like Inactivity Reboot and Stolen Data Protection, Apple continues to establish itself as a leader in safeguarding personal information against unauthorized access. 

These innovations underscore the broader debate between privacy advocates and authorities over the balance between individual rights and security imperatives in an increasingly digitized world.

Securing Generative AI: Tackling Unique Risks and Challenges

 

Generative AI has introduced a new wave of technological innovation, but it also brings a set of unique challenges and risks. According to Phil Venables, Chief Information Security Officer of Google Cloud, addressing these risks requires expanding traditional cybersecurity measures. Generative AI models are prone to issues such as hallucinations—where the model produces inaccurate or nonsensical content—and the leaking of sensitive information through model outputs. These risks necessitate the development of tailored security strategies to ensure safe and reliable AI use. 

One of the primary concerns with generative AI is data integrity. Models rely heavily on vast datasets for training, and any compromise in this data can lead to significant security vulnerabilities. Venables emphasizes the importance of maintaining the provenance of training data and implementing controls to protect its integrity. Without proper safeguards, models can be manipulated through data poisoning, which can result in the production of biased or harmful outputs. Another significant risk involves prompt manipulation, where adversaries exploit vulnerabilities in the AI model to produce unintended outcomes. 

This can include injecting malicious prompts or using adversarial tactics to bypass the model’s controls. Venables highlights the necessity of robust input filtering mechanisms to prevent such manipulations. Organizations should deploy comprehensive logging and monitoring systems to detect and respond to suspicious activities in real time. In addition to securing inputs, controlling the outputs of AI models is equally critical. Venables recommends the implementation of “circuit breakers”—mechanisms that monitor and regulate model outputs to prevent harmful or unintended actions. This ensures that even if an input is manipulated, the resulting output is still within acceptable parameters. Infrastructure security also plays a vital role in safeguarding generative AI systems. 

Venables advises enterprises to adopt end-to-end security practices that cover the entire lifecycle of AI deployment, from model training to production. This includes sandboxing AI applications, enforcing the least privilege principle, and maintaining strict access controls on models, data, and infrastructure. Ultimately, securing generative AI requires a holistic approach that combines innovative security measures with traditional cybersecurity practices. 

By focusing on data integrity, robust monitoring, and comprehensive infrastructure controls, organizations can mitigate the unique risks posed by generative AI. This proactive approach ensures that AI systems are not only effective but also safe and trustworthy, enabling enterprises to fully leverage the potential of this groundbreaking technology while minimizing associated risks.

Microsoft and Salesforce Clash Over AI Autonomy as Competition Intensifies

 

The generative AI landscape is witnessing fierce competition, with tech giants Microsoft and Salesforce clashing over the best approach to AI-powered business tools. Microsoft, a significant player in AI due to its collaboration with OpenAI, recently unveiled “Copilot Studio” to create autonomous AI agents capable of automating tasks in IT, sales, marketing, and finance. These agents are meant to streamline business processes by performing routine operations and supporting decision-making. 

However, Salesforce CEO Marc Benioff has openly criticized Microsoft’s approach, likening Copilot to “Clippy 2.0,” referencing Microsoft’s old office assistant software that was often ridiculed for being intrusive. Benioff claims Microsoft lacks the data quality, enterprise security, and integration Salesforce offers. He highlighted Salesforce’s Agentforce, a tool designed to help enterprises build customized AI-driven agents within Salesforce’s Customer 360 platform. According to Benioff, Agentforce handles tasks autonomously across sales, service, marketing, and analytics, integrating large language models (LLMs) and secure workflows within one system. 

Benioff asserts that Salesforce’s infrastructure is uniquely positioned to manage AI securely, unlike Copilot, which he claims may leak sensitive corporate data. Microsoft, on the other hand, counters that Copilot Studio empowers users by allowing them to build custom agents that enhance productivity. The company argues that it meets corporate standards and prioritizes data protection. The stakes are high, as autonomous agents are projected to become essential for managing data, automating operations, and supporting decision-making in large-scale enterprises. 

As AI tools grow more sophisticated, both companies are vying to dominate the market, setting standards for security, efficiency, and integration. Microsoft’s focus on empowering users with flexible AI tools contrasts with Salesforce’s integrated approach, which centers on delivering a unified platform for AI-driven automation. Ultimately, this rivalry is more than just product competition; it reflects two different visions for how AI can transform business. While Salesforce focuses on integrated security and seamless data flows, Microsoft is emphasizing adaptability and user-driven AI customization. 

As companies assess the pros and cons of each approach, both platforms are poised to play a pivotal role in shaping AI’s impact on business. With enterprises demanding robust, secure AI solutions, the outcomes of this competition could influence AI’s role in business for years to come. As these AI leaders continue to innovate, their differing strategies may pave the way for advancements that redefine workplace automation and decision-making across the industry.

The Growing Role of AI in Ethical Hacking: Insights from Bugcrowd’s 2024 Report

Bugcrowd’s annual “Inside the Mind of a Hacker” report for 2024 reveals new trends shaping the ethical hacking landscape, with an emphasis on AI’s role in transforming hacking tactics. Compiled from feedback from over 1,300 ethical hackers, the report explores how AI is rapidly becoming an integral tool in cybersecurity, shifting from simple automation to advanced data analysis. 

This year, a remarkable 71% of hackers say AI enhances the value of hacking, up from just 21% last year, highlighting its growing significance. For ethical hackers, data analysis is now a primary AI use case, surpassing task automation. With 74% of participants agreeing that AI makes hacking more accessible, new entrants are increasingly using AI-powered tools to uncover vulnerabilities in systems and software. This is a positive shift, as these ethical hackers disclose security flaws, allowing companies to strengthen their defenses before malicious actors can exploit them. 

However, it also means that criminal hackers are adopting AI in similar ways, creating both opportunities and challenges for cybersecurity. Dave Gerry, Bugcrowd’s CEO, emphasizes that while AI-driven threats evolve rapidly, ethical hackers are equally using AI to refine their methods. This trend is reshaping traditional cybersecurity strategies as hackers move toward more sophisticated, AI-enhanced approaches. While AI offers undeniable benefits, the security risks are just as pressing, with 81% of respondents recognizing AI as a significant potential threat. The report also underscores a key insight: while AI can complement human capabilities, it cannot fully replicate them. 

For example, only a minority of hackers surveyed felt that AI could surpass their skills or creativity. These findings suggest that while AI contributes to hacking, human insight remains crucial, especially in complex problem-solving and adaptive thinking. Michael Skelton, Bugcrowd’s VP of security, further notes that AI’s role in hardware hacking, a specialized niche, has expanded as Internet of Things (IoT) devices proliferate. AI helps identify tiny vulnerabilities in hardware that human hackers might overlook, such as power fluctuations and unusual electromagnetic signals. As AI reshapes the ethical hacking landscape, Bugcrowd’s report concludes with both a call to action and a note of caution. 

While AI offers valuable tools for ethical hackers, it equally empowers cybercriminals, accelerating the development of sophisticated, AI-driven attacks. This dual use highlights the importance of responsible, proactive cybersecurity practices. By leveraging AI to protect systems while staying vigilant against AI-fueled cyber threats, the hacking community can help guide the broader industry toward safer, more secure digital environments.

OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

Downside of Tech: Need for Upgraded Security Measures Amid AI-driven Cyberattacks


Technological advancements have brought about an unparalleled transformation in our lives. However, the flip side to this progress is the escalating threat posed by AI-driven cyberattacks

Rising AI Threats

Artificial intelligence, once considered a tool for enhancing security measures, has become a threat. Cybercriminals are leveraging AI to orchestrate more sophisticated and pervasive attacks. AI’s capability to analyze vast amounts of data at lightning speed, identify vulnerabilities, and execute attacks autonomously has rendered traditional security measures obsolete. 

Sneha Katkar from Quick Heal notes, “The landscape of cybercrime has evolved significantly with AI automating and enhancing these attacks.”

Rising AI Threats

From January to April 2024, Indians lost about Rs 1,750 crore to fraud, as reported by the Indian Cybercrime Coordination Centre. Cybercrime has led to major financial setbacks for both people and businesses, with phishing, ransomware, and online fraud becoming more common.

As AI technology advances rapidly, there are rising concerns about its ability to boost cyberattacks by generating more persuasive phishing emails, automating harmful activities, and creating new types of malware.

Cybercriminals employed AI-driven tools to bypass security protocols, resulting in the compromise of sensitive data. Such incidents underscore the urgent need for upgraded security frameworks to counter these advanced threats.

The rise of AI-powered malware and ransomware is particularly concerning. These malicious programs can adapt, learn, and evolve, making them harder to detect and neutralize. Traditional antivirus software, which relies on signature-based detection, is often ineffective against such threats. As Katkar pointed out, “AI-driven cyberattacks require an equally sophisticated response.”

Challenges in Addressing AI

One of the critical challenges in combating AI-driven cyberattacks is the speed at which these attacks can be executed. Automated attacks can be carried out in a matter of minutes, causing significant damage before any countermeasures can be deployed. This rapid execution leaves organizations with little time to react, highlighting the need for real-time threat detection and response systems.

Moreover, the use of AI in phishing attacks has added a new layer of complexity. Phishing emails generated by AI can mimic human writing styles, making them indistinguishable from legitimate communications. This sophistication increases the likelihood of unsuspecting individuals falling victim to these scams. Organizations must therefore invest in advanced AI-driven security solutions that can detect and mitigate such threats.

India Disconnects 1.77 Crore Mobile Connections Using AI Tools, Blocks 45 Lakh Spoofed Calls

 

The Indian government has disconnected over 1.77 crore mobile connections registered with fake or forged documents using AI-powered tools, according to a recent announcement by the Department of Telecommunications (DoT). The AI-based system has identified and blocked 45 lakh spoofed international calls, preventing them from entering the Indian telecom network. This initiative is part of a larger effort to curb telecom fraud and cybercrime, with four telecom service providers collaborating with DoT to implement a more advanced two-phase system. 

In the first phase, the system blocks spoofed calls at the telecom service provider (TSP) level, targeting those that attempt to use numbers from the provider’s own subscribers. In the second phase, a centralized system will be introduced to tackle spoofed calls using numbers from other TSPs, thereby ensuring more comprehensive protection. The Ministry of Communications announced that this centralized system is expected to be operational soon, enhancing the safety of Indian telecom subscribers. Alongside these efforts, the government has disconnected 33.48 lakh mobile connections and blocked 49,930 mobile handsets, particularly in districts considered to be cybercrime hotspots. About 77.61 lakh mobile connections exceeding the prescribed limits for individuals were also deactivated. 

The AI tools have further enabled the identification and blocking of 2.29 lakh mobile phones involved in fraudulent activities or cybercrime. Additionally, the DoT traced 12.02 lakh out of 21.03 lakh reported stolen or lost mobile phones. It also blocked 32,000 SMS headers, 2 lakh SMS templates, and 20,000 entities engaged in malicious messaging activities, preventing cybercriminals from sending fraudulent SMSs. Approximately 11 lakh accounts linked to fraudulent mobile connections have been frozen by banks and payment wallets, while WhatsApp has deactivated 11 lakh profiles associated with these numbers. 

In an effort to curb the sale of SIM cards issued with fake documents, 71,000 Point of Sale (SIM agents) have been blacklisted, and 365 FIRs have been filed. These measures represent a significant crackdown on telecom-related cybercrime, demonstrating the government’s commitment to making India’s telecom sector more secure through the use of advanced technology. The upcoming centralized system will further bolster this security, as it will address spoofed calls from all telecom providers.

Harvard Student Uses Meta Ray-Ban 2 Glasses and AI for Real-Time Data Scraping

A recent demonstration by Harvard student AnhPhu Nguyen using Meta Ray-Ban 2 smart glasses has revealed the alarming potential for privacy invasion through advanced AI-powered facial recognition technology. Nguyen’s experiment involved using these $379 smart glasses, equipped with a livestreaming feature, to capture faces in real-time. He then employed publicly available software to scan the internet for more images and data related to the individuals in view. 

By linking facial recognition data with databases such as voter registration records and other publicly available sources, Nguyen was able to quickly gather sensitive personal information like names, addresses, phone numbers, and even social security numbers. This process takes mere seconds, thanks to the integration of an advanced Large Language Model (LLM) similar to ChatGPT, which compiles the scraped data into a comprehensive profile and sends it to Nguyen’s phone. Nguyen claims his goal is not malicious, but rather to raise awareness about the potential threats posed by this technology. 

To that end, he has even shared a guide on how to remove personal information from certain databases he used. However, the effectiveness of these solutions is minimal compared to the vast scale of potential privacy violations enabled by facial recognition software. In fact, the concern over privacy breaches is only heightened by the fact that many databases and websites have already been compromised by bad actors. Earlier this year, for example, hackers broke into the National Public Data background check company, stealing information on three billion individuals, including every social security number in the United States. 

 This kind of privacy invasion will likely become even more widespread and harder to avoid as AI systems become more capable. Nguyen’s experiment demonstrated how easily someone could exploit a few small details to build trust and deceive people in person, raising ethical and security concerns about the future of facial recognition and data gathering technologies. While Nguyen has chosen not to release the software he developed, which he has dubbed “I-Xray,” the implications are clear. 

If a college student can achieve this level of access and sophistication, it is reasonable to assume that similar, if not more invasive, activities could already be happening on a much larger scale. This echoes the privacy warnings raised by whistleblowers like Edward Snowden, who have long warned of the hidden risks and pervasive surveillance capabilities in the digital age.

Cyber Resilience: Preparing for the Inevitable in a New Era of Cybersecurity

 

At the TED Conference in Vancouver this year, the Radical Innovators foundation brought together over 60 of the world’s leading CHROs, CIOs, and founders to discuss how emerging technologies like AI and quantum computing can enhance our lives. Despite the positive focus, the forum also addressed a more concerning topic: how these same technologies could amplify cybersecurity threats. Jeff Simon, CISO of T-Mobile, led a session on the future of security, engaging tech executives on the growing risks. 

The urgency of this discussion was underscored by alarming data from Proofpoint, which showed that 94% of cloud customers faced cyberattacks monthly in 2023, with 62% suffering breaches. This illustrates the increased risk posed by emerging technologies in the wrong hands. The sentiment from attendees was clear: successful cyberattacks are now inevitable, and the traditional focus on preventing breaches is no longer sufficient. Ajay Waghray, CIO of PG&E Corporation, emphasized a shift in mindset, suggesting that organizations must operate under the assumption that their systems are already compromised. 

He proposed a new approach centered around “cyber resilience,” which goes beyond stopping breaches to maintaining business continuity and strengthening organizational resilience during and after attacks. The concept of cyber resilience aligns with lessons learned during the pandemic, where resilience was about not just recovery, but coming back stronger. Bipul Sinha, CEO of Rubrik, a leading cyber resilience firm, believes organizations must know where sensitive data resides and evolve security policies to stay ahead of future threats. He argues that preparedness, including preemptive planning and strategic evolution after an attack, is crucial for continued business operations. 

Venture capital firms like Lightspeed Venture Partners are also recognizing this shift towards cyber resilience. Co-founder Ravi Mhatre highlights the firm’s investments in companies like Rubrik, Wiz, and Arctic Wolf, which focus on advanced threat mitigation and containment. Mhatre believes that cybersecurity now requires a more dynamic approach, moving beyond the idea of a strong perimeter to embrace evolutionary thinking. Waghray identifies four core elements of a cyber resilience strategy: planning, practice, proactive detection, and partnerships. 

These components serve as essential starting points for companies looking to adopt a cyber resilience posture, ensuring they are prepared to adapt, respond, and recover from the inevitable cyber threats of the future.

AI System Optimise Could Help GPs Identify High-Risk Heart Patients

 

Artificial intelligence (AI) is proving to be a game-changer in healthcare by helping general practitioners (GPs) identify patients who are most at risk of developing conditions that could lead to severe heart problems. Researchers at the University of Leeds have contributed to training an AI system called Optimise, which analyzed the health records of more than two million people. The AI was designed to detect undiagnosed conditions and identify individuals who had not received appropriate medications to help reduce their risk of heart-related issues. 

From the two million health records it scanned, Optimise identified over 400,000 people at high risk for serious conditions such as heart failure, stroke, and diabetes. This group represented 74% of patients who ultimately died from heart-related complications, underscoring the critical need for early detection and timely medical intervention. In a pilot study involving 82 high-risk patients, the AI found that one in five individuals had undiagnosed moderate to high-risk chronic kidney disease. 

Moreover, more than half of the patients with high blood pressure were prescribed new medications to better manage their risk of heart problems. Dr. Ramesh Nadarajah, a health data research fellow from the University of Leeds, noted that deaths related to heart conditions are often caused by a constellation of factors. According to him, Optimise leverages readily available data to generate insights that could assist healthcare professionals in delivering more effective and timely care to their patients. Early intervention is often more cost-effective than treating advanced diseases, making the use of AI a valuable tool for both improving patient outcomes and optimizing healthcare resources. 

The study’s findings suggest that using AI in this way could allow doctors to treat patients earlier, potentially reducing the strain on the NHS. Researchers plan to carry out a larger clinical trial to further test the system’s capabilities. The results were presented at the European Society of Cardiology Congress in London. It was pointed out by Professor Bryan Williams that a quarter of all deaths in the UK are due to heart and circulatory diseases. This innovative study harnesses the power of evolving AI technology to detect a range of conditions that contribute to these diseases, offering a promising new direction in medical care.

Project Strawberry: Advancing AI with Q-learning, A* Algorithms, and Dual-Process Theory

Project Strawberry, initially known as Q*, has quickly become a focal point of excitement and discussion within the AI community. The project aims to revolutionize artificial intelligence by enhancing its self-learning and reasoning capabilities, crucial steps toward achieving Artificial General Intelligence (AGI). By incorporating advanced algorithms and theories, Project Strawberry pushes the boundaries of what AI can accomplish, making it a topic of intense interest and speculation. 

At the core of Project Strawberry are several foundational algorithms that enable AI systems to learn and make decisions more effectively. The project utilizes Q-learning, a reinforcement learning technique that allows AI to determine optimal actions through trial and error, helping it navigate complex environments. Alongside this, the A* search algorithm provides efficient pathfinding capabilities, ensuring AI can find the best solutions to problems quickly and accurately. 

Additionally, the dual-process theory, inspired by human cognitive processes, is used to balance quick, intuitive judgments with thorough, deliberate analysis, enhancing decision-making abilities. Despite the project’s promising advancements, it also raises several concerns. One of the most significant risks involves encryption cracking, where advanced AI could potentially break encryption codes, posing a severe security threat. 

Furthermore, the issue of “AI hallucinations”—errors in AI outputs—remains a critical challenge that needs to be addressed to ensure accurate and trustworthy AI responses. Another concern is the high computational demands of Project Strawberry, which may lead to increased costs and energy consumption. Efficient resource management and optimization will be crucial to maintaining the project’s scalability and sustainability. The ultimate goal of Project Strawberry is to pave the way for AGI, where AI systems can perform any intellectual task a human can. 

Achieving AGI would revolutionize problem-solving across various fields, enabling AI to tackle long-term and complex challenges with advanced reasoning capabilities. OpenAI envisions developing “reasoners” that exhibit human-like intelligence, pushing the frontiers of AI research even further. While Project Strawberry represents a significant step forward in AI development, it also presents complex challenges that must be carefully navigated. 

The project’s potential has fueled widespread excitement and anticipation within the AI community, with many eagerly awaiting further updates and breakthroughs. As OpenAI continues to refine and develop Project Strawberry, it could set the stage for a new era in AI, bringing both remarkable possibilities and significant responsibilities.

Snowflake Faces Declining Growth Amid Cybersecurity Concerns and AI Expansion

 

Snowflake Inc. recently faced a challenging earnings period marked by slowing growth and concerns following multiple cyberattacks. Despite being an AI data company with innovative technology, these events have impacted investor confidence, causing the stock price to retest recent lows. The company’s latest financial results reflect a continuing trend of decelerating growth, which is compounded by a valuation that assumes far higher growth rates than currently achieved.  

Snowflake’s sales growth has slowed considerably, with its FQ2 revenue growing by just under 29%, down from nearly 33% in the previous quarter. Projections for FQ3 suggest an even sharper decline, with product revenue growth forecasted to rise by only 22% year-over-year. The slowdown in revenue is significant, with growth rates expected to dip to as low as 20% in FQ4. In past quarters, Snowflake experienced higher sequential growth on a much smaller base, indicating that the company’s growth challenges are becoming more pronounced as it scales. The deceleration in sales has not been mitigated by the company’s focus on AI. During the earnings call, Snowflake highlighted the adoption of AI technologies among its 2,500 customers. 

However, these new product features, such as those centered around AI products like Cortex, are not expected to materially impact revenues in the near term. Snowflake’s guidance for FY 2025 does not factor in any significant contributions from these AI initiatives, further dampening expectations for a quick turnaround. Snowflake’s recent performance is further complicated by lingering cybersecurity issues. The company faced a series of cyberattacks where customer data stored on their platforms was compromised, partly due to inadequate sign-on controls by customers. Additionally, the recent CrowdStrike (CRWD) cybersecurity incident has only added to investor concerns about the company’s data security posture. 

Despite the concerns, Snowflake points to growth in remaining performance obligations (RPOs), with commitments reaching $5.2 billion, a 48% increase. Yet, management admits that RPOs may not be the best leading indicator for growth, given that product revenue is declining. The company also contends with multiple top customers operating on flexible, month-to-month contracts, which creates uncertainty in long-term revenue projections. Snowflake remains priced for perfection, trading at 12 times its FY25 revenue target of $3.5 billion, with a fully diluted market cap of $41.4 billion. However, the stock price has already fallen nearly 50% this year, and non-GAAP gross margins are slim, sitting at just 5% in the most recent quarter. 

While Snowflake generates significant free cash flow due to upfront customer payments, it also carries future obligations, further straining its financial outlook. The key takeaway for investors is that while Snowflake continues to innovate in AI and data management, it faces substantial headwinds due to slowing growth, cybersecurity concerns, and a valuation that does not reflect current market realities. Given these factors, potential investors might be wise to stay on the sidelines until there is clearer evidence of a turnaround in the company’s growth trajectory.

How AI and Machine Learning Are Revolutionizing Cybersecurity

 

The landscape of cybersecurity has drastically evolved over the past decade, driven by increasingly sophisticated and costly cyberattacks. As more businesses shift online, they face growing threats, creating a higher demand for innovative cybersecurity solutions. The rise of AI and machine learning is reshaping the cybersecurity industry, offering powerful tools to combat these modern challenges. 

AI and machine learning, once seen as futuristic technologies, are now integral to cybersecurity. By processing vast amounts of data and identifying patterns at incredible speeds, these technologies surpass human capabilities, providing a new level of protection. Traditional cybersecurity methods relied heavily on human expertise and signature-based detection, which were effective in the past. However, with the increasing complexity of cybercrime, AI offers a significant advantage by enabling faster and more accurate threat detection and response. Machine learning is the engine driving AI-powered cybersecurity solutions. 

By feeding large datasets into algorithms, machine learning models can uncover hidden patterns and predict potential threats. This ability allows AI to detect unknown risks and anticipate future attacks, significantly enhancing the effectiveness of cybersecurity measures. AI-powered systems can mimic human thought processes to some extent, enabling them to learn from experience, adapt to new challenges, and make real-time decisions. These systems can block malicious traffic, quarantine files, and even take independent actions to counteract threats, all without human intervention. By analyzing vast amounts of data rapidly, AI can identify patterns and predict potential cyberattacks. This proactive approach allows security teams to defend against threats before they escalate, reducing the risk of damage. 

Additionally, AI can automate incident response, acting swiftly to detect breaches and contain damage, often faster than any human could. AI also plays a crucial role in hunting down zero-day threats, which are previously unknown vulnerabilities that attackers can exploit before they are patched. By analyzing data for anomalies, AI can identify these vulnerabilities early, allowing security teams to address them before they are exploited. 

Moreover, AI enhances cloud security by analyzing data to detect threats and vulnerabilities, ensuring that businesses can safely transition to cloud-based systems. The integration of AI in various cybersecurity tools, such as Security Orchestration, Automation, and Response (SOAR) platforms and endpoint protection solutions, is a testament to its potential. With AI’s ability to detect and respond to threats faster and more accurately than ever before, the future of cybersecurity looks promising.

Navigating AI and GenAI: Balancing Opportunities, Risks, and Organizational Readiness

 

The rapid integration of AI and GenAI technologies within organizations has created a complex landscape, filled with both promising opportunities and significant challenges. While the potential benefits of these technologies are evident, many companies find themselves struggling with AI literacy, cautious adoption practices, and the risks associated with immature implementation. This has led to notable disruptions, particularly in the realm of security, where data threats, deepfakes, and AI misuse are becoming increasingly prevalent. 

A recent survey revealed that 16% of organizations have experienced disruptions directly linked to insufficient AI maturity. Despite recognizing the potential of AI, system administrators face significant gaps in education and organizational readiness, leading to mixed results. While AI adoption has progressed, the knowledge needed to leverage it effectively remains inadequate. This knowledge gap has decreased only slightly, with 60% of system administrators admitting to a lack of understanding of AI’s practical applications. Security risks associated with GenAI are particularly urgent, especially those related to data. 

With the increased use of AI, enterprises have reported a surge in proprietary source code being shared within GenAI applications, accounting for 46% of all documented data policy violations. This raises serious concerns about the protection of sensitive information in a rapidly evolving digital landscape. In a troubling trend, concerns about job security have led some cybersecurity teams to hide security incidents. The most alarming AI threats include GenAI model prompt hacking, data poisoning, and ransomware as a service. Additionally, 41% of respondents believe GenAI holds the most promise for addressing cyber alert fatigue, highlighting the potential for AI to both enhance and challenge security practices. 

The rapid growth of AI has also put immense pressure on CISOs, who must adapt to new security risks. A significant portion of security leaders express a lack of confidence in their workforce’s ability to identify AI-driven cyberattacks. The overwhelming majority of CISOs have admitted that the rise of AI has made them reconsider their future in the role, underscoring the need for updated policies and regulations to secure organizational systems effectively. Meanwhile, employees have increasingly breached company rules regarding GenAI use, further complicating the security landscape. 

Despite the cautious optimism surrounding AI, there is a growing concern that AI might ultimately benefit malicious actors more than the organizations trying to defend against them. As AI tools continue to evolve, organizations must navigate the fine line between innovation and security, ensuring that the integration of AI and GenAI technologies does not expose them to greater risks.

The Rise of AI: New Cybersecurity Threats and Trends in 2023

 

The rise of artificial intelligence (AI) is becoming a critical trend to monitor, with the potential for malicious actors to exploit the technology as it advances, according to the Cyber Security Agency (CSA) on Tuesday (Jul 30). AI is increasingly used to enhance various aspects of cyberattacks, including social engineering and reconnaissance. 

The CSA’s Singapore Cyber Landscape 2023 report, released on Tuesday, highlights that malicious actors are leveraging generative AI for deepfake scams, bypassing biometric authentication, and identifying vulnerabilities in software. Deepfakes, which use AI techniques to alter or manipulate visual and audio content, have been employed for commercial and political purposes. This year, several Members of Parliament received extortion letters featuring manipulated images, and Senior Minister Lee Hsien Loong warned about deepfake videos misrepresenting his statements on international relations.  

Traditional AI typically performs specific tasks based on predefined data, analyzing and predicting outcomes but not creating new content. This technology can generate new images, videos, and audio, exemplified by ChatGPT, OpenAI’s chatbot. AI has also enabled malicious actors to scale up their operations. The CSA and its partners analyzed phishing emails from 2023, finding that about 13 percent contained AI-generated content, which was grammatically superior and more logically structured. These AI-generated emails aimed to reduce logical gaps and enhance legitimacy by adapting to various tones to exploit a wide range of emotions in victims. 

Additionally, AI has been used to scrape personal identification information from social media profiles and websites, increasing the speed and scale of cyberattacks. The CSA cautioned that malicious actors could misuse legitimate research on generative AI’s negative applications, incorporating these findings into their attacks. The use of generative AI adds a new dimension to cyber threats, making it crucial for individuals and organizations to learn how to detect and respond to such threats. Techniques for identifying deepfakes include evaluating the message, analyzing audio-visual elements, and using authentication tools. 

Despite the growing sophistication of cyberattacks, Singapore saw a 52 percent decline in phishing attempts in 2023 compared to the previous year, contrary to the global trend of rising phishing incidents. However, the number of phishing attempts in 2023 remained 30 percent higher than in 2021. Phishing continues to pose a significant threat, with cybercriminals making their attempts appear more legitimate. In 2023, over a third of phishing attempts used the credible-looking domain “.com” instead of “.xyz,” and more than half of the phishing URLs employed the secure “HTTPS protocol,” a significant increase from 9 percent in 2022. 

The banking and financial services, government, and technology sectors were the most targeted industries in phishing attempts, with 63 percent of the spoofed organizations belonging to the banking and financial services sector. This industry is frequently targeted because it holds sensitive and valuable information, such as personal details and login credentials, which are highly attractive to cybercriminals.

Hacker Breaches OpenAI, Steals Sensitive AI Tech Details


 

Earlier this year, a hacker successfully breached OpenAI's internal messaging systems, obtaining sensitive details about the company's AI technologies. The incident, initially kept under wraps by OpenAI, was not reported to authorities as it was not considered a threat to national security. The breach was revealed through sources cited by The New York Times, which highlighted that the hacker accessed discussions in an online forum used by OpenAI employees to discuss their latest technologies.

The breach was disclosed to OpenAI employees during an April 2023 meeting at their San Francisco office, and the board of directors was also informed. According to sources, the hacker did not penetrate the systems where OpenAI develops and stores its artificial intelligence. Consequently, OpenAI executives decided against making the breach public, as no customer or partner information was compromised.

Despite the decision to withhold the information from the public and authorities, the breach sparked concerns among some employees about the potential risks posed by foreign adversaries, particularly China, gaining access to AI technology that could threaten U.S. national security. The incident also brought to light internal disagreements over OpenAI's security measures and the broader implications of their AI technology.

In the aftermath of the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the company's board of directors. In his memo, Aschenbrenner criticised OpenAI's security measures, arguing that the company was not doing enough to protect its secrets from foreign adversaries. He emphasised the need for stronger security to prevent the theft of crucial AI technologies.

Aschenbrenner later claimed that he was dismissed from OpenAI in the spring for leaking information outside the company, which he argued was a politically motivated decision. He hinted at the breach during a recent podcast, but the specific details had not been previously reported.

In response to Aschenbrenner's allegations, OpenAI spokeswoman Liz Bourgeois acknowledged his contributions and concerns but refuted his claims regarding the company's security practices. Bourgeois stated that OpenAI addressed the incident and shared the details with the board before Aschenbrenner joined the company. She emphasised that Aschenbrenner's separation from the company was unrelated to the concerns he raised about security.

While the company deemed the incident not to be a national security threat, the internal debate it sparked highlights the ongoing challenges in safeguarding advanced technological developments from potential threats.


The Decline of Serverless Computing: Lessons For Enterprises To Learn

In the rapidly changing world of cloud technology, serverless computing, once hailed as a groundbreaking innovation, is now losing its relevance. When it first emerged over a decade ago, serverless computing promised to free developers from managing detailed compute and storage configurations by handling everything automatically at the time of execution. It seemed like a natural evolution from Platform-as-a-Service (PaaS) systems, which were already simplifying aspects of computing. 

Many industry experts and enthusiasts jumped on the serverless bandwagon, predicting it would revolutionize cloud computing. However, some seasoned professionals, wary of the hype, recognized that serverless would play a strategic role rather than be a game-changer. Today, serverless technology is increasingly overshadowed by newer trends and innovations in the cloud marketplace. 

Why Did Serverless Lose Its Shine? 

Initially praised for simplifying infrastructure management and scalability, serverless computing has been pushed to the periphery by the rise of other cloud paradigms, such as edge computing and microclouds. These new paradigms offer more tailored solutions that cater to specific business needs, moving away from the one-size-fits-all approach of serverless computing. One significant factor in the decline of serverless is the explosion of generative AI. 

Cloud providers are heavily investing in AI-driven solutions, which require specialized computing resources and substantial data management capabilities. Traditional serverless models often fall short in meeting these demands, leading companies to opt for more static and predictable solutions. The concept of ubiquitous computing, which involves embedding computation into everyday objects, further exemplifies this shift. This requires continuous, low-latency processing that traditional serverless frameworks might struggle to deliver consistently. As a result, serverless models are increasingly marginalized in favour of more integrated and pervasive computing environments. 

What Can Enterprises Learn? 

For enterprises, the fading prominence of serverless cloud technology signals a need to reassess their technology strategies. Organizations must embrace emerging paradigms like edge computing, microclouds, and AI-driven solutions to stay competitive. 

The rise of AI and ubiquitous computing necessitates specialized computing resources and innovative application designs. Businesses should focus on selecting the right technology stack to meet their specific needs rather than chasing the latest cloud hype. While serverless has played a role in cloud evolution, its impact is limited compared to the newer, more nuanced solutions now available.

Apple's Private Cloud Compute: Enhancing AI with Unparalleled Privacy and Security

 

At Apple's WWDC 2024, much attention was given to its "Apple Intelligence" features, but the company also emphasized its commitment to user privacy. To support Apple Intelligence, Apple introduced Private Cloud Compute (PCC), a cloud-based AI processing system designed to extend Apple's rigorous security and privacy standards to the cloud. Private Cloud Compute ensures that personal user data sent to the cloud remains inaccessible to anyone other than the user, including Apple itself. 

Apple described it as the most advanced security architecture ever deployed for cloud AI compute at scale. Built with custom Apple silicon and a hardened operating system designed specifically for privacy, PCC aims to protect user data robustly. Apple's statement highlighted that PCC's security foundation lies in its compute node, a custom-built server hardware that incorporates the security features of Apple silicon, such as Secure Enclave and Secure Boot. This hardware is paired with a new operating system, a hardened subset of iOS and macOS, tailored for Large Language Model (LLM) inference workloads with a narrow attack surface. 

Although details about the new OS for PCC are limited, Apple plans to make software images of every production build of PCC publicly available for security research. This includes every application and relevant executable, and the OS itself, published within 90 days of inclusion in the log or after relevant software updates are available. Apple's approach to PCC demonstrates its commitment to maintaining high privacy and security standards while expanding its AI capabilities. By leveraging custom hardware and a specially designed operating system, Apple aims to provide a secure environment for cloud-based AI processing, ensuring that user data remains protected. 

Apple's initiative is particularly significant in the current digital landscape, where concerns about data privacy and security are paramount. Users increasingly demand transparency and control over their data, and companies are under pressure to provide robust protections against cyber threats. By implementing PCC, Apple not only addresses these concerns but also sets a new benchmark for cloud-based AI processing security. The introduction of PCC is a strategic move that underscores Apple's broader vision of integrating advanced AI capabilities with uncompromised user privacy. 

As AI technologies become more integrated into everyday applications, the need for secure processing environments becomes critical. PCC's architecture, built on the strong security foundations of Apple silicon, aims to meet this need by ensuring that sensitive data remains private and secure. Furthermore, Apple's decision to make PCC's software images available for security research reflects its commitment to transparency and collaboration within the cybersecurity community. This move allows security experts to scrutinize the system, identify potential vulnerabilities, and contribute to enhancing its security. Such openness is essential for building trust and ensuring the robustness of security measures in an increasingly interconnected world. 

In conclusion, Apple's Private Cloud Compute represents a significant advancement in cloud-based AI processing, combining the power of Apple silicon with a specially designed operating system to create a secure and private environment for user data. By prioritizing security and transparency, Apple sets a high standard for the industry, demonstrating that advanced AI capabilities can be achieved without compromising user privacy. As PCC is rolled out, it will be interesting to see how this initiative shapes the future of cloud-based AI and influences best practices in data security and privacy.

Microsoft Revises AI Feature After Privacy Concerns

 

Microsoft is making changes to a controversial feature announced for its new range of AI-powered PCs after it was flagged as a potential "privacy nightmare." The "Recall" feature for Copilot+ was initially introduced as a way to enhance user experience by capturing and storing screenshots of desktop activity. However, following concerns that hackers could misuse this tool and its saved screenshots, Microsoft has decided to make the feature opt-in. 

"We have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards," said Pavan Davuluri, corporate vice president of Windows and Devices, in a blog post on Friday. The company is banking on artificial intelligence (AI) to drive demand for its devices. Executive vice president Yusuf Medhi, during the event's keynote speech, likened the feature to having photographic memory, saying it used AI "to make it possible to access virtually anything you have ever seen on your PC." 

The feature can search through a user's past activity, including files, photos, emails, and browsing history. While many devices offer similar functionalities, Recall's unique aspect was its ability to take screenshots every few seconds and search these too. Microsoft claimed it "built privacy into Recall’s design" from the beginning, allowing users control over what was captured—such as opting out of capturing certain websites or not capturing private browsing on Microsoft’s browser, Edge. Despite these assurances, the company has now adjusted the feature to address privacy concerns. 

Changes will include making Recall an opt-in feature during the PC setup process, meaning it will be turned off by default. Users will also need to use Windows' "Hello" authentication process to enable the tool, ensuring that only authorized individuals can view or search their timeline of saved activity. Additionally, "proof of presence" will be required to access or search through the saved activity in Recall. These updates are set to be implemented before the launch of Copilot+ PCs on June 18. The adjustments aim to provide users with a clearer choice and enhanced control over their data, addressing the potential privacy risks associated with the feature. 

Microsoft's decision to revise the Recall feature underscores the importance of user feedback and the company's commitment to privacy and security. By making Recall opt-in and incorporating robust authentication measures, Microsoft seeks to balance innovation with the protection of user data, ensuring that AI enhancements do not compromise privacy. As AI continues to evolve, these safeguards are crucial in maintaining user trust and mitigating the risks associated with advanced data collection technologies.