Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI Tool. Show all posts

Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is driven by DeepSeek's data learning model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, this innovative accomplishment marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality to the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology, and Industry, highlighted during the launch ceremony that artificial intelligence (AI) is in the vanguard of a new industrial revolution and technical revolution, and Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, which include the building of an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding plan, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available for usage by not just enterprises and people, but also for overseas Chinese communities. 

DeepSeek, founded by Liang Wengfeng, previously stunned the world with its low-cost AI model, which was created with substantially fewer processing resources than larger US tech businesses such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning research methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to build the Hong Kong AI Research and Development Institute. The government intends to start the institute by the fiscal year 2026-27, with cash set aside for the first five years to cover operational costs, including employing staff. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

AI Use Linked to Decline in Critical Thinking Skills Among Students, Study Finds

 

A recent study has revealed a concerning link between the increased use of artificial intelligence (AI) tools and declining critical thinking abilities among students. The research, which analyzed responses from over 650 individuals aged 17 and older in the UK, found that young people who heavily relied on AI for memory and problem-solving tasks showed lower critical thinking skills. This phenomenon, known as cognitive offloading, suggests that outsourcing mental tasks to AI may hinder essential cognitive development. 

The study, titled AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, was published in Societies and led by Michael Gerlich of SBS Swiss Business School. The findings indicated a strong correlation between high AI tool usage and lower critical thinking scores, with younger participants being more affected than their older counterparts. Gerlich emphasized the importance of educational interventions to help students engage critically with AI technologies and prevent the erosion of vital cognitive skills.  

Participants in the study were divided into three age groups: 17-25, 26-45, and 46 and older, with diverse educational backgrounds. Data collection included a 23-item questionnaire to measure AI tool usage, cognitive offloading tendencies, and critical thinking skills. Additionally, semi-structured interviews provided further insights into participants’ experiences and concerns about AI reliance. Many respondents expressed worry that their dependence on AI was influencing their decision-making processes. Some admitted to rarely questioning the biases inherent in AI recommendations, while others feared they were being subtly influenced by the technology. 

One participant noted, “I sometimes wonder if AI is nudging me toward decisions I wouldn’t normally make.” The study’s findings have significant implications for educational institutions and workplaces integrating AI tools into daily operations. With AI adoption continuing to grow rapidly, there is an urgent need for schools and universities to implement strategies that promote critical thinking alongside technological advancements. Educational policies may need to prioritize cognitive skill development to counterbalance the potential negative effects of AI dependence. 

As AI continues to shape various aspects of life, striking a balance between leveraging its benefits and preserving essential cognitive abilities will be crucial. The study serves as a wake-up call for educators, policymakers, and individuals to remain mindful of the potential risks associated with AI over-reliance.

The Role of Confidential Computing in AI and Web3

 

 
The rise of artificial intelligence (AI) has amplified the demand for privacy-focused computing technologies, ushering in a transformative era for confidential computing. At the forefront of this movement is the integration of these technologies within the AI and Web3 ecosystems, where maintaining privacy while enabling innovation has become a pressing challenge. A major event in this sphere, the DeCC x Shielding Summit in Bangkok, brought together more than 60 experts to discuss the future of confidential computing.

Pioneering Confidential Computing in Web3

Lisa Loud, Executive Director of the Secret Network Foundation, emphasized in her keynote that Secret Network has been pioneering confidential computing in Web3 since its launch in 2020. According to Loud, the focus now is to mainstream this technology alongside blockchain and decentralized AI, addressing concerns with centralized AI systems and ensuring data privacy.

Yannik Schrade, CEO of Arcium, highlighted the growing necessity for decentralized confidential computing, calling it the “missing link” for distributed systems. He stressed that as AI models play an increasingly central role in decision-making, conducting computations in encrypted environments is no longer optional but essential.

Schrade also noted the potential of confidential computing in improving applications like decentralized finance (DeFi) by integrating robust privacy measures while maintaining accessibility for end users. However, achieving a balance between privacy and scalability remains a significant hurdle. Schrade pointed out that privacy safeguards often compromise user experience, which can hinder broader adoption. He emphasized that for confidential computing to succeed, it must be seamlessly integrated so users remain unaware they are engaging with such technologies.

Shahaf Bar-Geffen, CEO of COTI, underscored the role of federated learning in training AI models on decentralized datasets without exposing raw data. This approach is particularly valuable in sensitive sectors like healthcare and finance, where confidentiality and compliance are critical.

Innovations in Privacy and Scalability

Henry de Valence, founder of Penumbra Labs, discussed the importance of aligning cryptographic systems with user expectations. Drawing parallels with secure messaging apps like Signal, he emphasized that cryptography should function invisibly, enabling users to interact with systems without technical expertise. De Valence stressed that privacy-first infrastructure is vital as AI’s capabilities to analyze and exploit data grow more advanced.

Other leaders in the field, such as Martin Leclerc of iEXEC, highlighted the complexity of achieving privacy, usability, and regulatory compliance. Innovative approaches like zero-knowledge proof technology, as demonstrated by Lasha Antadze of Rarimo, offer promising solutions. Antadze explained how this technology enables users to prove eligibility for actions like voting or purchasing age-restricted goods without exposing personal data, making blockchain interactions more accessible.

Dominik Schmidt, co-founder of Polygon Miden, reflected on lessons from legacy systems like Ethereum to address challenges in privacy and scalability. By leveraging zero-knowledge proofs and collaborating with decentralized storage providers, his team aims to enhance both developer and user experiences.

As confidential computing evolves, it is clear that privacy and usability must go hand in hand to address the needs of an increasingly data-driven world. Through innovation and collaboration, these technologies are set to redefine how privacy is maintained in AI and Web3 applications.

Zero Trust Endpoint Security: The Future of Cyber Resilience

 

The evolution of cybersecurity has moved far beyond traditional antivirus software, which once served as the primary line of defense against online threats. Endpoint Detection and Response (EDR) tools emerged as a solution to combat the limitations of antivirus programs, particularly in addressing advanced threats like malware. However, even EDR tools have significant weaknesses, as they often detect threats only after they have infiltrated a system. The need for a proactive, zero trust endpoint security solution has become more evident to combat evolving cyber threats effectively. 

Traditional antivirus software struggled to keep up with the rapid creation and distribution of new malware. As a result, EDR tools were developed to identify malicious activity based on behavior rather than known code signatures. These tools have since been enhanced with artificial intelligence (AI) for improved accuracy, automated incident responses to mitigate damage promptly, and managed detection services for expert oversight. Despite these advancements, EDR solutions still act only after malware is active, potentially allowing significant harm before mitigation occurs. 

Cybercriminals now use sophisticated techniques, including AI-driven malware, to bypass detection systems. Traditional EDR tools often fail to recognize such threats until they are running within an environment. This reactive approach highlights a critical flaw: the inability to prevent attacks before they execute. Consequently, organizations are increasingly adopting zero trust security strategies, emphasizing proactive measures to block unauthorized actions entirely. Zero trust endpoint security enforces strict controls across applications, user access, data, and network traffic. 

Unlike blocklisting, which permits all actions except those explicitly banned, application allowlisting ensures that only pre-approved software can operate within a system. This approach prevents both known and unknown threats from executing, offering a more robust defense against ransomware and other cyberattacks. ThreatLocker exemplifies a zero trust security platform designed to address these gaps. Its proactive tools, including application allowlisting, ringfencing to limit software privileges, and storage control to secure sensitive data, provide comprehensive protection. 

ThreatLocker Detect enhances this approach by alerting organizations to indicators of compromise, ensuring swift responses to emerging threats. A recent case study highlights the efficacy of ThreatLocker’s solutions. In January 2024, a ransomware gang attempted to breach a hospital’s network using stolen credentials. ThreatLocker’s allowlisting feature blocked the attackers from executing unauthorized software, while storage controls prevented data theft. Despite gaining initial access, the cybercriminals were unable to carry out their attack due to ThreatLocker’s proactive defenses. 

As cyber threats become more sophisticated, relying solely on detection-based tools like EDR is no longer sufficient. Proactive measures, such as those provided by ThreatLocker, represent the future of endpoint security, ensuring that organizations can prevent attacks before they occur and maintain robust defenses against evolving cyber risks.

Meet Daisy, the AI Grandmother Designed to Outwit Scammers

 

The voice-based AI, known as Daisy or "dAIsy," impersonates a senior citizen to engage in meandering conversation with phone scammers.

Despite its flaws, such as urging people to eat deadly mushrooms, AI can sometimes be utilised for good. O2, the UK's largest mobile network operator, has implemented a voice-based AI chatbot to trick phone scammers into long, useless talks. Daisy, often known as "dAIsy," is a chatbot that mimics the voice of an elderly person, the most typical target for phone scammers. 

Daisy's goal is to automate "scambaiting," which is the technique of deliberately wasting phone fraudsters' time in order to keep them away from potential real victims for as long as possible. Scammers employ social engineering to abuse the elderly's naivety, convincing them, for example, that they owe back taxes and would be arrested if they fail to make payments immediately.

When a fraudster gets Daisy on the phone, they're in for a long chat that won't lead anywhere. If they get to the point when the fraudster requests private data, such as bank account information, Daisy will fabricate it. O2 claims that it is able to contact fraudsters in the first place by adding Daisy's phone number to "easy target" lists that scammers use for leads. 

Of course, the risk with a chatbot like Daisy is that the same technology can be used for opposite ends—we've already seen cases where real people, such as CEOs of major companies, had their voices deepfaked in order to deceive others into giving money to a fraudster. Senior citizens are already exposed enough. If they receive a call from someone who sounds like a grandchild, they will very certainly believe it is genuine.

Finally, preventing fraudulent calls and shutting down the groups orchestrating these frauds would be the best answer. Carriers have enhanced their ability to detect and block scammers' phone numbers, but it remains a cat-and-mouse game. Scammers use automated dialling systems, which allow them to phone numbers quickly and only alert them when they receive an answer. An AI bot that frustrates fraudsters by responding and wasting their time is preferable to nothing.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

AI-Generated Malware Discovered in the Wild

 

Researchers found malicious code that they suspect was developed with the aid of generative artificial intelligence services to deploy the AsyncRAT malware in an email campaign that was directed towards French users. 

While threat actors have employed generative AI technology to design convincing emails, government agencies have cautioned regarding the potential exploit of AI tools to create malicious software, despite the precautions and restrictions that vendors implemented. 

Suspected cases of AI-created malware have been spotted in real attacks. The malicious PowerShell script that was uncovered earlier this year by cybersecurity firm Proofpoint was most likely generated by an AI system. 

As less technical threat actors depend more on AI to develop malware, HP security experts discovered a malicious campaign in early June that employed code commented in the same manner a generative AI system would. 

The VBScript established persistence on the compromised PC by generating scheduled activities and writing new keys to the Windows Registry. The researchers add that some of the indicators pointing to AI-generated malicious code include the framework of the scripts, the comments that explain each line, and the use of native language for function names and variables. 

AsyncRAT, an open-source, publicly available malware that can record keystrokes on the victim device and establish an encrypted connection for remote monitoring and control, is later downloaded and executed by the attacker. The malware can also deliver additional payloads. 

The HP Wolf Security research also states that, in terms of visibility, archives were the most popular delivery option in the first half of the year. Lower-level threat actors can use generative AI to create malware in minutes and customise it for assaults targeting different areas and platforms (Linux, macOS). 

Even if they do not use AI to create fully functional malware, hackers rely on it to accelerate their labour while developing sophisticated threats.

Project Strawberry: Advancing AI with Q-learning, A* Algorithms, and Dual-Process Theory

Project Strawberry, initially known as Q*, has quickly become a focal point of excitement and discussion within the AI community. The project aims to revolutionize artificial intelligence by enhancing its self-learning and reasoning capabilities, crucial steps toward achieving Artificial General Intelligence (AGI). By incorporating advanced algorithms and theories, Project Strawberry pushes the boundaries of what AI can accomplish, making it a topic of intense interest and speculation. 

At the core of Project Strawberry are several foundational algorithms that enable AI systems to learn and make decisions more effectively. The project utilizes Q-learning, a reinforcement learning technique that allows AI to determine optimal actions through trial and error, helping it navigate complex environments. Alongside this, the A* search algorithm provides efficient pathfinding capabilities, ensuring AI can find the best solutions to problems quickly and accurately. 

Additionally, the dual-process theory, inspired by human cognitive processes, is used to balance quick, intuitive judgments with thorough, deliberate analysis, enhancing decision-making abilities. Despite the project’s promising advancements, it also raises several concerns. One of the most significant risks involves encryption cracking, where advanced AI could potentially break encryption codes, posing a severe security threat. 

Furthermore, the issue of “AI hallucinations”—errors in AI outputs—remains a critical challenge that needs to be addressed to ensure accurate and trustworthy AI responses. Another concern is the high computational demands of Project Strawberry, which may lead to increased costs and energy consumption. Efficient resource management and optimization will be crucial to maintaining the project’s scalability and sustainability. The ultimate goal of Project Strawberry is to pave the way for AGI, where AI systems can perform any intellectual task a human can. 

Achieving AGI would revolutionize problem-solving across various fields, enabling AI to tackle long-term and complex challenges with advanced reasoning capabilities. OpenAI envisions developing “reasoners” that exhibit human-like intelligence, pushing the frontiers of AI research even further. While Project Strawberry represents a significant step forward in AI development, it also presents complex challenges that must be carefully navigated. 

The project’s potential has fueled widespread excitement and anticipation within the AI community, with many eagerly awaiting further updates and breakthroughs. As OpenAI continues to refine and develop Project Strawberry, it could set the stage for a new era in AI, bringing both remarkable possibilities and significant responsibilities.