
Silicon Valley Crosswalk Buttons Hacked With AI Voices Mimicking Tech Billionaires

 

A strange tech prank unfolded across Silicon Valley this past weekend after crosswalk buttons in several cities began playing AI-generated voice messages impersonating Elon Musk and Mark Zuckerberg.  

Pedestrians reported hearing bizarre and oddly personal phrases coming from audio-enabled crosswalk systems in Menlo Park, Palo Alto, and Redwood City. The altered voices were crafted to sound like the two tech moguls, with messages that ranged from humorous to unsettling. One button, using a voice resembling Zuckerberg, declared: “We’re putting AI into every corner of your life, and you can’t stop it.” Another, mimicking Musk, joked about loneliness and buying a Cybertruck to fill the void.  

The origins of the incident remain unknown, but online speculation points to possible hacktivism—potentially aimed at critiquing Silicon Valley’s AI dominance or simply poking fun at tech culture. Videos of the voice spoof spread quickly on TikTok and X, with users commenting on the surreal experience and sarcastically suggesting the crosswalks had been “venture funded.” Beneath the jokes, though, the incident raises serious security concerns.

Local officials confirmed they’re investigating the breach and working to restore normal functionality. According to early reports, the tampering may have taken place on Friday. These crosswalk buttons aren’t new—they’re part of accessibility technology designed to help visually impaired pedestrians cross streets safely by playing audio cues. But this incident highlights how vulnerable public infrastructure can be to digital interference. Security researchers have warned in the past that these systems, often managed with default settings and unsecured firmware, can be compromised if not properly protected. 

One expert, physical penetration specialist Deviant Ollam, has previously demonstrated how such buttons can be manipulated through unchanged default passwords or exposed ports. Polara, a leading manufacturer of these audio-enabled buttons, did not respond to requests for comment. That silence leaves open questions about how widespread the vulnerability might be and what cybersecurity measures, if any, are in place. The AI voice hack not only exposed weaknesses in public technology but also raised broader questions about the intersection of artificial intelligence, infrastructure, and data privacy.
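
Researchers who audit such devices (with authorization) often begin with very basic checks for exposed network services. The sketch below is a generic Python illustration of that first step, not specific to any crosswalk vendor or to the systems involved in this incident; the device address and port list are placeholders.

```python
import socket

# Placeholder address of a device under an authorized security audit.
DEVICE_IP = "192.168.1.50"
# Common management ports that should normally be closed or firewalled.
PORTS_TO_CHECK = [21, 22, 23, 80, 443, 8080]

def check_open_ports(host: str, ports: list[int], timeout: float = 1.0) -> list[int]:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    exposed = check_open_ports(DEVICE_IP, PORTS_TO_CHECK)
    if exposed:
        print(f"Exposed management ports on {DEVICE_IP}: {exposed}")
    else:
        print(f"No common management ports open on {DEVICE_IP}")
```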

What began as a strange and comedic moment at the crosswalk is now fueling a much larger conversation about the cybersecurity risks of increasingly connected cities. With AI becoming more embedded in daily life, events like this might be just the beginning of new kinds of public tech disruptions.

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 
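
For readers unfamiliar with homomorphic encryption, the toy sketch below uses the open-source python-paillier library (`phe`) to compute on numbers while they stay encrypted. Paillier is only partially homomorphic (it supports addition and multiplication by plaintext scalars, not arbitrary computation), so this illustrates the principle rather than the FHE schemes Orion targets.

```python
# pip install phe  (python-paillier, an open-source partially homomorphic scheme)
from phe import paillier

# Generate a keypair; only the private key holder can decrypt.
public_key, private_key = paillier.generate_paillier_keypair()

# Encrypt two sensitive values.
enc_a = public_key.encrypt(42)
enc_b = public_key.encrypt(58)

# Compute on ciphertexts: the party doing the math never sees 42 or 58.
enc_sum = enc_a + enc_b    # homomorphic addition
enc_scaled = enc_a * 3     # multiplication by a plaintext scalar

# Only the data owner, holding the private key, can recover the results.
print(private_key.decrypt(enc_sum))     # 100
print(private_key.decrypt(enc_scaled))  # 126
```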

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 
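
In practice, PyTorch integration means a model is defined in ordinary PyTorch and then compiled to run under FHE. The runnable sketch below defines a small standard PyTorch network; the `orion.compile`/`encrypt`/`decrypt` calls in the trailing comments are hypothetical placeholder names, not the framework's documented API, and are included only to indicate where the encrypted workflow would slot in.

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model; frameworks like Orion start from standard
# definitions such as this one.
class SmallClassifier(nn.Module):
    def __init__(self, in_features: int = 16, num_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32),
            nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SmallClassifier()
x = torch.randn(1, 16)        # stand-in for a user's sensitive record
plaintext_logits = model(x)   # normal (unencrypted) inference
print(plaintext_logits.shape)

# Hypothetical FHE workflow (placeholder names, not Orion's actual API):
#   fhe_model = orion.compile(model)          # convert layers to FHE-friendly ops
#   enc_x     = orion.encrypt(x, public_key)
#   enc_out   = fhe_model(enc_x)              # inference directly on ciphertexts
#   logits    = orion.decrypt(enc_out, private_key)
```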

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.

AI and Privacy – Issues and Challenges

 

Artificial intelligence is changing cybersecurity and digital privacy. It promises better security but also raises concerns about ethical boundaries, data exploitation, and surveillance. From facial recognition software to predictive crime prevention, consumers are left wondering where to draw the line between safety and overreach as AI-driven systems become ever more integrated into daily life.

The same artificial intelligence (AI) tools that help spot online threats, optimise security procedures, and stop fraud can also be used for intrusive data collection, behavioural tracking, and mass surveillance. The use of AI-powered surveillance in corporate data mining, law enforcement profiling, and government tracking has drawn criticism in recent years. In the absence of clear regulations and transparency, AI risks undermining rather than defending basic rights.

AI and data ethics

Despite encouraging developments, there are numerous instances of AI-driven systems going awry that raise serious questions. Facial recognition company Clearview AI amassed one of the largest facial recognition databases in the world by scraping billions of photos from social media without consent. Clearview's technology was employed by governments and law enforcement organisations across the globe, leading to lawsuits and regulatory penalties over mass surveillance.

The UK Department for Work and Pensions used an AI system to detect welfare fraud. An internal investigation suggested that the system disproportionately targeted people based on their age, disability, marital status, and nationality. This bias resulted in certain groups being unfairly singled out for fraud investigations, raising questions about discrimination and the ethical use of artificial intelligence in public services. Despite earlier assurances of impartiality, the findings have fuelled calls for greater transparency and oversight of government AI use.

Regulations and consumer protection

Governments worldwide are moving to regulate the ethical use of AI, with a number of significant regulations having a direct impact on consumers. The European Union's AI Act, whose obligations begin to apply in phases from 2025, divides AI applications into risk categories.

Strict requirements will apply to high-risk technologies, such as biometric surveillance and facial recognition, to guarantee transparency and ethical deployment. The EU's commitment to responsible AI governance is further reinforced by the possibility of severe penalties for non-compliant companies.

In the United States, the California Consumer Privacy Act gives individuals more control over their personal data. Consumers have the right to know what information firms gather about them, to request its deletion, and to opt out of the sale of their data. The law adds an important layer of privacy protection in an era when AI-powered data processing is becoming more common.

The White House has also introduced the Blueprint for an AI Bill of Rights, a framework aimed at encouraging responsible AI practices. While not legally enforceable, it emphasises the need for privacy, transparency, and algorithmic fairness, pointing to a larger push for ethical AI development in policymaking.

Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is built on DeepSeek's model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, the accomplishment marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology and Industry, said at the launch ceremony that AI is at the vanguard of a new industrial and technological revolution, and that Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, which include the construction of an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding scheme, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not just to enterprises and individuals, but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, which was created with substantially fewer computing resources than those used by larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning research methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to establish the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 fiscal year, with funding set aside for its first five years to cover operating costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

AI Use Linked to Decline in Critical Thinking Skills Among Students, Study Finds

 

A recent study has revealed a concerning link between the increased use of artificial intelligence (AI) tools and declining critical thinking abilities among students. The research, which analyzed responses from over 650 individuals aged 17 and older in the UK, found that young people who heavily relied on AI for memory and problem-solving tasks showed lower critical thinking skills. This phenomenon, known as cognitive offloading, suggests that outsourcing mental tasks to AI may hinder essential cognitive development. 

The study, titled AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking, was published in Societies and led by Michael Gerlich of SBS Swiss Business School. The findings indicated a strong correlation between high AI tool usage and lower critical thinking scores, with younger participants being more affected than their older counterparts. Gerlich emphasized the importance of educational interventions to help students engage critically with AI technologies and prevent the erosion of vital cognitive skills.  

Participants in the study were divided into three age groups: 17-25, 26-45, and 46 and older, with diverse educational backgrounds. Data collection included a 23-item questionnaire to measure AI tool usage, cognitive offloading tendencies, and critical thinking skills. Additionally, semi-structured interviews provided further insights into participants’ experiences and concerns about AI reliance. Many respondents expressed worry that their dependence on AI was influencing their decision-making processes. Some admitted to rarely questioning the biases inherent in AI recommendations, while others feared they were being subtly influenced by the technology. 
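
To make the method concrete, the sketch below shows how a correlation of the kind the study reports could be computed from questionnaire scores. The scores here are fabricated purely for illustration and are not the study's data.

```python
# pip install scipy
from scipy.stats import pearsonr

# Illustrative (made-up) questionnaire scores for six respondents:
# higher ai_usage = heavier reliance on AI tools,
# higher critical_thinking = stronger performance on the assessment.
ai_usage          = [2, 3, 4, 5, 6, 7]
critical_thinking = [8, 7, 7, 5, 4, 3]

r, p_value = pearsonr(ai_usage, critical_thinking)
# A strongly negative r mirrors the pattern the study describes.
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
```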

One participant noted, “I sometimes wonder if AI is nudging me toward decisions I wouldn’t normally make.” The study’s findings have significant implications for educational institutions and workplaces integrating AI tools into daily operations. With AI adoption continuing to grow rapidly, there is an urgent need for schools and universities to implement strategies that promote critical thinking alongside technological advancements. Educational policies may need to prioritize cognitive skill development to counterbalance the potential negative effects of AI dependence. 

As AI continues to shape various aspects of life, striking a balance between leveraging its benefits and preserving essential cognitive abilities will be crucial. The study serves as a wake-up call for educators, policymakers, and individuals to remain mindful of the potential risks associated with AI over-reliance.

The Role of Confidential Computing in AI and Web3

 

The rise of artificial intelligence (AI) has amplified the demand for privacy-focused computing technologies, ushering in a transformative era for confidential computing. At the forefront of this movement is the integration of these technologies within the AI and Web3 ecosystems, where maintaining privacy while enabling innovation has become a pressing challenge. A major event in this sphere, the DeCC x Shielding Summit in Bangkok, brought together more than 60 experts to discuss the future of confidential computing.

Pioneering Confidential Computing in Web3

Lisa Loud, Executive Director of the Secret Network Foundation, emphasized in her keynote that Secret Network has been pioneering confidential computing in Web3 since its launch in 2020. According to Loud, the focus now is to mainstream this technology alongside blockchain and decentralized AI, addressing concerns with centralized AI systems and ensuring data privacy.

Yannik Schrade, CEO of Arcium, highlighted the growing necessity for decentralized confidential computing, calling it the “missing link” for distributed systems. He stressed that as AI models play an increasingly central role in decision-making, conducting computations in encrypted environments is no longer optional but essential.

Schrade also noted the potential of confidential computing in improving applications like decentralized finance (DeFi) by integrating robust privacy measures while maintaining accessibility for end users. However, achieving a balance between privacy and scalability remains a significant hurdle. Schrade pointed out that privacy safeguards often compromise user experience, which can hinder broader adoption. He emphasized that for confidential computing to succeed, it must be seamlessly integrated so users remain unaware they are engaging with such technologies.

Shahaf Bar-Geffen, CEO of COTI, underscored the role of federated learning in training AI models on decentralized datasets without exposing raw data. This approach is particularly valuable in sensitive sectors like healthcare and finance, where confidentiality and compliance are critical.
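
As a rough illustration of the federated learning idea Bar-Geffen described, the sketch below averages model weights trained locally on separate clients, so only parameters (never raw records) leave each site. It is a minimal federated-averaging toy under those assumptions, not COTI's implementation.

```python
import numpy as np

def local_train(X: np.ndarray, y: np.ndarray, w: np.ndarray,
                lr: float = 0.1, epochs: int = 20) -> np.ndarray:
    """A few gradient steps of linear regression on one client's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each client holds its own data; raw rows never leave the client.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(5):
    # Clients train locally starting from the shared global model...
    local_weights = [local_train(X, y, global_w.copy()) for X, y in clients]
    # ...and the server aggregates only the weight vectors (federated averaging).
    global_w = np.mean(local_weights, axis=0)

print("Learned weights:", np.round(global_w, 2))  # close to [2.0, -1.0]
```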

Innovations in Privacy and Scalability

Henry de Valence, founder of Penumbra Labs, discussed the importance of aligning cryptographic systems with user expectations. Drawing parallels with secure messaging apps like Signal, he emphasized that cryptography should function invisibly, enabling users to interact with systems without technical expertise. De Valence stressed that privacy-first infrastructure is vital as AI’s capabilities to analyze and exploit data grow more advanced.

Other leaders in the field, such as Martin Leclerc of iEXEC, highlighted the complexity of achieving privacy, usability, and regulatory compliance. Innovative approaches like zero-knowledge proof technology, as demonstrated by Lasha Antadze of Rarimo, offer promising solutions. Antadze explained how this technology enables users to prove eligibility for actions like voting or purchasing age-restricted goods without exposing personal data, making blockchain interactions more accessible.

Dominik Schmidt, co-founder of Polygon Miden, reflected on lessons from legacy systems like Ethereum to address challenges in privacy and scalability. By leveraging zero-knowledge proofs and collaborating with decentralized storage providers, his team aims to enhance both developer and user experiences.

As confidential computing evolves, it is clear that privacy and usability must go hand in hand to address the needs of an increasingly data-driven world. Through innovation and collaboration, these technologies are set to redefine how privacy is maintained in AI and Web3 applications.

Zero Trust Endpoint Security: The Future of Cyber Resilience

 

The evolution of cybersecurity has moved far beyond traditional antivirus software, which once served as the primary line of defense against online threats. Endpoint Detection and Response (EDR) tools emerged to address the limitations of antivirus programs, particularly against advanced and rapidly evolving malware. However, even EDR tools have significant weaknesses, as they often detect threats only after they have infiltrated a system. The need for a proactive, zero trust approach to endpoint security has become more evident as cyber threats continue to evolve. 

Traditional antivirus software struggled to keep up with the rapid creation and distribution of new malware. As a result, EDR tools were developed to identify malicious activity based on behavior rather than known code signatures. These tools have since been enhanced with artificial intelligence (AI) for improved accuracy, automated incident responses to mitigate damage promptly, and managed detection services for expert oversight. Despite these advancements, EDR solutions still act only after malware is active, potentially allowing significant harm before mitigation occurs. 
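
Behavior-based detection of this kind boils down to flagging suspicious event patterns rather than matching known file signatures. The sketch below is a deliberately simplified illustration of one such rule (an office application spawning a command interpreter); real EDR engines use far richer telemetry and models.

```python
# Simplified behavior rule: flag office apps spawning command interpreters,
# a pattern often associated with malicious macro activity.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe"}

def flag_suspicious(events: list[dict]) -> list[dict]:
    """Return process-creation events matching the parent/child rule."""
    return [
        e for e in events
        if e["parent"].lower() in SUSPICIOUS_PARENTS
        and e["child"].lower() in SUSPICIOUS_CHILDREN
    ]

events = [
    {"parent": "explorer.exe", "child": "chrome.exe"},
    {"parent": "WINWORD.EXE", "child": "powershell.exe"},  # macro launching a shell
]
for alert in flag_suspicious(events):
    print("ALERT:", alert)
```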

Cybercriminals now use sophisticated techniques, including AI-driven malware, to bypass detection systems. Traditional EDR tools often fail to recognize such threats until they are running within an environment. This reactive approach highlights a critical flaw: the inability to prevent attacks before they execute. Consequently, organizations are increasingly adopting zero trust security strategies, emphasizing proactive measures to block unauthorized actions entirely. Zero trust endpoint security enforces strict controls across applications, user access, data, and network traffic. 

Unlike blocklisting, which permits all actions except those explicitly banned, application allowlisting ensures that only pre-approved software can operate within a system. This approach prevents both known and unknown threats from executing, offering a more robust defense against ransomware and other cyberattacks. ThreatLocker exemplifies a zero trust security platform designed to address these gaps. Its proactive tools, including application allowlisting, ringfencing to limit software privileges, and storage control to secure sensitive data, provide comprehensive protection. 
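
The difference between blocklisting and allowlisting is easy to see in code. The sketch below is a generic, hash-based allowlisting illustration (not ThreatLocker's product): a binary runs only if its SHA-256 digest appears in a pre-approved list, so anything unknown is denied by default.

```python
import hashlib
from pathlib import Path

# Pre-approved SHA-256 digests of permitted executables.
ALLOWLIST = {
    # Placeholder entry: this is the SHA-256 of an empty file.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: Path) -> bool:
    """Default-deny: only binaries whose digest is on the allowlist may run."""
    return sha256_of(path) in ALLOWLIST

if __name__ == "__main__":
    candidate = Path("/usr/bin/unknown_tool")  # placeholder path
    if candidate.exists():
        print("allowed" if may_execute(candidate) else "blocked (not on allowlist)")
```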

ThreatLocker Detect enhances this approach by alerting organizations to indicators of compromise, ensuring swift responses to emerging threats. A recent case study highlights the efficacy of ThreatLocker’s solutions. In January 2024, a ransomware gang attempted to breach a hospital’s network using stolen credentials. ThreatLocker’s allowlisting feature blocked the attackers from executing unauthorized software, while storage controls prevented data theft. Despite gaining initial access, the cybercriminals were unable to carry out their attack due to ThreatLocker’s proactive defenses. 

As cyber threats become more sophisticated, relying solely on detection-based tools like EDR is no longer sufficient. Proactive measures, such as those provided by ThreatLocker, represent the future of endpoint security, ensuring that organizations can prevent attacks before they occur and maintain robust defenses against evolving cyber risks.

Meet Daisy, the AI Grandmother Designed to Outwit Scammers

 

The voice-based AI, known as Daisy or "dAIsy," impersonates a senior citizen to engage in meandering conversation with phone scammers.

Despite its flaws, such as AI tools urging people to eat poisonous mushrooms, AI can sometimes be put to good use. O2, the UK's largest mobile network operator, has deployed a voice-based AI chatbot to trick phone scammers into long, pointless conversations. Daisy, also styled "dAIsy," mimics the voice of an elderly person, the most common target for phone scammers. 

Daisy's goal is to automate "scambaiting," the practice of deliberately wasting phone fraudsters' time to keep them away from real potential victims for as long as possible. Scammers use social engineering to exploit elderly victims' trust, convincing them, for example, that they owe back taxes and will be arrested if they do not pay immediately.

When a fraudster gets Daisy on the phone, they're in for a long chat that leads nowhere. If the conversation reaches the point where the fraudster asks for private data, such as bank account information, Daisy fabricates it. O2 says it gets fraudsters to call in the first place by seeding Daisy's phone number into the "easy target" lists that scammers use for leads. 
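
O2 has not published Daisy's internals, but the core scambaiting loop can be sketched in a few lines: keep the caller talking with meandering, time-wasting replies and never surrender real data. The snippet below is a toy text-based illustration of that idea, not the actual system.

```python
import random

# Rambling, time-wasting replies in the persona of a chatty grandparent.
STALLING_LINES = [
    "Oh, hold on dear, the kettle's just boiled. Now, where were we?",
    "My grandson usually handles the computer. Did I mention he plays cricket?",
    "Let me find my reading glasses... they were here a moment ago.",
    "Could you repeat that? The line went all crackly.",
]

def daisy_reply(caller_message: str) -> str:
    """Never provide real data; fabricate details if pressed, otherwise ramble."""
    if any(word in caller_message.lower() for word in ("bank", "account", "card", "password")):
        return "Of course, dear. The account number is 0000 0000... or was it 1111?"
    return random.choice(STALLING_LINES)

# Simulated exchange with a scammer's scripted prompts.
scam_script = [
    "You owe back taxes and must pay immediately.",
    "Please read me your bank account number.",
    "Madam, this is urgent!",
]
for message in scam_script:
    print(f"Scammer: {message}")
    print(f"Daisy:   {daisy_reply(message)}")
```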

Of course, the risk with a chatbot like Daisy is that the same technology can be turned to the opposite end—we've already seen cases where real people, such as CEOs of major companies, had their voices deepfaked to deceive others into sending money to a fraudster. Senior citizens are already exposed enough: if they receive a call from someone who sounds like a grandchild, they will very likely believe it is genuine.

Ultimately, preventing fraudulent calls and shutting down the groups orchestrating these scams would be the best solution. Carriers have improved their ability to detect and block scammers' phone numbers, but it remains a cat-and-mouse game. Scammers use automated dialling systems that churn through numbers rapidly and only alert the caller when someone answers. An AI bot that frustrates fraudsters by answering and wasting their time is better than nothing.