
Cyber Threats in Hong Kong Hit Five-Year Peak with AI’s Growing Influence

 




Hong Kong experienced a record surge in cyberattacks last year, marking the highest number of incidents in five years. Hackers are increasingly using artificial intelligence (AI) to strengthen their methods, according to the Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT).

The agency recorded 12,536 cybersecurity incidents in 2024, a dramatic increase of 62% from 7,752 cases in 2023. Phishing attacks dominated these incidents, with cases more than doubling from 3,752 in 2023 to 7,811 last year.

AI is making phishing campaigns more effective. Attackers can now use AI tools to create extremely realistic fake emails and websites that even a skeptical eye cannot easily distinguish from their legitimate counterparts.

Alex Chan Chung-man, a digital transformation leader at HKCERT, said banking, financial, and payment systems were the most common phishing targets, accounting for almost 25% of cases. Social media and messaging apps, including WhatsApp, were another major target, at 22% of cases.

"AI allows scammers to create flawless phishing messages and generate fake website links that mimic trusted services," Chan explained. This efficiency has led to a sharp rise in phishing links, with over 48,000 malicious URLs identified last year, an increase of 1.5 times compared with 2023.
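
To illustrate one way defenders screen for the kind of lookalike links HKCERT describes, here is a minimal Python sketch that compares a candidate domain against a short allow-list of trusted domains using string similarity. The domain names and the 0.75 threshold are illustrative assumptions, not part of HKCERT's guidance, and real phishing detection relies on far richer signals.

```python
from difflib import SequenceMatcher

# Illustrative allow-list; a real deployment would use the organisation's own domains.
TRUSTED_DOMAINS = ["examplebank.com.hk", "whatsapp.com"]

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalike(domain: str, threshold: float = 0.75) -> bool:
    """Flag domains that closely resemble, but do not exactly match, a trusted domain."""
    for trusted in TRUSTED_DOMAINS:
        score = similarity(domain.lower(), trusted)
        if domain.lower() != trusted and score >= threshold:
            print(f"Suspicious: {domain} resembles {trusted} (similarity {score:.2f})")
            return True
    return False

if __name__ == "__main__":
    for candidate in ["examp1ebank.com.hk", "whatsap.com", "whatsapp.com"]:
        flag_lookalike(candidate)
```

The exact-match check matters: the goal is to catch domains that are close to a trusted name but not identical to it, which is precisely the pattern AI-generated phishing links tend to exploit.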

Hackers are also targeting other essential services such as healthcare and utilities. A notable case involved Union Hospital in Tai Wai, which suffered a ransomware attack in which cybercriminals used malware known as "LockBit" to demand a $10 million ransom. The hospital did not pay, but the incident illustrates the risks critical infrastructure providers face.

Third-party vendors serving critical sectors are an emerging weak point for hackers to exploit. Leaks through such partners can cause heavy damage, ranging from legal liability to reputational harm.


New Risk: Electronic Sign Boards

Digital signboards, often left unattended, are now being targeted by hackers. According to HKCERT, 40% of companies have never risk-assessed these systems. The displays can easily be hijacked through USB devices or wireless connections and made to show malicious or inappropriate content.

Though Hong Kong has not yet seen such an attack, incidents in other countries point to an emerging threat.


Prevention for Businesses

HKCERT advises organizations to take the following measures against these threats:  

  1. Change passwords regularly and use multi-factor authentication.  
  2. Regularly back up important data to avoid loss (a small freshness-check sketch follows this list).  
  3. Update software regularly to patch security vulnerabilities.
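
As one small illustration of how the backup advice might be monitored in practice, the following Python sketch warns when the newest file in a backup folder is older than a chosen limit. The folder path and the seven-day window are assumptions for the example, not part of HKCERT's recommendations.

```python
import time
from pathlib import Path

# Assumed location and age limit for this example; adjust to your own environment.
BACKUP_DIR = Path("/var/backups/daily")
MAX_AGE_DAYS = 7

def newest_backup_age_days(directory: Path):
    """Return the age in days of the most recent file in the backup directory, or None if empty."""
    files = [p for p in directory.glob("*") if p.is_file()]
    if not files:
        return None
    newest_mtime = max(p.stat().st_mtime for p in files)
    return (time.time() - newest_mtime) / 86400

if __name__ == "__main__":
    age = newest_backup_age_days(BACKUP_DIR)
    if age is None:
        print(f"No backups found in {BACKUP_DIR}")
    elif age > MAX_AGE_DAYS:
        print(f"Warning: newest backup is {age:.1f} days old (limit {MAX_AGE_DAYS} days)")
    else:
        print(f"OK: newest backup is {age:.1f} days old")
```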

Chan emphasized that AI-driven threats will keep evolving, making robust cybersecurity practices essential to protect sensitive data and infrastructure.




Why AI-Driven Cybercrime Is the Biggest Threat of 2025

 


AI in Cybercrimes: Rising Threats and Challenges

Kuala Lumpur: The increasing use of artificial intelligence (AI) in cybercrimes is becoming a grave issue, says Datuk Seri Ramli Mohamed Yoosuf, Director of Malaysia's Commercial Crime Investigation Department (CCID). Speaking at the Asia International Security Summit and Expo 2025, he highlighted how cybercriminals are leveraging AI to conduct sophisticated attacks, creating unprecedented challenges for cybersecurity efforts.

"AI has enabled criminals to churn through huge datasets with incredible speed, helping them craft highly convincing phishing emails targeted at deceiving individuals," Ramli explained. He emphasized how these advancements in AI make fraudulent communications harder to identify, thus increasing the risk of successful cyberattacks.

Rising Threats to Critical Sectors

Ramli expressed concern over the impact of AI-driven cybercrime on critical sectors such as healthcare and transportation. Attacks on hospital systems could disrupt patient care, putting lives at risk, while breaches in transportation networks could endanger public safety and hinder mobility. These scenarios highlight the urgent need for robust defense mechanisms and efficient response plans to protect critical infrastructure.

One of the key challenges posed by AI is the creation of realistic fake content through deepfake technology. Criminals can generate fake audio or video files that convincingly mimic real individuals, enabling them to manipulate or scam their targets more effectively.

Another area of concern is the automation of phishing attacks. With AI, attackers can identify software vulnerabilities quickly and execute precision attacks at unprecedented speeds, putting defenders under immense pressure to keep up.

Cybercrime Statistics in Malaysia

Over the past five years, Malaysia has seen a sharp rise in cybercrime cases. Between 2020 and 2024, 143,000 cases were reported, accounting for 85% of all commercial crimes during this period. This indicates that cybersecurity threats are becoming increasingly sophisticated, necessitating significant changes in security practices for both individuals and organizations.

Ramli stressed the importance of collective vigilance against evolving cyber threats. He urged the public to be more aware of these risks and called for greater investment in technological advancements to combat AI-driven cybercrime.

"To the extent cybercriminals will become more advanced, we can ensure that people and organizations are educated on how to recognize and deal with these challenges," he stated.

By prioritizing proactive measures and fostering a culture of cybersecurity, Malaysia can strengthen its defenses against the persistent threat of AI-driven cybercrimes.

A Looming Threat to Crypto Keys: The Risk of a Quantum Hack

 


The Quantum Computing Threat to Cryptocurrency Security

The immense computational power that quantum computing offers raises significant concerns, particularly around its potential to compromise private keys that secure digital interactions. Among the most pressing fears is its ability to break the private keys safeguarding cryptocurrency wallets.

While this threat is genuine, it is unlikely to materialize overnight. It is, however, crucial to examine the current state of quantum computing in terms of commercial capabilities and assess its potential to pose a real danger to cryptocurrency security.

Before delving into the risks, it’s essential to understand the basics of quantum computing. Unlike classical computers, which process information using bits (either 0 or 1), quantum computers rely on quantum bits, or qubits. Qubits leverage the principles of quantum mechanics to exist in multiple states simultaneously (0, 1, or both 0 and 1, thanks to the phenomenon of superposition).
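
As a brief notational aside (the formula is standard textbook material, not from the article), a single qubit's state can be written as a superposition of the two basis states:

```latex
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad \alpha, \beta \in \mathbb{C}, \qquad |\alpha|^2 + |\beta|^2 = 1
\]
```

Measuring the qubit yields 0 with probability |α|² and 1 with probability |β|², which is the precise sense in which it occupies both basis states at once until observed.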

Quantum Computing Risks: Shor’s Algorithm

One of the primary risks posed by quantum computing stems from Shor’s algorithm, which allows quantum computers to factor large integers exponentially faster than classical algorithms. The security of several cryptographic systems, including RSA, relies on the difficulty of factoring large composite numbers. For instance, RSA-2048, a widely used cryptographic key size, underpins the private keys used to sign and authorize cryptocurrency transactions.

Breaking RSA-2048 with today’s classical computers, even using massive clusters of processors, would take billions of years. To illustrate, a successful attempt to crack RSA-768 (a 768-bit number) in 2009 required years of effort and hundreds of clustered machines. The computational difficulty grows exponentially with key size, making RSA-2048 virtually unbreakable within any human timescale—at least for now.
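
To make the link between factoring and key recovery concrete, here is a deliberately tiny Python sketch, not a description of any real attack: it builds an RSA-style key from two small primes and shows that factoring the public modulus is enough to reconstruct the private exponent. Real keys use 2048-bit moduli precisely because that factoring step is infeasible for classical machines, and it is exactly this step that Shor's algorithm threatens to accelerate.

```python
from math import gcd, isqrt

def factor(n: int):
    """Classical trial division - the step Shor's algorithm would speed up exponentially."""
    for candidate in range(2, isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no factors found")

# Toy RSA key built from deliberately tiny primes (illustrative only).
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (requires Python 3.8+)

message = 42
ciphertext = pow(message, e, n)

# An attacker who can factor n can rebuild the private key.
p2, q2 = factor(n)
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d_recovered, n) == message)  # True: factoring n broke the key
```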

Commercial quantum computing offerings, such as IBM Q System One, Google Sycamore, Rigetti Aspen-9, and AWS Braket, are available today for those with the resources to use them. However, the number of qubits these systems offer remains limited — typically only a few dozen. This is far from sufficient to break even moderately sized cryptographic keys within any realistic timeframe. Breaking RSA-2048 would require millions of years with current quantum systems.

Beyond insufficient qubit capacity, today’s quantum computers face challenges in qubit stability, error correction, and scalability. Additionally, their operation depends on extreme conditions. Qubits are highly sensitive to electromagnetic disturbances, necessitating cryogenic temperatures and advanced magnetic shielding for stability.

Future Projections and the Quantum Threat

Unlike classical computing, quantum computing lacks a clear equivalent of Moore’s Law to predict how quickly its power will grow. Google’s Hartmut Neven proposed a “Neven’s Law” suggesting double-exponential growth in quantum computing power, but this model has yet to consistently hold up in practice beyond research and development milestones.

Hypothetically, achieving double-exponential growth to reach the approximately 20 million physical qubits needed to crack RSA-2048 could take another four years. However, this projection assumes breakthroughs in addressing error correction, qubit stability, and scalability—all formidable challenges in their own right.

While quantum computing poses a theoretical threat to cryptocurrency and other cryptographic systems, significant technical hurdles must be overcome before it becomes a tangible risk. Current commercial offerings remain far from capable of cracking RSA-2048 or similar key sizes. However, as research progresses, it is crucial for industries reliant on cryptographic security to explore quantum-resistant algorithms to stay ahead of potential threats.

Quantum Computing: A Rising Challenge Beyond the AI Spotlight

 

Artificial intelligence (AI) often dominates headlines, stirring fascination and fears of a machine-controlled dystopia. With daily interactions through virtual assistants, social media algorithms, and self-driving cars, AI feels familiar, thanks to decades of science fiction embedding it into popular culture. Yet, lurking beneath the AI buzz is a less familiar but potentially more disruptive force: quantum computing.

Quantum computing, unlike AI, is shrouded in scientific complexity and public obscurity. While AI benefits from widespread cultural familiarity, quantum mechanics remains an enigmatic topic, rarely explored in blockbuster movies or bestselling novels. Despite its low profile, quantum computing harbors transformative—and potentially hazardous—capabilities.

Quantum computers excel at solving problems beyond the scope of today's classical computers. For example, in 2019, Google’s quantum computer completed a computation in just over three minutes—a task that would take a classical supercomputer approximately 10,000 years. This unprecedented speed holds the promise to revolutionize fields such as healthcare, logistics, and scientific research. However, it also poses profound risks, particularly in cybersecurity.

The most immediate threat of quantum computing lies in its ability to undermine existing encryption systems. Public-key cryptography, which safeguards online transactions and personal data, relies on mathematical problems that are nearly impossible for classical computers to solve. Quantum computers, however, could crack these codes in moments, potentially exposing sensitive information worldwide.

Many experts warn of a “cryptographic apocalypse” if organizations fail to adopt quantum-resistant encryption. Governments and businesses are beginning to recognize the urgency. The World Economic Forum has called for proactive measures, emphasizing the need to prepare for the quantum era before it is too late. Despite these warnings, the public conversation remains focused on AI, leaving the risks of quantum computing underappreciated.

The race to counter the quantum threat has begun. Leading tech companies like Google and Apple are developing post-quantum encryption protocols to secure their systems. Governments are crafting strategies for transitioning to quantum-safe encryption, but timelines vary. Experts predict that quantum computers capable of breaking current encryption may emerge within 5 to 30 years. Regardless of the timeline, the shift to quantum-resistant systems will be both complex and costly.

While AI captivates the world with its promise and peril, quantum computing remains an under-discussed yet formidable security challenge. Its technical intricacy and lack of cultural presence have kept it in the shadows, but its potential to disrupt digital security demands immediate attention. As society marvels at AI-driven futures, it must not overlook the silent revolution of quantum computing—an unseen threat that could redefine our technological landscape if unaddressed.

Meta's AI Bots on WhatsApp Spark Privacy and Usability Concerns




WhatsApp, the world's most widely used messaging app, is celebrated for its simplicity, privacy, and user-friendly design. However, upcoming changes could drastically reshape the app. Meta, WhatsApp's parent company, is testing a new feature: AI bots. While some view this as a groundbreaking innovation, others question its necessity and raise concerns about privacy, clutter, and added complexity. 
 
Meta is introducing a new "AI" tab in WhatsApp, currently in beta testing for Android users. This feature will allow users to interact with AI-powered chatbots on various topics. These bots include both third-party models and Meta’s in-house virtual assistant, "Meta AI." To make room for this update, the existing "Communities" tab will merge with the "Chats" section, with the AI tab taking its place. Although Meta presents this as an upgrade, many users feel it disrupts WhatsApp's clean and straightforward design. 
 
Meta’s strategy seems focused on expanding its AI ecosystem across its platforms—Instagram, Facebook, and now WhatsApp. By introducing AI bots, Meta aims to boost user engagement and explore new revenue opportunities. However, this shift risks undermining WhatsApp’s core values of simplicity and secure communication. The addition of AI could clutter the interface and complicate user experience. 

Key Concerns Among Users 
 
1. Loss of Simplicity: WhatsApp’s minimalistic design has been central to its popularity. Adding AI features could make the app feel overloaded and detract from its primary function as a messaging platform. 
 
2. Privacy and Security Risks: Known for its end-to-end encryption, WhatsApp prioritizes user privacy. Introducing AI bots raises questions about data security and how Meta will prevent misuse of these bots. 
 
3. Unwanted Features: Many users believe AI bots are unnecessary for a messaging app. Unlike optional AI tools on platforms like ChatGPT or Google Gemini, Meta's integration feels forced.
 
4. Cluttered Interface: Replacing the "Communities" tab with the AI tab consumes valuable space, potentially disrupting how users navigate the app. 

The Bigger Picture 

Meta may eventually allow users to create custom AI bots within WhatsApp, a feature already available on Instagram. However, this could introduce significant risks. Poorly moderated bots might spread harmful or misleading content, threatening user trust and safety. 

WhatsApp users value its security and simplicity. While some might welcome AI bots, most prefer such features to remain optional and unobtrusive. Since the AI bot feature is still in testing, it’s unclear whether Meta will implement it globally. Many hope WhatsApp will stay true to its core strengths—simplicity, privacy, and reliability—rather than adopting features that could alienate its loyal user base. Will this AI integration enhance the platform or compromise its identity? Only time will tell.

Ensuring Governance and Control Over Shadow AI

 


AI has become almost ubiquitous in software development: a GitHub survey shows that 92 per cent of developers in the United States use artificial intelligence as part of their everyday coding. This has led many individuals to engage in what is termed “shadow AI,” the use of the technology without the knowledge or approval of their organization’s Information Technology department or Chief Information Security Officer (CISO). 

The payoff has been real productivity gains, so it should come as no surprise that motivated employees seek out tools that maximize their value and minimize the repetitive tasks that get in the way of more creative, challenging work. Companies, too, are naturally curious about technologies that make work easier and more efficient, such as artificial intelligence (AI) and automation tools. 

Despite this ingenuity, some companies remain reluctant to adopt the technology at first, or even second, glance. Resisting change, however, does not stop employees from quietly using AI anyway, especially since tools such as Microsoft Copilot, ChatGPT, and Claude put the technology within easy reach of non-technical staff.

Shadow AI, the use of artificial intelligence tools or systems without the official approval or oversight of the organization's information technology or security department, is a growing phenomenon across many sectors. These tools are typically adopted to solve immediate problems or boost efficiency. 

If these tools are not properly governed, they can lead to data breaches, legal violations, or regulatory non-compliance, posing significant risks to businesses. Unmanaged shadow AI can also introduce vulnerabilities into an organization's infrastructure that allow unauthorized access to sensitive data. In a world where artificial intelligence is becoming increasingly ubiquitous, organizations should take proactive measures to protect their operations. 

Shadow generative AI poses specific and substantial risks to an organization's integrity and security. Unregulated use of AI can lead to decisions and actions that undermine regulatory and corporate compliance, particularly in industries with strict data-handling protocols such as finance and healthcare. 

Because of bias inherent in their training data, generative AI models can perpetuate those biases, produce outputs that breach copyright, or generate code that violates licensing agreements. Untested code may make software unstable or error-prone, increasing maintenance costs and causing operational disruptions; it may also contain undetected malicious elements, raising the risk of data breaches and system downtime.

Mismanaged AI interactions in customer-facing applications can likewise result in regulatory non-compliance, reputational damage, and ethical concerns, particularly when the outputs adversely affect the customer experience. Organization leaders must therefore implement robust governance measures to protect against the unintended and adverse consequences of generative AI. 

In recent years, AI technology, including generative and conversational AI, has grown enormously in popularity, leading to widespread grassroots adoption. The accessibility of consumer-facing AI tools, which require little to no technical expertise, combined with a lack of formal AI governance, has enabled employees to adopt unvetted AI solutions. The 2025 CX Trends Report highlights a 250% year-over-year increase in shadow AI usage in some industries, exposing organizations to heightened risks around data security, compliance, and business ethics. 

Employees turn to shadow AI for many reasons: to boost personal or team productivity, out of dissatisfaction with existing tools, because the tools are easy to access, or to get specific tasks done more effectively. This gap will grow as CX Traditionalists delay developing AI solutions because of limited budgets, a lack of knowledge, or an inability to win internal support from their teams. 

As a result, CX Trendsetters are addressing this challenge by adopting approved artificial intelligence solutions such as AI agents and customer experience automation, with appropriate oversight and governance in place.

Identifying AI Implementations: CISOs and security teams must determine who is introducing AI throughout the software development lifecycle (SDLC), assess their security expertise, and evaluate the steps taken to minimize the risks associated with AI deployment. 

Training programs should raise developers' awareness of the importance and potential of AI-assisted code and build their skills in addressing its vulnerabilities. To find the weak points, the security team needs to analyze each phase of the SDLC and identify where it is vulnerable to unauthorized uses of AI. 

Fostering a Security-First Culture: By promoting a proactive protection mindset and emphasizing the importance of securing systems from the outset, organizations reduce the need for reactive fixes, saving time and money. A robust security-first culture, backed by regular training, encourages developers to prioritize safety and transparency over convenience. 

CISOs are responsible for identifying and managing the risks associated with new tools and for respecting decisions based on thorough evaluations. This approach builds trust, ensures tools are properly vetted before deployment, and safeguards the company's reputation. Incentivizing Success: developers who help bring AI usage into compliance are of great value to their organizations. 

These individuals should be promoted, challenged, and given measurable benchmarks to demonstrate their security skills and practices. By rewarding these efforts, organizations create a culture in which secure AI deployment is treated as a critical, marketable skill that can be acquired and maintained. Implemented effectively, these strategies allow CISOs and development teams to collaborate on managing AI risk, producing software faster, more safely, and more effectively while avoiding the pitfalls of shadow AI. 

Organizations can also set up alerts for sensitive data so that confidential information is not accidentally leaked, for example by using AI-assisted tools to detect when an AI model improperly ingests or processes personal data, financial information, or other proprietary material. 

Real-time alerts make it possible to identify and mitigate security breaches as they happen, allowing management to contain them before they escalate into full-blown incidents and adding an extra layer of protection. 
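
As a minimal sketch of the kind of pre-submission check described above, assuming a simple pattern-based filter rather than any particular product, the following Python snippet scans text bound for an external AI tool for strings resembling personal or financial data and raises an alert instead of forwarding the prompt:

```python
import re

# Simplified patterns for illustration; real data-loss-prevention rules are far more extensive.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str):
    """Return the names of any sensitive-data patterns found in an outgoing prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def guarded_submit(prompt: str) -> None:
    """Block the prompt and raise an alert instead of forwarding it to an external AI tool."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked and alerted: prompt appears to contain {', '.join(findings)}")
    else:
        print("Prompt passed the check and could be forwarded to the AI tool.")

if __name__ == "__main__":
    guarded_submit("Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111")
```

A production data-loss-prevention pipeline would add many more detectors, logging, and routing to the security team, but the control point is the same: inspect the prompt before it leaves the organization.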

When an API strategy is executed well, employees get the freedom to use GenAI tools productively while the company's data is safeguarded, AI usage stays aligned with internal policies, and the business is protected from fraud. Increasing innovation and productivity means striking a balance between keeping control and not compromising security.

AI and Blockchain: Shaping the Future of Personalization and Security

 

The integration of Artificial Intelligence (AI) and blockchain technology is revolutionizing digital experiences, especially for developers aiming to enhance user interaction and improve security. By combining these cutting-edge technologies, digital platforms are becoming more personalized while ensuring that user data remains secure. 

Why Personalization and Security Are Essential 

A global survey conducted in the third quarter of 2024 revealed that 64% of consumers prefer to engage with companies that offer personalized experiences. Simultaneously, 53% of respondents expressed significant concerns about data privacy. These findings highlight a critical balance: users desire tailored interactions but are equally cautious about how their data is managed. The integration of AI and blockchain offers innovative solutions to address both personalization and privacy concerns. 

AI has seamlessly integrated into daily life, with tools like ChatGPT becoming indispensable across industries. A notable advancement in AI is the adoption of Common Crawl's customized blockchain. This system securely stores vast datasets used by AI models, enhancing data transparency and security. Blockchain’s immutable nature ensures data integrity, making it ideal for managing the extensive data required to train AI systems in applications like ChatGPT. 
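
The tamper-evidence property being invoked here can be illustrated with a toy hash-chained log in Python. This is a generic sketch of the idea, not the system described above: each record stores the hash of its predecessor, so altering any earlier entry breaks every later link.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: str) -> None:
    """Append a record that commits to the hash of the previous record."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Recompute every link; tampering with any earlier record breaks the chain."""
    return all(chain[i]["prev_hash"] == record_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
append_record(chain, "training document A")
append_record(chain, "training document B")
print(verify(chain))              # True
chain[0]["payload"] = "tampered"  # silently alter an early record
print(verify(chain))              # False - the later link no longer matches
```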

The combined power of AI and blockchain is already transforming sectors like marketing and healthcare, where personalization and data privacy are paramount.

  • Marketing: Tools such as AURA by AdEx allow businesses to analyze user activity on blockchain platforms like Ethereum. By studying transaction data, AURA helps companies implement personalized marketing strategies. For instance, users frequently interacting with decentralized exchanges (DEXs) or moving assets across blockchains can receive tailored marketing content aligned with their behavior.
  • Healthcare: Blockchain technology is being used to store medical records securely, enabling AI systems to develop personalized treatment plans. This approach allows healthcare professionals to offer customized recommendations for nutrition, medication, and therapies while safeguarding sensitive patient data from unauthorized access.
Enhancing Data Security 

Despite AI's transformative capabilities, data privacy has been a longstanding concern. Earlier AI tools, such as previous versions of ChatGPT, stored user data to refine models without clear consent, raising privacy issues. However, the industry is evolving with the introduction of privacy-centric tools like Sentinel and Scribe. These platforms employ advanced encryption to protect user data, ensuring that information remains secure—even from large technology companies like Google and Microsoft. 
 
The future holds immense potential for developers leveraging AI and blockchain technologies. These innovations not only enhance user experiences through personalized interactions but also address critical privacy challenges that have persisted within the tech industry. As AI and blockchain continue to evolve, industries such as marketing, healthcare, and beyond can expect more powerful tools that prioritize customization and data security. By embracing these technologies, businesses can create engaging, secure digital environments that meet users' growing demands for personalization and privacy.

Apple Faces Backlash Over Misinformation from Apple Intelligence Tool

 



Apple made headlines with the launch of its Apple Intelligence tool, which quickly gained global attention. However, the tech giant now faces mounting criticism after reports emerged that the AI feature has been generating false news notifications, raising concerns about misinformation.

The British Broadcasting Corporation (BBC) was the first to report the problem, directly complaining to Apple that the AI summaries were misrepresenting their journalism. Apple responded belatedly, clarifying that its staff are working to ensure users understand these summaries are AI-generated and not official news reports.

Alan Rusbridger, former editor of The Guardian, criticized Apple, suggesting the company should withdraw the product if it is not yet ready. He warned that Apple’s technology poses a significant risk of spreading misinformation globally, potentially causing unnecessary panic among readers.

Rusbridger further emphasized that public trust in journalism is already fragile and argued that major American tech companies like Apple should not use the media industry as a testing ground for experimental features.

Pressure from Journalist Organizations

The National Union of Journalists (NUJ), a leading global body representing journalists, joined the criticism, urging Apple to take swift action to curb the spread of misinformation. The NUJ's statement echoes previous concerns raised by Reporters Without Borders (RSF).

Laura Davison, NUJ’s general secretary, stressed the urgency of the matter, stating,

"At a time when access to accurate reporting has never been more important, the public must not be placed in a position of second-guessing the accuracy of news they receive."

Apple is now under increasing pressure from media organizations and watchdog groups to resolve the issue. If the company fails to address these concerns promptly, it may be forced to remove the Apple Intelligence feature altogether.

With legal and regulatory scrutiny intensifying, Apple’s next steps will be closely watched. Prolonging the issue could invite further criticism and potential legal consequences.

This situation highlights the growing responsibility of tech companies to prevent the spread of misinformation, especially when deploying advanced AI tools. Apple must act decisively to regain public trust and ensure its technologies do not compromise the integrity of reliable journalism.

OpenAI's O3 Achieves Breakthrough in Artificial General Intelligence

 



 
In recent times, the rapid development of artificial intelligence took a significant turn when OpenAI introduced its O3 model, a system demonstrating human-level performance on tests designed to measure “general intelligence.” This achievement has reignited discussions on artificial intelligence, with a focus on understanding what makes O3 unique and how it could shape the future of AI.

Performance on the ARC-AGI Test 
 
OpenAI's O3 model showcased its exceptional capabilities by matching the average human score on the ARC-AGI test. This test evaluates an AI system's ability to solve abstract grid problems with minimal examples, measuring how effectively it can generalize information and adapt to new scenarios. Key highlights include:
  • Test Outcomes: O3 not only matched human performance but set a new benchmark in Artificial General Intelligence (AGI) development.
  • Adaptability: The model demonstrated the ability to draw generalized rules from limited examples, a critical capability for AGI progress.
Breakthrough in Science Problem-Solving 
 
Beyond the ARC-AGI test, the O3 model excelled in solving complex scientific questions. It achieved an impressive score of 87.7% compared to the 70% score of PhD-level experts, underscoring its advanced reasoning abilities. 
 
While OpenAI has not disclosed the specifics of O3’s development, its performance suggests the use of simple yet effective heuristics similar to AlphaGo’s training process. By evaluating patterns and applying generalized thought processes, O3 efficiently solves complex problems, redefining AI capabilities. One example of such a rule illustrates its approach:

“Any shape containing a salient line will be moved to the end of that line and will cover all the overlapping shapes in its new position.”
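
To give a concrete sense of what inducing a rule from a handful of grid examples looks like, here is a toy Python sketch, emphatically not OpenAI's method: a candidate transformation is accepted only if it reproduces every training pair, and is then applied to a new input.

```python
# Toy ARC-style task: grids are lists of lists of integers (colours).
train_examples = [
    ([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
    ([[0, 0], [2, 0]], [[0, 0], [0, 2]]),
]
test_input = [[0, 3], [0, 0]]

def mirror_horizontally(grid):
    """Candidate rule: flip each row left-to-right."""
    return [list(reversed(row)) for row in grid]

def rule_fits(rule, examples):
    """A rule is accepted only if it reproduces every training output."""
    return all(rule(inp) == out for inp, out in examples)

candidate_rules = [mirror_horizontally]
for rule in candidate_rules:
    if rule_fits(rule, train_examples):
        print(rule.__name__, "->", rule(test_input))  # mirror_horizontally -> [[3, 0], [0, 0]]
```

Real ARC-AGI tasks are far harder because the space of candidate rules is vast and the rules themselves are compositional, which is why strong performance with only a few examples is treated as evidence of generalization.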
 
O3 and O3 Mini models represent a significant leap in AI, combining unmatched performance with general learning capabilities. However, their potential brings challenges related to cost, security, and ethical adoption that must be addressed for responsible use. As technology advances into this new frontier, the focus must remain on harnessing AI advancements to facilitate progress and drive positive change. With O3, OpenAI has ushered in a new era of opportunity, redefining the boundaries of what is possible in artificial intelligence.

Cybersecurity in APAC: AI and Quantum Computing Bring New Challenges in 2025

 



Asia-Pacific (APAC) enters 2025 with serious cybersecurity concerns as new technologies such as artificial intelligence (AI) and quantum computing are now posing more complex threats. Businesses and governments in the region are under increased pressure to build stronger defenses against these rapidly evolving risks.

How AI is Changing Cyberattacks

AI is now a primary weapon for cybercriminals, enabling more complex attacks. One alarming example is deepfake technology: realistic but fabricated audio or video clips that can mislead people or organizations. Deepfakes were recently used in political disinformation campaigns during elections in countries such as India and Indonesia, and in Hong Kong cybercriminals used the technology to impersonate individuals and steal $25 million from a company.

Audio deepfakes, and voice-cloning scams in particular, are likely to be used far more often as the technology becomes cheaper and more widely available, meaning companies and individuals can be defrauded with fake voice recordings. As cybersecurity leader Simon Green describes it, the situation represents a "perfect storm" of AI-driven threats in APAC.

The Quantum Computing Threat

Even in its infancy, quantum computing threatens future data security. One of the most pressing concerns is a strategy called "harvest now, decrypt later": attackers harvest encrypted data today, planning to decrypt it later once quantum technology advances enough to break current encryption methods.
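
One common way to reason about this risk is Mosca's rule of thumb, a standard heuristic in the field that the article does not mention: if the number of years the data must remain confidential plus the years needed to migrate to quantum-safe encryption exceeds the years until a cryptographically relevant quantum computer arrives, data harvested today is already at risk. A minimal Python sketch with purely illustrative numbers:

```python
def harvest_now_decrypt_later_risk(shelf_life_years: float,
                                   migration_years: float,
                                   years_to_quantum: float) -> bool:
    """Mosca's inequality: risk exists when shelf life + migration time > time to a quantum attacker."""
    return shelf_life_years + migration_years > years_to_quantum

# Illustrative assumptions only: records kept confidential for 20 years, a 5-year migration,
# and a cryptographically relevant quantum computer assumed to be 15 years away.
if harvest_now_decrypt_later_risk(shelf_life_years=20, migration_years=5, years_to_quantum=15):
    print("Data encrypted today could be exposed - begin migrating to quantum-resistant encryption.")
```

Under those example assumptions the inequality holds, which is exactly the scenario that motivates migrating long-lived data to quantum-resistant encryption early.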

The APAC region is at the leading edge of quantum technology development: places like India and Singapore, and international giants like IBM and Microsoft, continue to invest heavily in the technology. Their progress is encouraging, but it also raises concern about keeping sensitive information safe, and experts are calling for quantum-resistant encryption to fend off future threats.

With more and more companies embracing AI-powered tools such as Microsoft Copilot, the emphasis on data security is becoming crucial. Companies are shifting to better data management and compliance with new regulations in order to integrate AI into their operations successfully. According to data expert Max McNamara, robust security measures are imperative to unlock the full potential of AI without compromising privacy or safety.

To address the intricate nature of contemporary cyberattacks, many cybersecurity experts recommend unified security platforms: integrated systems that combine various tools and approaches to detect threats and prevent further attacks while curtailing costs and minimizing inefficiencies.

The APAC region is now at a critical point for cybersecurity as attacks become more targeted and precise. By adopting advanced defenses and anticipating technological developments, businesses and governments can be better prepared for the challenges of 2025.




Dutch Authority Flags Concerns Over AI Standardization Delays

 


The Dutch privacy watchdog announced on Wednesday that it is concerned that software developers building artificial intelligence (AI) may be using personal data, and it has sent a letter to Microsoft-backed OpenAI to obtain more information. The Dutch Data Protection Authority (Dutch DPA) has also imposed a fine of 30.5 million euros on Clearview AI and ordered a penalty of up to 5 million euros if the company fails to comply. 

Clearview, an American company offering facial recognition services, built an illegal database of billions of photographs of faces, including those of Dutch people, and the Dutch DPA warns that using Clearview's services is also prohibited. Meanwhile, in light of the rapid growth of OpenAI's ChatGPT consumer app, governments, including those of the European Union, are considering how to regulate the technology. 

A senior official at the Dutch privacy watchdog, the Autoriteit Persoonsgegevens (AP), told Euronews that the development of artificial intelligence standards will need to move faster in light of the AI Act, the first comprehensive AI law in the world. The regulation aims to address health and safety risks, fundamental human rights, democracy, the rule of law, and environmental protection. 

Adopting artificial intelligence systems has strong potential to benefit society, contribute to economic growth, and enhance EU innovation, competitiveness, and global leadership. In some cases, however, the specific characteristics of certain AI systems may pose new risks to user safety, including physical safety and fundamental rights. 

Some of the most powerful AI models could even pose systemic risks if they are widely used. The resulting lack of trust creates legal uncertainty and may slow the adoption of AI technologies by businesses, citizens, and public authorities, while disparate regulatory responses from national governments could fragment the internal market. 

To address these challenges, legislative action was required to ensure that both the benefits and the risks of AI systems are adequately addressed and that the internal market functions well. "As for the standards, they are a way for companies to be reassured, and to demonstrate that they are complying with the regulations, but there is still a great deal of work to be done before they are available, and of course, time is running out," said Sven Stevenson, the agency's director of coordination and supervision for algorithms. 

The European Commission tasked CEN-CENELEC and ETSI in May last year with compiling the underlying standards for the industry, and that work is still ongoing. The AP, which also oversees the General Data Protection Regulation (GDPR), is likely to share responsibility for checking companies' compliance with the AI Act with other authorities, such as the Dutch regulator for digital infrastructure, the RDI. 

By August next year, all EU member states will have to designate their AI regulatory agency, and in most EU countries the national data protection authority appears to be the likely choice. In its capacity as a data regulator, the AP has already dealt with cases in which companies' artificial intelligence tools were found to be in breach of the GDPR. 

Clearview AI, the US facial recognition company, was fined €30.5 million in September for building an illegal database of photos and unique biometric codes linked to Europeans. The AI Act will complement the GDPR: whereas the GDPR focuses primarily on data processing, the AI Act concerns product safety, which will shape future cases. The Dutch government, meanwhile, is increasingly promoting the development and adoption of new technologies, including artificial intelligence. 

The deployment of such technologies can have a major impact on public values like privacy, equality before the law, and autonomy. This became painfully evident with the Dutch childcare benefits scandal, which was brought to public attention in September 2018: thousands of parents were falsely accused of fraud by the Dutch tax authorities because discriminatory self-learning algorithms were used in regulating the distribution of childcare benefits. 

Since the scandal stirred controversy in the Netherlands, supervision of new technologies, and of artificial intelligence in particular, has received greater emphasis, and the Netherlands now deliberately promotes a "human-centred approach" to AI. This approach means AI should be designed and used in a manner that respects human rights in its purpose, design, and use; it should reinforce public values and human rights rather than weaken or undermine them. 

In recent months, the Commission has established the so-called AI Pact, which provides workshops and joint commitments to help businesses prepare for the upcoming AI Act. At the national level, the AP has also been organizing pilot projects and sandboxes with the RDI and the Ministry of Economic Affairs so that companies can become familiar with the rules. 

The Dutch government has also published an algorithm register as of December 2022, a public record of the algorithms used by the government that is intended to ensure transparency and explain algorithmic outcomes; the administration wants these algorithms to be legally checked for discrimination and arbitrariness.

Understanding Ransomware: A Persistent Cyber Threat

 


Ransomware is a type of malicious software designed to block access to files until a ransom is paid. Over the past 35 years, it has evolved from simple attacks into a global billion-dollar industry. In 2023 alone, ransomware victims reportedly paid approximately $1 billion, primarily in cryptocurrency, underscoring the massive scale of the problem.

The First Recorded Ransomware Attack

The first known ransomware attack occurred in 1989. Joseph Popp, a biologist, distributed infected floppy disks under the guise of software analyzing susceptibility to AIDS. Once installed, the program encrypted file names and, after 90 uses, hid directories before displaying a ransom demand. Victims were instructed to send a cashier’s check to an address in Panama to unlock their files.

This incident, later dubbed the "AIDS Trojan," marked the dawn of ransomware attacks. At the time, the term "ransomware" was unknown, and cybersecurity communities were unprepared for such threats. Popp was eventually apprehended but deemed unfit for trial due to erratic behaviour.

Evolution of Ransomware

Ransomware has undergone significant changes since its inception:

  • 2004 – The Rise of GPCode: A new variant, "GPCode," used phishing emails to target individuals. Victims were lured by fraudulent job offers and tricked into downloading infected attachments. The malware encrypted their files, demanding payment via wire transfer.
  • 2013 – Cryptocurrency and Professional Operations: By the early 2010s, ransomware operations became more sophisticated. Cybercriminals began demanding cryptocurrency payments for anonymity and irreversibility. The "CryptoLocker" ransomware, infamous for its efficiency, marked the emergence of "ransomware-as-a-service," enabling less skilled attackers to launch widespread attacks.
  • 2017 – Global Disruptions: Major attacks like WannaCry and Petya caused widespread disruptions, affecting industries worldwide and highlighting the growing menace of ransomware.

The Future of Ransomware

Ransomware is expected to evolve further, with experts predicting its annual cost could reach $265 billion by 2031. Emerging technologies like artificial intelligence (AI) are likely to play a role in creating more sophisticated malware and delivering targeted attacks more effectively.

Despite advancements, simpler attacks remain highly effective. Cybersecurity experts emphasize the importance of vigilance and proactive defense strategies. Understanding ransomware’s history and anticipating future challenges are key to mitigating this persistent cyber threat.

Knowledge and preparedness remain the best defenses against ransomware. By staying informed and implementing robust security measures, individuals and organizations can better protect themselves from this evolving menace.

Here's How Google Willow Chip Will Impact Startup Innovation in 2025

 

As technology advances at an unprecedented rate, the recent unveiling of Willow, Google's quantum computing chip, ushers in a new age for startups. Willow's computing capabilities—105 qubits, roughly double those of its predecessor, Sycamore—allow it to complete certain jobs unimaginably faster than today's most powerful supercomputers. This milestone is set to significantly impact numerous sectors, presenting startups with a rare opportunity to innovate and tackle complex problems. 

Among the Willow chip's major implications is its ability to tackle complex problems that earlier technologies could not handle. Quantum computing can be used by startups in industries like logistics and pharmaceuticals to speed up simulations and streamline procedures. Willow's computational power, for example, can be utilised by a drug-discovery startup to simulate detailed chemical interactions, significantly cutting down on the time and expense required to develop new therapies. 

The combination of quantum computing and artificial intelligence has the potential to lead to ground-breaking developments in AI model capabilities. Startups developing AI-driven solutions can employ quantum algorithms to manage huge data sets more efficiently. This might lead to speedier model training durations and enhanced prediction skills in a variety of applications, including personalised healthcare, where quantum-enhanced machine learning tools can analyse patient data for real-time insights and tailored treatments. 

Cybersecurity challenges 

The powers of Willow offer many benefits, but they also bring with them significant challenges, especially in the area of cybersecurity. The security of existing encryption techniques is called into question by the processing power of quantum devices, as they may be vulnerable to compromise. Startups that create quantum-resistant security protocols will be critical in addressing this growing demand, establishing themselves in a booming niche market.

Access and collaboration

Google’s advancements with the Willow chip might also democratize access to quantum computing. Startups may soon benefit from cloud-based quantum computing resources, eliminating the substantial capital investment required for hardware acquisition. This model could encourage collaborative ecosystems between startups, established tech firms, and academic institutions, fostering knowledge-sharing and accelerating innovation.

The Future of Artificial Intelligence: Progress and Challenges



Artificial intelligence (AI) is rapidly transforming the world, and by 2025, its growth is set to reach new heights. While the advancements in AI promise to reshape industries and improve daily lives, they also bring a series of challenges that need careful navigation. From enhancing workplace productivity to revolutionizing robotics, AI's journey forward is as complex as it is exciting.

In recent years, AI has evolved from basic applications like chatbots to sophisticated systems capable of assisting with diverse tasks such as drafting emails or powering robots for household chores. Companies like OpenAI and Google’s DeepMind are at the forefront of creating AI systems with the potential to match human intelligence. Despite these achievements, the path forward isn’t without obstacles.

One major challenge in AI development lies in the diminishing returns from scaling up AI models. Previously, increasing the size of AI models drove progress, but developers are now focusing on maximizing computing power to tackle complex problems. While this approach enhances AI's capabilities, it also raises costs, limiting accessibility for many users. Additionally, training data has become a bottleneck. Many of the most valuable datasets have already been utilized, leading companies to rely on AI-generated data. This practice risks introducing biases into systems, potentially resulting in inaccurate or unfair outcomes. Addressing these issues is critical to ensuring that AI remains effective and equitable.

The integration of AI into robotics is another area of rapid advancement. Robots like Tesla’s Optimus, which can perform household chores, and Amazon’s warehouse automation systems showcase the potential of AI-powered robotics. However, making such technologies affordable and adaptable remains a significant hurdle. AI is also transforming workplaces by automating repetitive tasks like email management and scheduling. While these tools promise increased efficiency, businesses must invest in training employees to use them effectively.

Regulation plays a crucial role in guiding AI’s development. Countries like those in Europe and Australia are already implementing laws to ensure the safe and ethical use of AI, particularly to mitigate its risks. Establishing global standards for AI regulation is essential to prevent misuse and steer its growth responsibly.

Looking ahead, AI is poised to continue its evolution, offering immense potential to enhance productivity, drive innovation, and create opportunities across industries. While challenges such as rising costs, data limitations, and the need for ethical oversight persist, addressing these issues thoughtfully will pave the way for AI to benefit society responsibly and sustainably.

Fortinet Acquires Perception Point to Enhance AI-Driven Cybersecurity

 


Fortinet, a global leader in cybersecurity with a market valuation of approximately $75 billion, has acquired Israeli company Perception Point to bolster its email and collaboration security capabilities. While the financial terms of the deal remain undisclosed, this acquisition is set to expand Fortinet's AI-driven cybersecurity solutions.

Expanding Protections for Modern Workspaces

Perception Point's advanced technology secures vital business tools such as email platforms like Microsoft Outlook and Slack, as well as cloud storage services. It also extends protection to web browsers and social media platforms, recognizing their increasing vulnerability to cyberattacks.

With businesses shifting to hybrid and cloud-first strategies, the need for robust protection across these platforms has grown significantly. Fortinet has integrated Perception Point's technology into its Security Fabric platform, enhancing protection against sophisticated cyber threats while simplifying security management for organizations.

About Perception Point

Founded in 2015 by Michael Aminov and Shlomi Levin, alumni of Israel’s Intelligence Corps technology unit, Perception Point has become a recognized leader in cybersecurity innovation. The company is currently led by Yoram Salinger, a veteran tech executive and former CEO of RedBand. Over the years, Perception Point has secured $74 million in funding from major investors, including Nokia Growth Partners, Pitango, and SOMV.

The company's expertise extends to browser-based security, which was highlighted by its acquisition of Hysolate. This strategic move demonstrates Perception Point's commitment to innovation and growth in the cybersecurity landscape.

Fortinet's Continued Investment in Israeli Cybersecurity

Fortinet’s acquisition of Perception Point follows its 2019 purchase of Israeli company EnSilo, which specializes in threat detection. These investments underscore Fortinet’s recognition of Israel as a global hub for cutting-edge cybersecurity technologies and innovation.

Addressing the Rise in Cyberattacks

As cyber threats become increasingly sophisticated, companies like Fortinet are proactively strengthening digital security measures. Perception Point’s AI-powered solutions will enable Fortinet to address emerging risks targeting email systems and collaboration tools, ensuring that modern businesses can operate securely in today’s digital-first environment.

Conclusion

Fortinet’s acquisition of Perception Point represents a significant step in its mission to provide comprehensive cybersecurity solutions. By integrating advanced AI technologies, Fortinet is poised to deliver enhanced protection for modern workspaces, meeting the growing demand for secure, seamless operations across industries.

Can Data Embassies Make AI Safer Across Borders?

 


The rapid growth of AI has introduced a significant challenge for data-management organizations: the inconsistent nature of data privacy laws across borders. Businesses face complexities when deploying AI internationally, prompting them to explore innovative solutions. Among these, the concept of data embassies has emerged as a prominent approach. 
 

What Are Data Embassies? 


A data embassy is a data center physically located within the borders of one country but governed by the legal framework of another jurisdiction, much like traditional embassies. This arrangement allows organizations to protect their data from local jurisdictional risks, including potential access by host country governments. 
 
According to a report by the Asian Business Law Institute and Singapore Academy of Law, data embassies address critical concerns related to cross-border data transfers. When organizations transfer data internationally, they often lose control over how it may be accessed under local laws. For businesses handling sensitive information, this loss of control is a significant deterrent. 
 

How Do Data Embassies Work? 

 
Data embassies offer a solution by allowing the host country to agree that the data center will operate under the legal framework of another nation (the guest state). This provides businesses with greater confidence in data security while enabling host countries to benefit from economic and technological advancements. Countries like Estonia and Bahrain have already adopted this model, while nations such as India and Malaysia are considering its implementation. 
 

Why Data Embassies Matter  

 
The global competition to become technology hubs has intensified. Businesses, however, require assurances about the safety and protection of their data. Data embassies provide these guarantees by enabling cloud service providers and customers to agree on a legal framework that bypasses restrictive local laws. 
 
For example, in a data embassy, host country authorities cannot access or seize data without breaching international agreements. This reassurance fosters trust between businesses and host nations, encouraging investment and collaboration.


Challenges in AI Development  
 
Global AI development faces additional legal hurdles due to inconsistencies in jurisdictional laws. Key questions, such as ownership of AI-generated outputs, remain unanswered in many regions. For instance, does ownership lie with the creator of the AI model, the user, or the deploying organization? These ambiguities create significant barriers for businesses leveraging AI across borders. 
 

Experts suggest two potential solutions:  

 
1. Restricting AI operations to a single jurisdiction. 
2. Establishing international agreements to harmonize AI laws, similar to global copyright frameworks.


The Future of AI and Data Privacy 
 
Combining data embassies with efforts to harmonize global AI regulations could mitigate legal barriers, enhance data security, and ensure responsible AI innovation. As countries and businesses collaborate to navigate these challenges, data embassies may play a pivotal role in shaping the future of cross-border data management.

Are You Using AI in Marketing? Here's How to Do It Responsibly

 


Artificial Intelligence (AI) has emerged as a transformative force, reshaping industries and delivering unprecedented value to businesses worldwide. From automating mundane tasks to offering predictive insights, AI has catalyzed innovation on a massive scale. However, its rapid adoption raises significant concerns about privacy, data ethics, and transparency, prompting urgent discussions on regulation. The need for robust frameworks has grown even more critical as AI technologies become deeply entrenched in everyday operations.

Data Use and the Push for Regulation

During the early development stages of AI, major tech players such as Meta and OpenAI often used public and private datasets without clear guidelines in place. This unregulated experimentation highlighted glaring gaps in data ethics, leading to calls for significant regulatory oversight. The absence of structured frameworks not only undermined public trust but also raised legal and ethical questions about the use of sensitive information.

Today, the regulatory landscape is evolving to address these issues. Europe has taken a pioneering role with the EU AI Act, which came into effect on August 1, 2024. This legislation classifies AI applications based on their level of risk and enforces stricter controls on higher-risk systems to ensure public safety and confidence. By categorizing AI into levels such as minimal, limited, and high risk, the Act provides a comprehensive framework for accountability. On the other hand, the United States is still in the early stages of federal discussions, though states like California and Colorado have enacted targeted laws emphasizing transparency and user privacy in AI applications.
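
To make the tiered approach more tangible, the short sketch below shows how a team might keep a lightweight internal inventory that tags AI use cases with a risk tier and a follow-up action. It is purely illustrative: the category names, tier assignments, and obligations are simplified assumptions for the example, not legal guidance or an official mapping from the EU AI Act.

```python
# Illustrative only: a simplified internal inventory mapping AI use cases to
# risk tiers loosely inspired by the EU AI Act's categories. The categories,
# tiers, and obligations below are assumptions for demonstration purposes.
from dataclasses import dataclass

# Hypothetical mapping from use-case category to a risk tier.
RISK_TIERS = {
    "spam_filtering": "minimal",
    "marketing_chatbot": "limited",   # limited risk: disclose AI use to users
    "credit_scoring": "high",         # high risk: stricter controls apply
}

# Hypothetical follow-up action per tier.
ACTIONS = {
    "minimal": "No special obligations; document the system.",
    "limited": "Disclose AI use to users; keep transparency notes.",
    "high": "Run a risk/conformity assessment before deployment.",
}

@dataclass
class AIUseCase:
    name: str
    category: str

def triage(use_case: AIUseCase) -> str:
    """Return the review action for a use case, escalating unknown categories."""
    tier = RISK_TIERS.get(use_case.category, "unclassified")
    action = ACTIONS.get(tier, "Escalate for manual legal review.")
    return f"{use_case.name}: tier={tier} -> {action}"

if __name__ == "__main__":
    for uc in [AIUseCase("Campaign chatbot", "marketing_chatbot"),
               AIUseCase("Loan pre-screening", "credit_scoring")]:
        print(triage(uc))
```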

Why Marketing Teams Should Stay Vigilant

AI’s impact on marketing is undeniable, with tools revolutionizing how teams create content, interact with customers, and analyze data. According to a survey, 93% of marketers using AI rely on it to accelerate content creation, optimize campaigns, and deliver personalized experiences. However, this reliance comes with challenges such as intellectual property infringement, algorithmic biases, and ethical dilemmas surrounding AI-generated material.

As regulatory frameworks mature, marketing professionals must align their practices with emerging compliance standards. Proactively adopting ethical AI usage not only mitigates risks but also prepares businesses for stricter regulations. Ethical practices can safeguard brand reputation, ensuring that marketing teams remain compliant and trusted by their audiences.

Best Practices for Responsible AI Use

  1. Maintain Human Oversight
    While AI can streamline workflows, it should not remove humans from the loop. Marketing teams must rigorously review AI-generated content to ensure originality, eliminate biases, and avoid plagiarism. This approach not only improves content quality but also aligns with ethical standards.
  2. Promote Transparency
    Transparency builds trust. Businesses should be open about their use of AI, particularly when collecting data or making automated decisions. Clear communication about AI processes fosters customer confidence and adheres to evolving legal requirements focused on explainability.
  3. Implement Ethical Data Practices
    Ensure that all data used for AI training complies with privacy laws and ethical guidelines. Avoid using data without proper consent and regularly audit datasets to prevent misuse or biases; a minimal audit sketch follows this list.
  4. Educate Teams
    Equip employees with knowledge about AI technologies and the implications of their use. Training programs can help teams stay informed about regulatory changes and ethical considerations, promoting responsible practices across the organization.
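
As one way to make the dataset-auditing point concrete, the following sketch flags training records that lack documented consent or appear to contain personal data. It is a minimal illustration under assumed conventions: the `consent` field, the record layout, and the deliberately simple e-mail pattern are made up for the example and would need to be adapted to a real data pipeline.

```python
# Minimal illustration of a pre-training dataset audit: flag records without
# consent and records that appear to contain personal data. The field names
# and the e-mail pattern are assumptions for the example, not a standard schema.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def audit_records(records):
    """Split records into (ok, flagged); 'flagged' records need human review."""
    ok, flagged = [], []
    for rec in records:
        reasons = []
        if not rec.get("consent", False):
            reasons.append("no documented consent")
        if EMAIL_PATTERN.search(rec.get("text", "")):
            reasons.append("possible personal data (e-mail address)")
        (flagged if reasons else ok).append({**rec, "reasons": reasons})
    return ok, flagged

if __name__ == "__main__":
    sample = [
        {"text": "Loved the product, will buy again!", "consent": True},
        {"text": "Contact me at jane.doe@example.com", "consent": False},
    ]
    ok, flagged = audit_records(sample)
    print(f"{len(ok)} records usable, {len(flagged)} need review")
```

A periodic run of a check like this, with flagged records routed to a human reviewer, keeps the audit step from depending on memory or goodwill alone.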

Preparing for the Future

AI regulation is not just a passing concern but a critical element in shaping its responsible use. By embracing transparency, accountability, and secure data practices, businesses can stay ahead of legal changes while fostering trust with customers and stakeholders. Adopting ethical AI practices ensures that organizations are future-proof, resilient, and prepared to navigate the complexities of the evolving regulatory landscape.

As AI continues to advance, the onus is on businesses to balance innovation with responsibility. Marketing teams, in particular, have an opportunity to demonstrate leadership by integrating AI in ways that enhance customer relationships while upholding ethical and legal standards. By doing so, organizations can not only thrive in an AI-driven world but also set an example for others to follow.

PlayStation Boss: AI Can Transform Gaming but Won't Replace Human Creativity

 


According to PlayStation's leadership, artificial intelligence (AI) may transform gaming, but it will never supplant the human creativity behind game development. Hermen Hulst, co-CEO of PlayStation, stated that AI will complement, rather than replace, the "human touch" that makes games unique.

AI and Human Creativity

Hulst shared this view as Sony marked the 30th anniversary of the original PlayStation. Addressing the growing role of AI, he acknowledged that it can handle repetitive tasks in game development. However, he reassured fans and creators that human-crafted experiences will always have a place in the market alongside AI-driven innovations. “Striking the right balance between leveraging AI and preserving the human touch is key, indeed,” he said.

Challenges and Successes in 2024

Sony’s year has been marked by both highs and lows. While the PlayStation 5 continues to perform well, the company faced setbacks, including sweeping job cuts across the gaming industry and the failed launch of the highly anticipated game Concord. The launch failure led to refunds for players, and the studio behind the game was shut down.

On the hardware side, Sony’s new model, the PlayStation 5 Pro, was heavily criticized for its steep £699.99 price point. However, the company had a major success with the surprise hit Astro Bot, which has received numerous Game of the Year nominations.

New Developments and Expanding Frontiers

Sony is also adapting to changes in how people play games. Its handheld device, the PlayStation Portal, is a controller/screen combination that lets users stream games from their PS5. Recently, Sony launched a beta program that enables cloud streaming directly onto the Portal, marking a shift toward more flexibility in gaming.

In addition to gaming, Sony aims to expand its influence in the entertainment industry by adapting games into films and series. Successful examples include The Last of Us and Uncharted, both based on PlayStation games. Hulst hopes to further elevate PlayStation’s intellectual properties through future projects like God of War, which is being developed as an Amazon Prime series.

Reflecting on 30 Years of PlayStation

Launched in December 1994, the PlayStation console has become a cultural phenomenon, with each of the four consoles that preceded the PlayStation 5 ranking among the best-selling gaming systems in history. Hulst and his co-CEO Hideaki Nishino, who grew up gaming in different ways, both credit their early experiences with shaping their passion for the industry.

As PlayStation looks toward the future, it aims to maintain a delicate balance between innovation and tradition, ensuring that gaming endures as a creative, immersive medium.

Printer Problems? Don’t Fall for These Dangerous Scams

 


Fixing printer problems is a pain, from paper jams to software bugs. When searching for quick answers, most users rely on search engines or AI solutions to assist them. Unfortunately, this opens the door to scammers targeting unsuspecting people through false ads and other fraudulent sites.

Phony Ads Pretend to be Official Printer Support

When you search online for printer troubleshooting help, especially for big-name brands like HP and Canon, sponsored ads often appear above the search results. Even though they look legitimate, many are placed by fraudsters posing as official support.

Clicking on these ads can lead users to websites that mimic official brand pages, complete with logos and professional layouts. These sites promise to resolve printer issues but instead push fake software downloads designed to fail.

How the Driver Scam Works

A printer driver is a program that allows your computer to communicate with your printer. Most modern systems install these drivers automatically, but users who are unaware of this can be tricked into downloading them from fraudulent sources.

On fraudulent websites, users are asked to enter their printer model to download the "necessary" driver. The installation process shown is fake, typically a pre-recorded animation, and it inevitably fails, prompting frustrated users to call a supposed tech support number for further help.

What are the Risks Involved?

Once the victim contacts the fake support team, scammers usually ask for remote access to the victim's computer to fix the problem. This can lead to:

  • Data theft: Scammers may extract sensitive personal information.
  • Device lockdown: Fraudsters can lock your computer and demand payment for access.
  • Financial loss: They may use your device to log into bank accounts or steal payment details.

These scams not only lead to financial loss but also compromise personal security.

How to Stay Safe

To keep yourself safe, follow these tips:

  1. Do not click on ads when searching for printer help. Instead, look for official websites in the organic search results.
  2. Use reliable security software, such as Malwarebytes Browser Guard, to block rogue ads and scam websites.
  3. Look for legitimate support resources, like official support pages, online forums, or tech-savvy friends or family members.

By staying vigilant and cautious, you can troubleshoot your printer issues without falling victim to these scams. Stay informed and double-check the legitimacy of any support resource before relying on it.

Meet Chameleon: An AI-Powered Privacy Solution for Face Recognition

 


An artificial intelligence (AI) system developed by a team of researchers can safeguard users from unauthorized facial scanning by malicious actors. The AI model, dubbed Chameleon, applies a specially generated mask that conceals faces in images from facial recognition systems while maintaining the visual quality of the protected image.

Furthermore, the researchers state that the model is resource-optimized, meaning it can be used even with low computing power. While the Chameleon AI model has not been made public yet, the team has claimed they intend to release it very soon.

Researchers at the Georgia Institute of Technology (Georgia Tech) described the AI model in a paper posted on the preprint server arXiv. The tool can add an invisible mask to faces in an image, making them unrecognizable to facial recognition algorithms. This allows users to protect their identities from criminal actors and AI data-scraping bots attempting to scan their faces.

“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” stated Ling Liu, professor of data and intelligence-powered computing at Georgia Tech's School of Computer Science and the lead author of the research paper.

Chameleon employs a unique masking approach known as Customized Privacy Protection (P-3) Mask. Once the mask is applied, the photos cannot be recognized by facial recognition software since the scans depict them "as being someone else."

While face-masking technologies have been available previously, the Chameleon AI model innovates in two key areas:

  1. Resource Optimization:
    Instead of creating individual masks for each photo, the tool develops one mask per user based on a few user-submitted facial images. This approach significantly reduces the computing power required to generate the undetectable mask.
  2. Image Quality Preservation:
    Preserving the image quality of protected photos proved challenging. To address this, the researchers employed Chameleon's Perceptibility Optimization technique. This technique allows the mask to be rendered automatically, without requiring any manual input or parameter adjustments, ensuring the image quality remains intact.

The researchers announced their plans to make Chameleon's code publicly available on GitHub soon, calling it a significant breakthrough in privacy protection. Once released, developers will be able to integrate the open-source AI model into various applications.
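
Until that release, the underlying idea can be sketched at a high level. The example below is not Chameleon's code: it uses a toy stand-in "embedding" function and a simple gradient step purely to illustrate the general concept of optimizing one shared, budget-limited mask across a few photos of the same user, so that a matcher's embeddings shift while the perturbation stays small.

```python
# Conceptual sketch only (NOT the Chameleon implementation): learn one shared
# mask for a user's photos that pushes a toy "embedding" away from the original
# while keeping the perturbation within a small perceptibility budget.
import numpy as np

rng = np.random.default_rng(0)

IMG_SHAPE = (32, 32)                                   # toy image size
PROJECTION = rng.normal(size=(16, IMG_SHAPE[0] * IMG_SHAPE[1]))

def embed(image):
    """Stand-in 'face embedding': a fixed random linear projection, not a real model."""
    return PROJECTION @ image.reshape(-1)

def optimize_user_mask(photos, steps=200, lr=0.05, budget=0.05):
    """Learn ONE mask shared by all of a user's photos.

    The objective pushes each masked photo's embedding away from its original
    embedding (so a matcher no longer recognizes the face), while clipping the
    mask to a small budget so the picture still looks normal.
    """
    mask = np.zeros(IMG_SHAPE)
    originals = [embed(p) for p in photos]
    for _ in range(steps):
        grad = np.zeros(IMG_SHAPE)
        for photo, ref in zip(photos, originals):
            diff = embed(photo + mask) - ref
            # Gradient of ||diff||^2 w.r.t. the mask (analytic because embed is linear here)
            grad += (PROJECTION.T @ diff).reshape(IMG_SHAPE)
        mask += lr * grad / len(photos)        # ascend: increase embedding distance
        mask = np.clip(mask, -budget, budget)  # keep the perturbation subtle
    return mask

# A few user-submitted photos (random toy data here); one shared mask protects them all.
photos = [rng.random(IMG_SHAPE) for _ in range(3)]
user_mask = optimize_user_mask(photos)
protected = [np.clip(p + user_mask, 0.0, 1.0) for p in photos]
```

Reusing a single optimized mask across a user's photos, rather than solving for a new mask per image, is what makes the per-user approach described above cheaper to run on modest hardware.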