European authorities are raising concerns about DeepSeek, a fast-growing Chinese artificial intelligence (AI) company, over its data practices. Regulators in Italy, Ireland, Belgium, the Netherlands, and France are examining the firm's data collection methods to determine whether they comply with the EU's General Data Protection Regulation (GDPR) and whether personal data is being transferred unlawfully to China.
In response, the Italian authority has temporarily blocked access to the DeepSeek R1 chatbot while it investigates what data the company collects, how that data is used, and what role it played in training the AI model.
What Type of Data Does DeepSeek Actually Collect?
DeepSeek collects three main forms of information from the user:
1. Personal data such as names and emails.
2. Device-related data, including IP addresses.
3. Data from third parties, such as Apple or Google logins.
The app may also track whether a user is active elsewhere on the device, ostensibly for "Community Security." Unlike many companies that set concrete timelines or limits on data retention, DeepSeek states that it may retain data indefinitely. That data may also be shared with third parties, including advertisers, analytics firms, governments, and copyright holders.
While other AI companies, such as OpenAI (ChatGPT) and Anthropic (Claude), have faced similar privacy scrutiny, experts note that DeepSeek does not expressly grant users the rights to deletion or to restriction of processing that the GDPR mandates.
Where the Collected Data Goes
One of the main concerns about DeepSeek is that it stores user data in China. The company claims to have security measures in place and to observe local laws governing data transfers, but from a legal perspective it has presented no valid basis for storing European users' data outside the EU.
According to the EDPB, privacy laws in China place greater importance on the stability of the community than on individual privacy, permitting broad access to personal data for purposes such as national security or criminal investigations. It remains unclear whether foreign users' data is treated any differently from that of Chinese citizens.
Cybersecurity and Privacy Threats
Cybercrime indices for 2024 rank China among the countries most exposed to cyberattacks. Cisco's latest report found that DeepSeek's AI model lacks strong defenses against hacking attempts: other AI models can block at least some "jailbreak" attacks, but DeepSeek proved completely vulnerable to them, making it far easier to manipulate.
Should Users Worry?
According to experts, users should exercise caution when using DeepSeek and avoid sharing highly sensitive personal details. The company's unclear data protection policies, its storage of data in China, and its relatively weak security defenses pose significant privacy risks that warrant such caution.
As the investigations continue, European regulators will determine whether DeepSeek may keep operating in the EU. Until then, users should weigh the risk of exposure before interacting with the platform.
The Ministry of Finance, under Nirmala Sitharaman’s leadership, has issued a directive prohibiting employees from using artificial intelligence (AI) tools such as ChatGPT and DeepSeek for official work. The decision comes over concerns about data security as these AI-powered platforms process and store information externally, potentially putting confidential government data at risk.
Why Has the Finance Ministry Banned AI Tools?
AI chatbots and virtual assistants have gained popularity for their ability to generate text, answer questions, and assist with tasks. However, since these tools rely on cloud-based processing, there is a risk that sensitive government information could be exposed or accessed by unauthorized parties.
The ministry’s concern is that official documents, financial records, and policy decisions could unintentionally be shared with external AI systems, making them vulnerable to cyber threats or misuse. By restricting their use, the government aims to safeguard national data and prevent potential security breaches.
Public Reactions and Social Media Buzz
The announcement quickly sparked discussions online, with many users sharing humorous takes on the decision. Some questioned how government employees would manage their workload without AI assistance, while others speculated whether Indian AI tools like Ola Krutrim might be an approved alternative.
A few of the popular reactions included:
1. "How will they complete work on time now?"
2. "So, only Ola Krutrim is allowed?"
3. "The Finance Ministry is switching back to traditional methods."
4. "India should develop its own AI instead of relying on foreign tools."
India’s Position in the Global AI Race
With AI development accelerating worldwide, several countries are striving to build their own advanced models. China’s DeepSeek has emerged as a major competitor to OpenAI’s ChatGPT and Google’s Gemini, increasing the competition in the field.
The U.S. has imposed trade restrictions on Chinese AI technology, leading to growing tensions in the tech industry. Meanwhile, India has yet to launch an AI model capable of competing globally, but the government’s interest in regulating AI suggests that future developments could be on the horizon.
While the Finance Ministry’s move prioritizes data security, it also raises questions about efficiency. AI tools help streamline work processes, and their restriction could lead to slower operations in certain departments.
Experts suggest that India should focus on developing AI models that are secure and optimized for government use, ensuring that innovation continues without compromising confidential information.
For now, the Finance Ministry’s stance reinforces the need for careful regulation of AI technologies, ensuring that security remains a top priority in government operations.
Italy’s data protection authority, Garante, has ordered Chinese AI chatbot DeepSeek to halt its operations in the country. The decision comes after the company failed to provide clear answers about how it collects and handles user data. Authorities fear that the chatbot’s data practices could pose security risks, leading to its removal from Italian app stores.
Why Did Italy Ban DeepSeek?
The main reason behind the ban is DeepSeek’s lack of transparency regarding its data collection policies. Italian regulators reached out to the company with concerns over whether it was handling user information in a way that aligns with European privacy laws. However, DeepSeek’s response was deemed “totally insufficient,” raising even more doubts about its operations.
Garante stated that DeepSeek denied having a presence in Italy and claimed that European regulations did not apply to it. Despite this, authorities believe that the company’s AI assistant has been accessible to Italian users, making it subject to the region’s data protection rules. To address these concerns, Italy has launched an official investigation into DeepSeek’s activities.
Growing Concerns Over AI and Data Security
DeepSeek is an advanced AI chatbot developed by a Chinese startup, positioned as a competitor to OpenAI’s ChatGPT and Google’s Gemini. With over 10 million downloads worldwide, it is considered a strong contender in the AI market. However, its expansion into Western countries has sparked concerns about how user data might be used.
Italy is not the only country scrutinizing DeepSeek’s data practices. Authorities in France, South Korea, and Ireland have also launched investigations, highlighting global concerns about AI-driven data collection. Many governments fear that personal data gathered by AI chatbots could be misused for surveillance or other security threats.
This is not the first time Italy has taken action against an AI company. In 2023, Garante temporarily blocked OpenAI’s ChatGPT over privacy issues. OpenAI was later fined €15 million after being accused of using personal data to train its AI without proper consent.
Impact on the AI and Tech Industry
The crackdown on DeepSeek comes at a time when AI technology is shaping global markets. Just this week, concerns over China’s growing influence in AI led to a significant drop in the U.S. stock market. The NASDAQ 100 index lost $1 trillion in value, with AI chipmaker Nvidia alone suffering a loss of nearly $600 billion.
While DeepSeek has been removed from Italian app stores, users who downloaded it before the ban can still access the chatbot. Additionally, its web-based version remains functional, raising questions about how regulators will enforce the restriction effectively.
As AI continues to advance, countries are becoming more cautious about companies that fail to meet privacy and security standards. With multiple nations now investigating DeepSeek, its future in Western markets remains uncertain.
January 27 marked a pivotal day for the artificial intelligence (AI) industry, with two major developments reshaping its future. First, Nvidia, the global leader in AI chips, suffered a historic loss of $589 billion in market value in a single day—the largest one-day loss ever recorded by a company. Second, DeepSeek, a Chinese AI developer, surged to the top of Apple’s App Store, surpassing ChatGPT. What makes DeepSeek’s success remarkable is not just its rapid rise but its ability to achieve high-performance AI with significantly fewer resources, challenging the industry’s reliance on expensive infrastructure.
Unlike many AI companies that rely on costly, high-performance chips from Nvidia, DeepSeek has developed a powerful AI model using far fewer resources. This unexpected efficiency disrupts the long-held belief that AI breakthroughs require billions of dollars in investment and vast computing power. While companies like OpenAI and Anthropic have focused on expensive computing infrastructure, DeepSeek has proven that AI models can be both cost-effective and highly capable.
DeepSeek’s AI models perform at a level comparable to some of the most advanced Western systems, yet they require significantly less computational power. This approach could democratize AI development, enabling smaller companies, universities, and independent researchers to innovate without needing massive financial backing. If widely adopted, it could reduce the dominance of a few tech giants and foster a more inclusive AI ecosystem.
DeepSeek’s success could prompt a strategic shift in the AI industry. Some companies may emulate its focus on efficiency, while others may continue investing in resource-intensive models. Additionally, DeepSeek’s open-source nature adds an intriguing dimension to its impact. Unlike OpenAI, which keeps its models proprietary, DeepSeek allows its AI to be downloaded and modified by researchers and developers worldwide. This openness could accelerate AI advancements but also raises concerns about potential misuse, as open-source AI can be repurposed for unethical applications.
Another significant benefit of DeepSeek’s approach is its potential to reduce the environmental impact of AI development. Training AI models typically consumes vast amounts of energy, often through large data centers. DeepSeek’s efficiency makes AI development more sustainable by lowering energy consumption and resource usage.
However, DeepSeek’s rise also brings challenges. As a Chinese company, it faces scrutiny over data privacy, security, and censorship. Like other AI developers, DeepSeek must navigate issues related to copyright and the ethical use of data. While its approach is innovative, it still grapples with industry-wide challenges that have plagued AI development in the past.
DeepSeek’s emergence signals the start of a new era in the AI industry. Rather than a few dominant players controlling AI development, we could see a more competitive market with diverse solutions tailored to specific needs. This shift could benefit consumers and businesses alike, as increased competition often leads to better technology at lower prices.
However, it remains unclear whether other AI companies will adopt DeepSeek’s model or continue relying on resource-intensive strategies. Regardless, DeepSeek has already challenged conventional thinking about AI development, proving that innovation isn’t always about spending more—it’s about working smarter.
DeepSeek’s rapid rise and innovative approach have disrupted the AI industry, challenging the status quo and opening new possibilities for AI development. By demonstrating that high-performance AI can be achieved with fewer resources, DeepSeek has paved the way for a more inclusive and sustainable future. As the industry evolves, its impact will likely inspire further innovation, fostering a competitive landscape that benefits everyone.
In an era where technology permeates every aspect of our lives, education is undergoing a transformative shift. Imagine a classroom where each student’s learning experience is tailored to their unique needs, interests, and pace. This is no longer a distant dream but a rapidly emerging reality, thanks to the revolutionary impact of artificial intelligence (AI). Personalized learning, once a buzzword, has become a game-changer, with AI at the forefront of this transformation. In this blog, we’ll explore how AI is driving the personalized learning revolution, its benefits and challenges, and what the future holds for this exciting frontier in education.
Personalized learning is an educational approach that tailors teaching and learning experiences to meet the unique needs, strengths, and interests of each student. Unlike traditional one-size-fits-all methods, personalized learning aims to provide a customized educational experience that accommodates diverse learning styles, paces, and preferences. The goal is to enhance student engagement and achievement by addressing individual characteristics such as academic abilities, prior knowledge, and personal interests.
AI is playing a pivotal role in making personalized learning a reality, powering systems that adapt content, pacing, and feedback to each student. The integration of AI into personalized learning offers numerous advantages, including greater student engagement, improved learning outcomes, and more efficient use of teaching resources. At the same time, it presents several challenges, among them data privacy, equitable access, and the training teachers need to use these tools well.
AI is revolutionizing education by enabling personalized learning experiences that cater to each student’s unique needs and pace. By enhancing engagement, improving outcomes, and optimizing resource use, AI is shaping the future of education. However, as we embrace these advancements, it is essential to address challenges such as data privacy, equitable access, and teacher training. With the right approach, AI-powered personalized learning has the potential to transform education and unlock new opportunities for students worldwide.
The financial sector is facing a sharp increase in cyber threats, with investment firms, such as asset managers, hedge funds, and private equity firms, becoming prime targets for ransomware, AI-driven attacks, and data breaches. These firms rely heavily on uninterrupted access to trading platforms and sensitive financial data, making cyber resilience essential to prevent operational disruptions and reputational damage. A successful cyberattack can lead to severe financial losses and a decline in investor confidence, underscoring the importance of robust cybersecurity measures.
As regulatory requirements tighten, investment firms must stay ahead of evolving cyber risks. In the UK, the upcoming Cyber Resilience and Security Bill, set to be introduced in 2025, will impose stricter cybersecurity obligations on financial institutions. Additionally, while the European Union’s Digital Operational Resilience Act (DORA) is not directly applicable to UK firms, it will impact those operating within the EU market. Financial regulators, including the Bank of England, the Financial Conduct Authority (FCA), and the Prudential Regulation Authority, are emphasizing cyber resilience as a critical component of financial stability.
The rise of artificial intelligence has further complicated the cybersecurity landscape. AI-powered tools are making cyberattacks more sophisticated and difficult to detect. For instance, voice cloning technology allows attackers to impersonate executives or colleagues, deceiving employees into granting unauthorized access or transferring large sums of money. Similarly, generative AI tools are being leveraged to craft highly convincing phishing emails that lack traditional red flags like poor grammar and spelling errors, making them far more effective.
As AI-driven cyber threats grow, investment firms must integrate AI-powered security solutions to defend against these evolving attack methods. However, many investment firms face challenges in building and maintaining effective cybersecurity frameworks on their own. This is where partnering with managed security services providers (MSSPs) can offer a strategic advantage. Companies like Linedata provide specialized cybersecurity solutions tailored for financial services firms, including AI-driven threat detection, 24/7 security monitoring, incident response planning, and employee training.
Investment firms are increasingly attractive targets for cybercriminals due to their high-value transactions and relatively weaker security compared to major banks. Large financial institutions have heavily invested in cyber resilience, making it harder for hackers to breach their systems. As a result, attackers are shifting their focus to investment firms, which may not have the same level of cybersecurity investment. Without robust security measures, these firms face increased risks of operational paralysis and significant financial losses.
To address these challenges, investment firms must prioritize:
1. Robust cyber resilience strategies, including incident response planning.
2. AI-driven security measures for threat detection and continuous monitoring.
3. Regular employee training to counter phishing and social engineering.
4. Strategic partnerships with managed security services providers and other industry experts.
By adopting these proactive strategies, investment firms can enhance their cyber resilience and protect their financial assets, sensitive client data, and investor confidence.
As cyber risks continue to escalate, investment firms must take decisive action to reinforce their cybersecurity posture. By investing in robust cyber resilience strategies, adopting AI-driven security measures, and partnering with industry experts, firms can safeguard their operations and maintain trust in an increasingly digital financial landscape. The combination of regulatory compliance, advanced technology, and strategic partnerships will be key to navigating the complex and ever-evolving world of cyber threats.
Hong Kong experienced a record surge in cyberattacks last year, marking the highest number of incidents in five years. Hackers are increasingly using artificial intelligence (AI) to strengthen their methods, according to the Hong Kong Computer Emergency Response Team Coordination Centre (HKCERT).
The agency recorded 12,536 cybersecurity incidents in 2024, a dramatic 62% increase from 7,752 cases in 2023. Phishing dominated these incidents, with cases more than doubling from 3,752 in 2023 to 7,811 last year.
AI is making phishing campaigns more effective. Attackers now use AI tools to create highly realistic fake emails and websites that even a skeptical eye cannot easily distinguish from legitimate ones.
Alex Chan Chung-man, a digital transformation leader at HKCERT, said banking, financial, and payment systems were the most common phishing targets, accounting for almost 25% of cases. Social media and messaging apps, including WhatsApp, were another main target, at 22% of cases.
"AI allows scammers to create flawless phishing messages and generate fake website links that mimic trusted services," Chan explained. This efficiency has led to a sharp rise in phishing links, with over 48,000 malicious URLs identified last year, an increase of 1.5 times compared with 2023.
Hackers are also targeting other essential services such as healthcare and utilities. A notable case involved Union Hospital in Tai Wai, which suffered a ransomware attack in which cybercriminals used the "LockBit" malware to demand a $10 million ransom. The hospital did not comply with the demand, but the incident illustrates the risks facing critical infrastructure providers.
Third-party vendors serving critical sectors are an emerging point of vulnerability. Breaches through such partners can cause heavy damage, from legal liability to reputational harm.
New Risk: Digital Signboards
Digital signboards, often left unattended, are now being targeted by hackers. According to HKCERT, 40% of companies have never risk-assessed these systems, which can be hijacked through USB devices or wireless connections and made to display malicious or inappropriate content.
Though Hong Kong has not yet seen such an attack, incidents in other countries point to an emerging threat.
Prevention for Businesses
HKCERT advises organizations to guard against these threats with measures such as risk-assessing connected systems (including digital signboards), strengthening phishing defenses and staff awareness, vetting third-party vendors, and maintaining secure backups to blunt ransomware.
Chan emphasized that AI-driven threats will continue to evolve, and that robust cybersecurity practices are therefore essential to protect sensitive data and infrastructure.
Kuala Lumpur: The increasing use of artificial intelligence (AI) in cybercrimes is becoming a grave issue, says Datuk Seri Ramli Mohamed Yoosuf, Director of Malaysia's Commercial Crime Investigation Department (CCID). Speaking at the Asia International Security Summit and Expo 2025, he highlighted how cybercriminals are leveraging AI to conduct sophisticated attacks, creating unprecedented challenges for cybersecurity efforts.
"AI has enabled criminals to churn through huge datasets with incredible speed, helping them craft highly convincing phishing emails targeted at deceiving individuals," Ramli explained. He emphasized how these advancements in AI make fraudulent communications harder to identify, thus increasing the risk of successful cyberattacks.
Ramli expressed concern over the impact of AI-driven cybercrime on critical sectors such as healthcare and transportation. Attacks on hospital systems could disrupt patient care, putting lives at risk, while breaches in transportation networks could endanger public safety and hinder mobility. These scenarios highlight the urgent need for robust defense mechanisms and efficient response plans to protect critical infrastructure.
One of the key challenges posed by AI is the creation of realistic fake content through deepfake technology. Criminals can generate fake audio or video files that convincingly mimic real individuals, enabling them to manipulate or scam their targets more effectively.
Another area of concern is the automation of phishing attacks. With AI, attackers can identify software vulnerabilities quickly and execute precision attacks at unprecedented speeds, putting defenders under immense pressure to keep up.
Over the past five years, Malaysia has seen a sharp rise in cybercrime cases. Between 2020 and 2024, 143,000 cases were reported, accounting for 85% of all commercial crimes during this period. This indicates that cybersecurity threats are becoming increasingly sophisticated, necessitating significant changes in security practices for both individuals and organizations.
Ramli stressed the importance of collective vigilance against evolving cyber threats. He urged the public to be more aware of these risks and called for greater investment in technological advancements to combat AI-driven cybercrime.
"To the extent cybercriminals will become more advanced, we can ensure that people and organizations are educated on how to recognize and deal with these challenges," he stated.
By prioritizing proactive measures and fostering a culture of cybersecurity, Malaysia can strengthen its defenses against the persistent threat of AI-driven cybercrimes.
The immense computational power that quantum computing offers raises significant concerns, particularly around its potential to compromise private keys that secure digital interactions. Among the most pressing fears is its ability to break the private keys safeguarding cryptocurrency wallets.
While this threat is genuine, it is unlikely to materialize overnight. It is, however, crucial to examine the current state of quantum computing in terms of commercial capabilities and assess its potential to pose a real danger to cryptocurrency security.
Before delving into the risks, it’s essential to understand the basics of quantum computing. Unlike classical computers, which process information using bits (either 0 or 1), quantum computers rely on quantum bits, or qubits. Qubits leverage the principles of quantum mechanics to exist in multiple states simultaneously (0, 1, or both 0 and 1, thanks to the phenomenon of superposition).
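To make superposition concrete, here is a minimal sketch, written for this article purely as an illustration, that simulates a single qubit's state vector in Python with NumPy. A Hadamard gate turns the definite state |0⟩ into an equal superposition, after which a measurement would yield 0 or 1 with 50% probability each.

```python
import numpy as np

# Basis states as 2-component vectors: |0> = [1, 0], |1> = [0, 1]
ket0 = np.array([1.0, 0.0])

# The Hadamard gate maps |0> to the equal superposition (|0> + |1>) / sqrt(2)
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0                    # amplitudes after the gate
probabilities = np.abs(state) ** 2  # Born rule: probability = |amplitude|^2

print(state)          # [0.70710678 0.70710678]
print(probabilities)  # [0.5 0.5] -- equal odds of measuring 0 or 1
```

Real machines manipulate vast numbers of such amplitudes at once; the point of the sketch is only that a qubit's state is a vector of amplitudes rather than a single bit.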
One of the primary risks posed by quantum computing stems from Shor’s algorithm, which allows quantum computers to factor large integers exponentially faster than classical algorithms. The security of several cryptographic systems, including RSA, relies on the difficulty of factoring large composite numbers. For instance, RSA-2048, a widely used cryptographic key size, underpins the private keys used to sign and authorize cryptocurrency transactions.
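The number-theoretic core of Shor's algorithm can be sketched classically. In the toy Python below, which is illustrative only, the period search is brute-forced; that search is precisely the step a quantum computer performs exponentially faster, and everything around it is ordinary arithmetic.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1, found here by brute force.
    This search is the step Shor's algorithm accelerates."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def try_factor(n: int, a: int):
    """Attempt to split n using the period of a modulo n."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky: a already shares a factor
    r = find_period(a, n)
    if r % 2 == 1:
        return None                        # odd period: retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None                        # trivial square root: retry
    return gcd(x - 1, n), gcd(x + 1, n)

print(try_factor(15, 7))  # (3, 5) -- the period of 7 mod 15 is 4
```

For a 2048-bit modulus, the brute-force period search above would run longer than the age of the universe on classical hardware; that is exactly the gap Shor's algorithm closes.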
Breaking RSA-2048 with today’s classical computers, even using massive clusters of processors, would take billions of years. To illustrate, a successful attempt to crack RSA-768 (a 768-bit number) in 2009 required years of effort and hundreds of clustered machines. The computational difficulty grows exponentially with key size, making RSA-2048 virtually unbreakable within any human timescale—at least for now.
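That exponential scaling can be made rough-and-ready with the heuristic cost formula of the general number field sieve, the best known classical factoring algorithm. The back-of-the-envelope sketch below, with constants and units deliberately waved away, compares the relative effort for 768-bit and 2048-bit moduli:

```python
from math import exp, log

def gnfs_cost(bits: int) -> float:
    """Heuristic GNFS work factor:
    exp((64/9)**(1/3) * (ln n)**(1/3) * (ln ln n)**(2/3))."""
    ln_n = bits * log(2)  # natural log of an n that is `bits` bits long
    return exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

ratio = gnfs_cost(2048) / gnfs_cost(768)
print(f"RSA-2048 vs RSA-768 effort ratio: {ratio:.1e}")  # on the order of 1e12
```

On this crude model, RSA-2048 is roughly a trillion times more work than the RSA-768 factorization completed in 2009, which is why it is treated as unbreakable on any human timescale.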
Commercial quantum computing offerings, such as IBM Q System One, Google Sycamore, Rigetti Aspen-9, and AWS Braket, are available today for those with the resources to use them. However, the number of qubits these systems offer remains limited — typically only a few dozen. This is far from sufficient to break even moderately sized cryptographic keys within any realistic timeframe. Breaking RSA-2048 would require millions of years with current quantum systems.
Beyond insufficient qubit capacity, today’s quantum computers face challenges in qubit stability, error correction, and scalability. Additionally, their operation depends on extreme conditions. Qubits are highly sensitive to electromagnetic disturbances, necessitating cryogenic temperatures and advanced magnetic shielding for stability.
Unlike classical computing, quantum computing lacks a clear equivalent of Moore’s Law to predict how quickly its power will grow. Google’s Hartmut Neven proposed a “Neven’s Law” suggesting double-exponential growth in quantum computing power, but this model has yet to consistently hold up in practice beyond research and development milestones.
Hypothetically, achieving double-exponential growth to reach the approximately 20 million physical qubits needed to crack RSA-2048 could take another four years. However, this projection assumes breakthroughs in addressing error correction, qubit stability, and scalability—all formidable challenges in their own right.
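To see what double-exponential growth would imply, the toy model below assumes qubit counts follow 2**(2**t) per growth cycle. These numbers are invented for illustration, not a hardware roadmap; the model simply counts cycles until it passes the roughly 20 million physical qubits cited above.

```python
TARGET_QUBITS = 20_000_000  # rough physical-qubit estimate for breaking RSA-2048

t = 0
while 2 ** (2 ** t) < TARGET_QUBITS:
    t += 1

print(t)  # 5 -- only a handful of doubling cycles under this toy model
```

Under assumptions that aggressive, the threshold falls within a few growth cycles, which is why even a speculative law like Neven's keeps the question on the industry's radar.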
While quantum computing poses a theoretical threat to cryptocurrency and other cryptographic systems, significant technical hurdles must be overcome before it becomes a tangible risk. Current commercial offerings remain far from capable of cracking RSA-2048 or similar key sizes. However, as research progresses, it is crucial for industries reliant on cryptographic security to explore quantum-resistant algorithms to stay ahead of potential threats.
The integration of Artificial Intelligence (AI) and blockchain technology is revolutionizing digital experiences, especially for developers aiming to enhance user interaction and improve security. By combining these cutting-edge technologies, digital platforms are becoming more personalized while ensuring that user data remains secure.
Why Personalization and Security Are Essential
A global survey conducted in the third quarter of 2024 revealed that 64% of consumers prefer to engage with companies that offer personalized experiences. Simultaneously, 53% of respondents expressed significant concerns about data privacy. These findings highlight a critical balance: users desire tailored interactions but are equally cautious about how their data is managed. The integration of AI and blockchain offers innovative solutions to address both personalization and privacy concerns.
AI has seamlessly integrated into daily life, with tools like ChatGPT becoming indispensable across industries. A notable advancement in AI is the adoption of Common Crawl's customized blockchain. This system securely stores vast datasets used by AI models, enhancing data transparency and security. Blockchain’s immutable nature ensures data integrity, making it ideal for managing the extensive data required to train AI systems in applications like ChatGPT.
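The integrity property invoked here can be demonstrated with a toy hash chain in Python. This is a simplified illustration of the general mechanism, not the actual system described above: each block commits to its predecessor's hash, so altering any stored record invalidates every later link.

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev": prev}
    block["hash"] = block_hash({k: block[k] for k in ("index", "data", "prev")})
    chain.append(block)

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "data", "prev")}
        if block["hash"] != block_hash(body):
            return False                      # block contents were altered
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block broken
    return True

chain: list = []
append_block(chain, "training-dataset-checksum-1")
append_block(chain, "training-dataset-checksum-2")
print(verify(chain))   # True

chain[0]["data"] = "tampered"
print(verify(chain))   # False -- the edit is immediately detectable
```

Real blockchains add consensus and cryptographic signatures on top, but the tamper-evidence shown here is the property that makes the technology attractive for auditing AI training data.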
The combined power of AI and blockchain is already transforming sectors like marketing and healthcare, where personalization and data privacy are paramount.