
Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally developed AI tool, which is built on DeepSeek's model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, this achievement marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation between Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology, and Industry, highlighted during the launch ceremony that artificial intelligence is at the vanguard of a new technological and industrial revolution, and that Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to support AI research, including the building of an AI supercomputing centre, a 3-billion Hong Kong dollar (386 million US dollar) AI funding scheme, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not only to enterprises and individuals, but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, which was trained with substantially fewer computing resources than those of larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to be built on DeepSeek's full-parameter fine-tuning methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to establish the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 fiscal year, with funding set aside for the first five years to cover operational costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

North Korea-Linked Hackers Target Crypto with RustDoor and Koi Stealer

 


Malware has become a common threat to macOS systems in today's rapidly evolving threat landscape. The majority of these threats are associated with cybercriminal activity, including data theft and unauthorised cryptocurrency mining. More recently, some of these operations have been attributed to advanced persistent threat (APT) groups sponsored by the North Korean government. 

In line with this trend, the Federal Bureau of Investigation (FBI) recently issued a public service announcement regarding North Korean social engineering campaigns. In many of these attacks, deceptive tactics are used to manipulate victims into divulging sensitive information or granting access to their systems. A growing number of such incidents target software developers in the cryptocurrency industry, particularly those seeking employment opportunities. 

These sophisticated cyber threats demonstrate the persistence and evolution of North Korean threat actors. Known as CL-STA-0240, or Contagious Interview, the campaign aims to infiltrate macOS systems with advanced malware strains, including RustDoor and Koi Stealer. These malicious programs are designed to exfiltrate sensitive data and use sophisticated techniques to avoid detection within the macOS environment. The campaign's technical proficiency reinforces the fact that threats targeting the Apple ecosystem are becoming increasingly complex. 

The threat actors responsible for this operation use social engineering as their primary attack vector. By impersonating recruiters or potential employers, they trick job seekers, especially those working in the cryptocurrency industry, into unintentionally installing compromised software. Through this deceptive strategy, the attackers gain access to critical data while maintaining operational stealth. 

These manipulative strategies are becoming increasingly common, highlighting the persistent threat posed by state-sponsored groups, especially those linked to North Korea, as they refine their methods of exploiting human vulnerability. In the course of this campaign, researchers found Rust-based malware, referred to as RustDoor, hiding inside seemingly legitimate software updates to evade detection, along with a previously undocumented macOS variant of the Koi Stealer malware. 


The investigation also uncovered rare evasion techniques, including manipulating macOS system components to conceal the malware's presence and remain undetected. These tactics underscore the increasing sophistication of threats aimed at macOS. Over the past year, several reports have linked North Korean threat actors to cyberattacks targeting job seekers that share the characteristics and methodologies observed in this campaign. 

Based on the available evidence, analysts assess with moderate confidence that this attack was carried out in support of North Korean state-sponsored cyber objectives. The use of social engineering against job seekers fits an established pattern of attacks by these adversaries. The research also includes an in-depth technical analysis of the newly identified Koi Stealer macOS variant, providing a detailed picture of the attackers' activities in compromised environments. 

In addition, Cortex XDR telemetry was used to examine the various stages of the attack. Palo Alto Networks states that its customers are better protected from these evolving threats through its security products: Cortex XDR and XSIAM, which provide enhanced detection and response capabilities, and cloud-delivered security services for its firewalls, such as Advanced WildFire, Advanced DNS Security, and Advanced URL Filtering, which provide proactive defense against malicious activity. 

These security solutions can help organizations greatly strengthen their defenses against RustDoor, Koi Stealer, and similar malware threats targeting macOS environments. The infection process typically begins with a fake job interview, during which victims are tricked into downloading malware disguised as legitimate software development tools. Notably, the attackers used malicious Visual Studio projects, a tactic previously documented in similar campaigns analyzed by Jamf Threat Labs. 

Once executed, the RustDoor malware establishes persistence within the system and attempts to exfiltrate sensitive user information. Throughout the investigation, researchers observed the threat actors attempting to execute several variants of the malware. This adaptive behavior suggests that the attackers continuously adjust their approach in response to the security controls and detection mechanisms in place.

According to security researchers, when Cortex XDR blocked the initial infiltration attempt, the adversaries quickly tried to circumvent detection by redeploying and executing additional malware payloads. 

RustDoor Infection Stages

The infection process involves two RustDoor binaries being executed from hidden system directories to avoid detection. 

A later stage involves the deployment of additional payloads, such as a reverse shell, that give the attackers remote access. Several sets of sensitive data were stolen: the attackers specifically targeted credentials stored in web browsers, such as LastPass data in Google Chrome, and exfiltrated the information to command-and-control servers under their control. One of the most significant findings of the investigation was an IP address, 31.41.244[.]92, that had previously been used in cybercriminal activity. 

That address has also been associated with the RedLine Stealer infostealer campaign, further underscoring the sophisticated nature of the ongoing threats. The second malware strain identified, the previously undocumented macOS variant of Koi Stealer, possesses advanced data exfiltration capabilities. The discovery makes clear that macOS-targeted malware continues to evolve and that robust cybersecurity measures are necessary to mitigate the risks posed by these sophisticated threats. 


Koi Stealer uses a run-time string decryption mechanism implemented in a single function that is invoked repeatedly throughout the binary. The decryption function iterates over each character of a hard-coded key (xRdEh3f6g1qxTxsCfg1d30W66JuUgQvVti), from index 0 to index 33, and applies an XOR operation between the key's characters and the encrypted string's characters. 

To better understand Koi Stealer's behavior, researchers developed a custom decryption program that replicates the malware's logic, along with the techniques it uses to disguise its true functionality. Using the same decryption routine, analysts successfully extracted and analyzed the decrypted strings, allowing a more comprehensive understanding of the malware's capabilities and objectives. A comparison between the variants shows significant similarities in code structure and execution flow across different versions of Koi Stealer. 
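
The report describes the routine but not its exact implementation; this Python sketch assumes a straightforward byte-wise XOR with the quoted key cycling from index 0 to 33 (the sample plaintext is hypothetical):

```python
# Hard-coded key quoted in the analysis (34 characters, indices 0-33).
KEY = "xRdEh3f6g1qxTxsCfg1d30W66JuUgQvVti"

def xor_transform(data: bytes, key: str = KEY) -> bytes:
    """XOR each byte against the key character at the same position,
    cycling back to index 0 after index 33. XOR is its own inverse,
    so the same routine both encrypts and decrypts."""
    return bytes(b ^ ord(key[i % len(key)]) for i, b in enumerate(data))

# Round-trip demonstration on a hypothetical plaintext.
plain = b"Library/Application Support"
cipher = xor_transform(plain)
print(xor_transform(cipher).decode())  # -> Library/Application Support
```

Because XOR with a repeating key is symmetric, one replica function is enough to recover every encrypted string once the key is known.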

Each variant was designed consistently for data theft, with each category of stolen information handled by a separate function. This modular design indicates that the malware was developed in a structured and organized manner, further evidence of its sophistication. Besides the types of information commonly targeted by information stealers, Koi Stealer also shows a specific interest in directories and configurations that such stealers do not usually touch. 

Interestingly, both of the analyzed samples actively target user data from Steam and Discord, indicating a deep interest in credentials for gaming and communication platforms. This wide range of targeted data demonstrates the malware's versatility and its potential use for purposes beyond traditional financial or credential theft. The detailed breakdown of notable decrypted strings and additional technical findings in Appendix C provides further insight into Koi Stealer's internal operations and goals.

DBS Bank to Cut 4,000 Jobs Over Three Years as AI Adoption Grows

Singapore’s largest bank, DBS, has announced plans to reduce approximately 4,000 temporary and contract roles over the next three years as artificial intelligence (AI) takes on more tasks currently handled by human workers. 

The job reductions will occur through natural attrition as various projects are completed, a bank spokesperson confirmed. However, permanent employees will not be affected by the move. 

The bank’s outgoing CEO, Piyush Gupta, also revealed that DBS expects to create around 1,000 new positions related to AI, making it one of the first major financial institutions to outline how AI will reshape its workforce. 

Currently, DBS employs between 8,000 and 9,000 temporary and contract workers, while its total workforce stands at around 41,000. According to Gupta, the bank has been investing in AI for more than a decade and has already integrated over 800 AI models across 350 use cases. 

These AI-driven initiatives are projected to generate an economic impact exceeding S$1 billion (approximately $745 million) by 2025. Leadership at DBS is also set for a transition, with Gupta stepping down at the end of March. 

His successor, Deputy CEO Tan Su Shan, will take over the reins. The growing adoption of AI across industries has sparked global discussions about its impact on employment. The International Monetary Fund (IMF) estimates that AI could influence nearly 40% of jobs worldwide, with its managing director Kristalina Georgieva cautioning that AI is likely to exacerbate economic inequality. 

Meanwhile, Bank of England Governor Andrew Bailey has expressed a more balanced outlook, suggesting that while AI presents certain risks, it also offers significant opportunities and is unlikely to lead to widespread job losses. As DBS advances its AI-driven transformation, the bank’s restructuring highlights the evolving nature of work in the financial sector, where automation and human expertise will increasingly coexist.

AI-Driven Changes Lead to Workforce Reduction at Major Asian Bank

 


DBS, Singapore's largest bank, has announced plans to cut approximately 4,000 roles over the next three years as part of a significant shift toward automation. A key driver of the decision is the growing adoption of artificial intelligence (AI), which will gradually take over functions previously performed by human employees. 

Essentially, these job reductions will occur through natural attrition as projects conclude, affecting primarily temporary and contract workers. However, the bank has confirmed that this will not have any adverse effects on permanent employees. A spokesperson for DBS stated that artificial intelligence-driven advances could reduce the need for temporary and contract positions to be renewed, thereby resulting in a gradual decrease in the number of employees as project-based roles are completed. 

The bank employs approximately 8,000-9,000 temporary and contract workers and has a total workforce of around 41,000. Outgoing CEO Piyush Gupta has highlighted the bank's longstanding investment in artificial intelligence, noting that DBS has been leveraging the technology for over a decade and has deployed over 800 AI models across 350 use cases, with the expected economic impact surpassing S$1 billion (US$745 million; £592 million) by 2025. 

DBS is also changing leadership: Gupta, the current CEO, steps down at the end of March, with his successor Tan Su Shan taking over. The widening use of artificial intelligence has prompted much discussion of its advantages and shortcomings. The International Monetary Fund (IMF) estimates that AI will influence nearly 40% of jobs worldwide, with Managing Director Kristalina Georgieva cautioning that, in most scenarios, the technology is likely to worsen economic inequality, raising concerns about its long-term social implications. 

The Governor of the Bank of England, Andrew Bailey, told the BBC in an interview that artificial intelligence should not be viewed as a 'mass destroyer' of jobs, and that human workers will adapt as technologies evolve. 

Bailey acknowledged the risks associated with artificial intelligence but also noted its vast potential for innovation across a wide range of industries. It is becoming increasingly apparent that AI will play a significant role in the future of employment, productivity, and economic stability, and financial institutions are evaluating its long-term effects as adoption grows. Beyond transforming workforce dynamics, the increasing reliance on AI is also delivering significant financial advantages to the banking sector as a whole.

According to Bloomberg, investing in artificial intelligence could lift bank profits by 17%, adding as much as $180 billion to their combined earnings; Digit News puts the gain at around $170 billion. Beyond the substantial financial incentives, banks and corporations are actively seeking professionals with AI and data analytics skills to integrate AI into their operations.

According to the World Economic Forum's Future of Work report, technological skills, particularly those related to artificial intelligence (AI) and big data, are expected to be among the most in-demand skills over the next five years, especially as AI adoption accelerates. As the labor market evolves, employees are increasingly encouraged to learn new skills to safeguard their job security. 

The WEF has recommended that companies invest in retraining programs to help employees adjust to the new workplace environment; some organizations, however, are taking a more immediate approach, cutting existing positions and recruiting AI experts to fill the gaps. As AI becomes more prevalent across industries, it is reshaping employment strategies and financial priorities alike. 

As artificial intelligence continues to reshape industries, its growing presence in the banking sector makes clear both how transformative it can be and the challenges that come with it. AI is clearly advancing companies' efficiency and financial performance; at the same time, its integration is forcing organizations to re-evaluate their workforce strategies, skills development, and the ethical considerations around job displacement and economic inequality. 

A balance must be struck between leveraging technological advancements and ensuring a sustainable transition for employees affected by automation. Governments, businesses, and educational institutions all have a critical role to play in preparing the workforce for the future of artificial intelligence. Significant effort must go into reskilling initiatives, policies that support equitable workforce transitions, and ethical AI governance frameworks that mitigate the risks of job displacement. Working together, industry leaders and policymakers can help promote a more inclusive and flexible labor market. 

Financial institutions continue to embrace the technology for its efficiency and economic benefits, but they must also remain conscious of its impact on society at large. For technological progress to contribute to long-term economic and social stability, early workforce planning, ethical AI deployment, and employee upskilling will be essential.

These Four Basic PC Essentials Will Protect You From Hacking Attacks


There was a time when the internet could be considered safe, as long as users were careful. Those days are gone; a safe internet now seems like a distant dream. It is not the user's fault when data is leaked, passwords are compromised, and malware finds easy prey. 

Online attacks are commonplace in 2025. The rise of AI has made cyberattacks faster and more sophisticated, and that change is unlikely to slow down. To help readers, this blog outlines the basics of digital safety. 

Antivirus

A good antivirus on your system helps protect you from malware, ransomware, phishing sites, and other major threats. 

For starters, Microsoft's built-in Windows Security antivirus is a must (it is usually active by default, unless you have changed the settings). Microsoft's antivirus is reliable and runs quietly in the background.

You can also purchase paid antivirus software, which provides extra security and additional features in a single all-in-one interface.

Password manager

A password manager is the backbone of login security. Whether an independent service or part of an antivirus suite, it protects your login credentials across the web and also reduces the amount of your data stored on individual websites.

A simple example: to maintain privacy, keep all the credit card info in your password manager, instead of allowing shopping websites to store sensitive details. 

You'll be comparatively safer in case a threat actor gets unauthorized access to your account and tries to scam you.

Two-factor authentication 

In today's digital world, a standalone password is no longer a safe bet against attackers. Two-factor authentication (2FA), or multi-factor authentication, adds an extra security layer before users can access their account. Even if a hacker has your login credentials and tries to access your account, they won't have everything needed to sign in. 

A safer option for users (if possible) is to use 2FA via app-generated one-time codes; these are safer than codes sent through SMS, which can be intercepted. 

Passkeys

If passwords and 2FA feel like a headache, you can use your phone or PC as a security option, through a passkey.

Passkeys are easy and fast; you don't have to remember them, because they are stored on your device. Unlike passwords, passkeys are bound to the device you saved them on, which prevents them from being stolen or misused by hackers. To use one, you simply approve it with a PIN or biometric authentication.

Building Robust AI Systems with Verified Data Inputs

 


Artificial intelligence is only as good as the data that powers it, and this reliance presents a major challenge for AI development. A recent report indicates that approximately half of executives do not believe their data infrastructure is adequately prepared for the evolving demands of artificial intelligence technologies.

The study, conducted by Dun & Bradstreet, surveyed executives of companies actively integrating artificial intelligence into their businesses. In the survey, carried out on-site at the AI Summit New York, 54% of these executives expressed concern over the reliability and quality of their data. A broader analysis of AI-related concerns shows that data governance and integrity are recurring themes.

Several key issues were identified: data security (46%), risks associated with data privacy breaches (43%), the possibility of exposing confidential or proprietary data (42%), and the role data plays in reinforcing bias in artificial intelligence models (26%). As organizations continue to integrate AI-driven solutions, the importance of ensuring that data is accurate, secure, and ethically used continues to grow, and these concerns must be addressed promptly to foster trust and maximize AI's effectiveness across industries. Companies are increasingly using artificial intelligence (AI) to enhance innovation, efficiency, and productivity. 

Ensuring the integrity and security of their data has therefore become a critical priority. Using artificial intelligence to automate data processing streamlines business operations, but it also presents inherent risks, especially with regard to data accuracy, confidentiality, and regulatory compliance. A stringent data governance framework is critical to securing sensitive financial information within companies that are developing artificial intelligence. 

Developing robust management practices, conducting regular audits, and enforcing rigorous access controls are crucial steps in safeguarding sensitive financial information in AI development companies. Businesses must also remain focused on regulatory compliance to mitigate potential legal and financial repercussions. Organizations that fail to maintain data integrity and security during expansion may be exposed to significant vulnerabilities. 

By reinforcing data protection mechanisms and maintaining regulatory compliance, businesses can minimize risks, maintain stakeholder trust, and ensure the long-term success of AI-driven initiatives. Across industries, the impact of a compromised AI system can be devastating. Financially, inaccuracies or manipulation in AI-driven decision-making, as in algorithmic trading, can result in substantial losses. 

Similarly, in safety-critical applications such as autonomous driving, the integrity of artificial intelligence models is directly tied to human lives. When data accuracy or system reliability is compromised, catastrophic failures can occur, endangering passengers and pedestrians alike. Robust security measures and continuous monitoring are needed to keep AI-driven solutions safe and trusted.

Experts in the field of artificial intelligence recognize that there is not enough actionable data available to fully support the transforming AI landscape, and this scarcity of reliable data has cast doubt on many AI-driven initiatives. As Kunju Kashalikar, Senior Director of Product Management at Pentaho, points out, organizations often lack visibility into their data: they do not know who owns it, where it originated, or how it has changed. 

This lack of transparency severely undermines users' confidence in AI systems and their results. The challenges associated with unverified or unreliable data go beyond operational inefficiency. According to Kashalikar, if data governance is lacking, proprietary or biased information may be fed into artificial intelligence models, potentially resulting in intellectual property and data protection violations. Further, the absence of clear data accountability makes it difficult to comply with industry standards and regulatory frameworks. 

Organizations face several challenges in managing structured data. Structured data management strategies, which catalogue data at its source in standardized, easily understandable terminology, ensure seamless integration across AI-driven projects. Establishing well-defined governance and discovery frameworks enhances the reliability of AI systems, supports regulatory compliance, and promotes greater trust and transparency in AI applications. 

Ensuring the integrity of AI models is crucial to maintaining their security, reliability, and compliance. Several verification techniques have been developed to keep these systems authenticated and safe from tampering or unauthorized modification. Hashing and checksums enable organizations to calculate and compare hash values after the training process, detecting any discrepancies that could indicate corruption. 
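
As a minimal illustration of the hashing approach, a model artifact's SHA-256 digest can be recorded at release time and re-checked before each load (the function names here are illustrative, not from any particular toolkit):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_hex: str) -> bool:
    # Any mismatch means the artifact changed since the digest was recorded.
    return file_sha256(path) == expected_hex
```

In practice the expected digest is stored separately from the artifact (for example, in a signed manifest), so an attacker who can rewrite the model file cannot also rewrite the checksum.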

Models can also be watermarked with unique digital signatures to verify their authenticity and deter unauthorized modification. Behavioral analysis helps identify anomalies that could signal integrity breaches by tracking model outputs and decision-making patterns, while provenance tracking maintains a comprehensive record of all interactions, updates, and modifications, enhancing accountability and traceability. Even so, these verification methods remain challenging to apply because of the rapidly evolving nature of artificial intelligence. 

As modern models grow more complex, especially large-scale systems with billions of parameters, integrity assessment becomes increasingly difficult. AI's ability to learn and adapt also makes it harder to distinguish unauthorized modifications from legitimate updates. Security efforts are harder still in decentralized deployments, such as edge computing environments, where verifying model consistency across multiple nodes is a significant issue. Addressing these challenges requires a framework that integrates advanced monitoring, authentication, and tracking mechanisms. 

As organizations adopt AI at an increasingly rapid rate, they must prioritize model integrity and be equally committed to ethical and secure AI deployment. Effective data management is crucial for maintaining accuracy and compliance in a world where data is becoming increasingly important. 

AI itself plays a crucial role here, keeping entity records as up to date as possible by extracting, verifying, and centralizing information, thereby lowering the risk of inaccurate or outdated records. The advantages of an AI-driven data management process are numerous: greater accuracy and lower costs through continuous data enrichment, automated data extraction and organization, and easier regulatory compliance thanks to real-time, accurate, readily accessible data. 

In a world where artificial intelligence is advancing at a faster rate than ever before, its ability to maintain data integrity will become of even greater importance to organizations. Organizations that leverage AI-driven solutions can make their compliance efforts stronger, optimize resources, and handle regulatory changes with confidence.

How AI Agents Are Transforming Cryptocurrency

 



Artificial intelligence (AI) agents are revolutionizing the cryptocurrency sector by automating processes, enhancing security, and improving trading strategies. These smart programs help analyze blockchain data, detect fraud, and optimize financial decisions without human intervention.


What Are AI Agents?

AI agents are autonomous software programs that operate independently, analyzing information and taking actions to achieve specific objectives. These systems interact with their surroundings through data collection, decision-making algorithms, and execution of tasks. They play a critical role in multiple industries, including finance, cybersecurity, and healthcare.


There are different types of AI agents:

1. Simple Reflex Agents: React based on pre-defined instructions.

2. Model-Based Agents: Use internal models to make informed choices.

3. Goal-Oriented Agents: Focus on achieving specific objectives.

4. Utility-Based Agents: Weigh outcomes to determine the best action.

5. Learning Agents: Continuously improve based on new data.
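As an illustration of the first category, a simple reflex agent can be sketched in a few lines: it maps each percept directly to an action through fixed condition-action rules, with no memory or internal model. The percepts and rules below are hypothetical, trading-flavored examples, not part of any real system:

```python
# Toy sketch of a simple reflex agent: each observed condition maps
# directly to an action via fixed rules, with no internal state.
RULES = {
    "price_below_threshold": "buy",
    "price_above_threshold": "sell",
    "price_stable": "hold",
}

def reflex_agent(percept: str) -> str:
    """Return the action for a percept, defaulting to 'hold'."""
    return RULES.get(percept, "hold")

print(reflex_agent("price_below_threshold"))  # buy
print(reflex_agent("unknown_signal"))         # hold
```

A model-based or learning agent would extend this by keeping internal state and updating the rule table from feedback, rather than relying on a fixed lookup.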


Evolution of AI Agents

AI agents have undergone advancements over the years. Here are some key milestones:

1966: ELIZA, an early chatbot, was developed at MIT to simulate human-like conversations.

1980: MYCIN, an AI-driven medical diagnosis tool, was created at Stanford University.

2011: IBM Watson demonstrated advanced natural language processing by winning on Jeopardy!

2016: AlphaGo, created by DeepMind, outperformed professional players in the complex board game Go.

2020: OpenAI introduced GPT-3, an AI model capable of generating human-like text.

2022: AlphaFold solved long-standing biological puzzles related to protein folding.

2023: AI-powered chatbots like ChatGPT and Claude AI gained widespread use for conversational tasks.

2025: ElizaOS, a blockchain-based AI platform, is set to enhance AI-agent applications.


AI Agents in Cryptocurrency

The crypto industry is leveraging AI agents for automation and security. In late 2024, Virtuals Protocol, an AI-powered Ethereum-based platform, saw its market valuation soar to $1.9 billion. By early 2025, AI-driven crypto tokens collectively reached a $7.02 billion market capitalization.

AI agents are particularly valuable in decentralized finance (DeFi). They assist in managing liquidity pools, adjusting lending and borrowing rates, and securing financial transactions. They also enhance security by identifying fraudulent activities and vulnerabilities in smart contracts, ensuring compliance with regulations like Know Your Customer (KYC) and Anti-Money Laundering (AML).


The Future of AI in Crypto

Tech giants like Amazon and Apple are integrating AI into digital assistants like Alexa and Siri, making them more interactive and capable of handling complex tasks. Similarly, AI agents in cryptocurrency will continue to take new shapes, offering greater efficiency and security for traders, investors, and developers.

As these intelligent systems advance, their role in crypto and blockchain technology will expand, paving the way for more automated, reliable, and secure financial ecosystems.



Chinese Spies Allegedly Engaged in Ransomware Operations

 


A cyber-espionage group backed by the Chinese government has been observed engaging in ransomware-related activity alongside its intelligence work. The observation demonstrates how nation-state cyber operations and financially motivated cybercrime are increasingly converging. 

In late November 2024, Symantec's research team observed threat actors infiltrating a medium-sized software and services company in South Asia by exploiting a critical authentication bypass vulnerability (CVE-2024-0012) in Palo Alto Networks' security systems. Several days after the initial compromise, the attackers obtained administrative credentials from the company's intranet, which gave them access to its Veeam server. 

On that server the attackers discovered AWS S3 credentials, which data management tools like Veeam often use to access cloud storage accounts. The attackers are believed to have used these credentials to reach the company's sensitive data stored in S3 buckets before encrypting its Windows-based systems with the RA World ransomware. They initially demanded a ransom of $2 million but offered to reduce it to $1 million if payment was made within three days. 

Cybersecurity professionals are increasingly concerned about this intersection of state-sponsored cyberespionage and traditional cybercriminal tactics, which complicates both threat attribution and the development of defensive strategies. On victims' computers, the threat actors deployed a legitimate Toshiba executable to carry out DLL sideloading, a technique that ultimately loads the PlugX backdoor.

The sideloaded DLL is heavily obfuscated and contains the PlugX backdoor (also known as Korplug). Symantec has previously reported that this custom PlugX variant is associated with Mustang Panda (also known as Earth Preta), a Chinese espionage group, and the variant has never been associated with any other threat actor. 

The earlier espionage intrusions targeted government bodies across several countries: the foreign ministry of a country in southeastern Europe, a telecommunications operator in Southeast Asia, and government ministries in several different Southeast Asian nations. All of these intrusions appear to have been driven by espionage.

A Symantec analysis indicates, however, that the same toolset was employed in the November 2024 extortion attempt against the South Asian software and services company. In that case, the attacker used the Toshiba executable to sideload a malicious DLL carrying the same PlugX variant seen in the earlier espionage attacks, and then infected the victim's systems with the RA World ransomware, marking a shift from traditional cyber-espionage toward financial extortion.


Bronze Starlight (also known as Emperor Dragonfly), a China-based threat group previously linked to numerous ransomware attacks, including LockFile, AtomSilo, and NightSky, has been identified as a possible source of the latest RA World attack. The attackers also used NPS, a proxy tool developed in China and previously associated with Bronze Starlight, further strengthening the connection. 

Espionage-focused groups do not typically engage in large-scale, financially motivated cybercrime, so the possibility that this group is involved in ransomware operations raises serious concerns. One theory holds that the ransomware deployment was a diversion meant to obscure the operation's true espionage objectives. That theory is undermined, however, by the absence of sophisticated concealment techniques and by the choice of a non-strategic target. 

Several cybersecurity experts suggest a more likely explanation: one or more individuals in the group sought to profit personally from the espionage tools and infrastructure they already had. The same pattern has been observed in other threat groups, where members repurpose advanced cyber capabilities for their own benefit. As cyber threats continue to evolve, the lines between state-sponsored operations and financially driven cybercrime continue to blur.


Symantec's analysis suggests that the RA World attack was most likely the work of a single individual seeking personal financial gain by turning their employer's espionage toolset against a commercial target. This long-standing convergence of state-sponsored operations and traditional cybercrime makes threats of this kind harder to attribute and mitigate. 

Symantec also points out several inconsistencies with the alternative theory that the ransomware deployment was merely a decoy for a broader espionage campaign: the target had no strategic significance, no effort was made to conceal the attacker's actions, and the attacker was actively negotiating a ransom payment with the victim, suggesting the motive was genuinely financial rather than a distraction. 

The Symantec report also notes that Chinese cyber-espionage groups usually work closely together and share resources, so direct involvement in ransomware attacks is an anomaly. North Korean state-sponsored actors have used this tactic in the past, however, suggesting that strategies within the threat landscape may be evolving.

The Upcoming Tech Revolution Foreseen by Sundar Pichai

 


At the 2025 World Governments Summit in Dubai, held in February, Sundar Pichai, CEO of Google and its parent company Alphabet, engaged in a virtual fireside conversation with HE Omar Al Olama, the UAE Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications. In their discussion, they explored Google's AI-first approach, highlighting the company's consistent, long-term investment in foundational technologies.

Additionally, the conversation highlighted the culture that continuously drives innovation within Google, as well as the company's vision for the future of artificial intelligence and digital transformation. 

According to Pichai, three important areas of technology will shape the future of humanity, with quantum computing poised to lead the way. "Quantum computing will push the boundaries of what technology can do," he said, stressing its ability to tackle complex challenges in healthcare, security, and science. Pichai believes quantum advances could revolutionize drug discovery, improve the development of electric-vehicle batteries, and accelerate progress on alternatives to conventional power sources, such as fusion. He called quantum computing the next major paradigm shift, following the rise of artificial intelligence. 

Pichai also showcased the capabilities of Google's cutting-edge Willow quantum chip, the company's most recent quantum computing breakthrough. Willow solved in under five minutes a computation that would normally take a classical computer ten septillion years: a one followed by 25 zeros, longer than the universe itself has existed. 

Pichai added that artificial intelligence is the other significant force in technological advancement alongside quantum computing. He predicted that AI will keep improving, becoming smarter, cheaper, and more deeply integrated into everyday life. Google has introduced a number of groundbreaking advances in recent months, including the release of Gemini 2.0 and the rollout of Gemini 2.0 Flash to developers in the Gemini app. 

Many of these artificial intelligence developments are likely to be showcased at the upcoming Google I/O conference, expected to be held in May. Google has also begun testing a new Search Labs feature called "Daily Listen," a personalized podcast experience that curates and delivers news and topics tailored to each user's interests, improving engagement with relevant content. 

In December, Google announced that Gemini 2.0 Flash would become generally available to developers in January. As part of this rollout, the "Experimental" label is expected to be removed from Gemini 2.0 Flash within the Gemini application. Anticipation is also building around "2.0 Experimental Advanced," which will be available to paid subscribers, with more details expected at its official release. 

Google is continuing to expand its AI-driven offerings with NotebookLM Plus, expected to be available to Google One subscribers in early 2025. Gemini 2.0 is also expected to be integrated into other Google products, including AI Overviews in Search, in the coming months, a timeframe that aligns with the Google I/O event traditionally held in May. 

Pichai recently shared his views with employees on the urgency of the current technological environment, pointing out how rapidly technology has advanced and how the pace of innovation gives Google a chance to reimagine its products and processes for the next era. He also acknowledged the challenges faced by employees affected by the devastating wildfires in Southern California, as well as the difficulties facing the company as a whole. 

Pichai has called 2025 a pivotal year for Google and urged employees to step up their efforts in artificial intelligence development and regulatory compliance. Amid intensifying competition in AI and growing regulatory scrutiny, he stressed the importance of keeping the company at the forefront of innovation while navigating a dynamic policy environment.

Understanding the Importance of 5G Edge Security

 


As technology advances, the volume of data being generated daily has reached unprecedented levels. In 2024 alone, people are expected to create over 147 zettabytes of data. This rapid growth presents major challenges for businesses in terms of processing, transferring, and safeguarding vast amounts of information efficiently.

Traditional data processing occurs in centralized locations like data centers, but as the demand for real-time insights increases, edge computing is emerging as a game-changer. By handling data closer to its source, such as factories or remote locations, edge computing minimizes delays, enhances efficiency, and enables faster decision-making. However, its widespread adoption also introduces new security risks that organizations must address.

Why Edge Computing Matters

Edge computing reduces the reliance on centralized data centers by allowing devices to process data locally. This approach improves operational speed, reduces network congestion, and enhances overall efficiency. In industries like manufacturing, logistics, and healthcare, edge computing enables real-time monitoring and automation, helping businesses streamline processes and respond to changes instantly.

For example, a UK port leveraging a private 5G network has successfully integrated IoT sensors, AI-driven logistics, and autonomous vehicles to enhance operational efficiency. These advancements allow for better tracking of assets, improved environmental monitoring, and seamless automation of critical tasks, positioning the port as an industry leader.

The Role of 5G in Strengthening Security

While edge computing offers numerous advantages, its effectiveness relies on a robust network. This is where 5G comes into play. The high-speed, low-latency connectivity provided by 5G enables real-time data processing, improves security features, and supports large-scale deployments of IoT devices.

However, the expansion of connected devices also increases vulnerability to cyber threats. Securing these devices requires a multi-layered approach, including:

1. Strong authentication methods to verify users and devices

2. Data encryption to protect information during transmission and storage

3. Regular software updates to address emerging security threats

4. Network segmentation to limit access and contain potential breaches

Integrating these measures into a 5G-powered edge network ensures that businesses not only benefit from increased speed and efficiency but also maintain a secure digital environment.
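As a minimal sketch of the first measure, here is how an edge gateway might verify that a message really came from a known device using an HMAC over a shared secret. The device IDs, secret store, and payload format are hypothetical; a production deployment would pair this with hardware-backed key storage and TLS:

```python
# Toy device-authentication layer: each edge device shares a secret with
# the gateway and tags every message with an HMAC-SHA256 of the payload.
import hashlib
import hmac

DEVICE_SECRETS = {"sensor-01": b"per-device-shared-secret"}  # hypothetical store

def sign(device_id: str, payload: bytes) -> str:
    """Compute the hex HMAC tag a device attaches to its payload."""
    key = DEVICE_SECRETS[device_id]
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign(device_id, payload)
    return hmac.compare_digest(expected, tag)

tag = sign("sensor-01", b"temp=21.5")
assert verify("sensor-01", b"temp=21.5", tag)
assert not verify("sensor-01", b"temp=99.9", tag)  # tampered payload is rejected
```

The constant-time comparison matters: a naive `==` on tags can leak timing information that helps an attacker forge valid tags byte by byte.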


Preparing for 5G and Edge Integration

To fully leverage edge computing and 5G, businesses must take proactive steps to modernize their infrastructure. This includes:

1. Upgrading Existing Technology: Implementing the latest networking solutions, such as software-defined WANs (SD-WANs), enhances agility and efficiency.

2. Strengthening Security Policies: Establishing strict cybersecurity protocols and continuous monitoring systems can help detect and prevent threats.

3. Adopting Smarter Tech Solutions: Businesses should invest in advanced IoT solutions, AI-driven analytics, and smart automation to maximize the benefits of edge computing.

4. Anticipating Future Innovations: Staying ahead of technological advancements helps businesses adapt quickly and maintain a competitive edge.

5. Embracing Disruptive Technologies: Organizations that adopt augmented reality, virtual reality, and other emerging tech can create innovative solutions that redefine industry standards.

The transition to 5G-powered edge computing is not just about efficiency — it’s about security and sustainability. Businesses that invest in modernizing their infrastructure and implementing robust security measures will not only optimize their operations but also ensure long-term success in an increasingly digital world.



OpenAI Introduces European Data Residency to Strengthen Compliance with Local Regulations

 

OpenAI has officially launched data residency in Europe, enabling organizations to comply with regional data sovereignty requirements while using its AI-powered services.

Data residency refers to the physical storage location of an organization’s data and the legal frameworks that govern it. Many leading technology firms and cloud providers offer European data residency options to help businesses adhere to privacy and data protection laws such as the General Data Protection Regulation (GDPR), Germany’s Federal Data Protection Act, and the U.K.’s data protection regulations.

Several tech giants have already implemented similar measures. In October, GitHub introduced cloud data residency within the EU for Enterprise plan subscribers. AWS followed suit by launching a sovereign cloud for Europe, ensuring all metadata remains within the EU. Google also introduced data residency for AI processing for U.K. users of its Gemini 1.5 Flash model.

Starting Thursday, OpenAI customers using its API can opt to process data in Europe for "eligible endpoints." New ChatGPT Enterprise and Edu customers will also have the option to store customer content at rest within Europe. Data "at rest" refers to information that is not actively being transferred or accessed across networks.

With European data residency enabled, OpenAI will process API requests within the region without retaining any data, meaning AI model interactions will not be stored on company servers. If activated for ChatGPT, customer information—including conversations, user inputs, images, uploaded files, and custom bots—will be stored in-region. However, OpenAI clarifies that existing projects cannot be retroactively configured for European data residency at this time.

"We look forward to partnering with more organizations across Europe and around the world on their AI initiatives, while maintaining the highest standards of security, privacy, and compliance," OpenAI stated in a blog post on Thursday.

OpenAI has previously faced scrutiny from European regulators over its data handling practices. Authorities in Spain and Germany have launched investigations into ChatGPT’s data processing methods. In December, Italy’s data protection watchdog — which had briefly banned ChatGPT in the past—fined OpenAI €15 million ($15.6 million) for alleged violations of consumer data protection laws.

The debate over AI data storage extends beyond OpenAI. Chinese AI startup DeepSeek, which operates a large language model (LLM) and chatbot, processes user data within China, drawing regulatory attention.

Last year, the European Data Protection Board (EDPB) released guidelines for EU regulators investigating ChatGPT, addressing concerns such as the lawfulness of training data collection, transparency, and data accuracy.

The Future of Data Security Lies in Quantum-Safe Encryption

 


Cybersecurity experts and analysts have expressed growing concerns over the potential threat posed by quantum computing to modern cryptographic systems. Unlike conventional computers that rely on electronic circuits, quantum computers leverage the principles of quantum mechanics, which could enable them to break widely used encryption protocols. 

If realized, this advancement would compromise digital communications, rendering them as vulnerable as unprotected transmissions. However, this threat remains theoretical at present. Existing quantum computers lack the computational power necessary to breach standard encryption methods. According to a 2018 report by the National Academies of Sciences, Engineering, and Medicine, significant technological breakthroughs are still required before quantum computing can effectively decrypt the robust encryption algorithms that secure data across the internet. 

Despite the current limitations, researchers emphasize the importance of proactively developing quantum-resistant cryptographic solutions to mitigate future risks. Traditional computing systems operate on the fundamental principle that electrical signals exist in one of two distinct states, represented as binary bits—either zero or one. These bits serve as the foundation for storing and processing data in conventional computers. 

In contrast, quantum computers harness the principles of quantum mechanics, enabling a fundamentally different approach to data encoding and computation. Instead of binary bits, quantum systems utilize quantum bits, or qubits, which possess the ability to exist in multiple states simultaneously through a phenomenon known as superposition. 
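The superposition described above can be loosely illustrated with a classical simulation, in which a qubit is represented by two amplitudes and measurement returns 0 or 1 with probabilities equal to the squared magnitudes. This is only a sketch of the mathematics; a real quantum computer does not work by sampling a classical random number generator:

```python
# Classical illustration of a single qubit in equal superposition:
# measuring it yields 0 or 1 with probability |alpha|^2 and |beta|^2.
import math
import random

alpha, beta = 1 / math.sqrt(2), 1 / math.sqrt(2)  # amplitudes for |0> and |1>
assert abs(alpha**2 + beta**2 - 1.0) < 1e-9       # probabilities sum to 1

def measure() -> int:
    """Collapse the superposition: 0 with probability alpha^2, else 1."""
    return 0 if random.random() < alpha**2 else 1

random.seed(0)  # fixed seed so the sketch is reproducible
counts = [0, 0]
for _ in range(10_000):
    counts[measure()] += 1
print(counts)  # roughly an even split between 0 and 1
```

The power of real quantum hardware comes from interference between many such amplitudes at once, which no classical sampler reproduces efficiently.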

Unlike classical bits that strictly represent a zero or one, a qubit can embody a probabilistic combination of both states at the same time. This unique characteristic allows quantum computers to process and analyze information at an exponentially greater scale, offering unprecedented computational capabilities compared to traditional computing architectures. 

Leading technology firms have progressively integrated post-quantum cryptographic (PQC) solutions to enhance security against future quantum threats. 

Amazon introduced a post-quantum variant of TLS 1.3 for its AWS Key Management Service (KMS) in 2020, aligning it with evolving NIST recommendations. Apple incorporated the PQ3 quantum-resistant protocol into its iMessage encryption in 2024, leveraging the Kyber algorithm alongside elliptic-curve cryptography for dual-layer security. Cloudflare has supported post-quantum key agreements since 2023, utilizing the widely adopted X25519Kyber768 algorithm. 

Google Chrome enabled post-quantum cryptography by default in version 124, while Mozilla Firefox introduced support for X25519Kyber768, though manual activation remains necessary. VPN provider Mullvad integrates Classic McEliece and Kyber for key exchange, and Signal implemented the PQDXH protocol in 2023. Additionally, secure email service Tutanota employs post-quantum encryption for internal communications. Numerous cryptographic libraries, including OpenSSL and BoringSSL, further facilitate PQC adoption, supported by the Open Quantum Safe initiative. 

Modern encryption relies on advanced mathematical algorithms to convert plaintext data into secure, encrypted messages for storage and transmission. These cryptographic processes operate using digital keys, which determine how data is encoded and decoded. Encryption is broadly categorized into two types: symmetric and asymmetric. 

Symmetric encryption employs a single key for both encryption and decryption, offering high efficiency, making it the preferred method for securing stored data and communications. In contrast, asymmetric encryption, also known as public-key cryptography, utilizes a key pair—one publicly shared for encryption and the other privately held for decryption. This method is essential for securely exchanging symmetric keys and digitally verifying identities through signatures on messages, documents, and certificates. 
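As a toy illustration of the symmetric case, the sketch below derives a keystream from a shared key with SHA-256 and XORs it with the message, so the same key both encrypts and decrypts. This construction is for intuition only and should never replace a vetted cipher such as AES-GCM:

```python
# Toy symmetric cipher: a SHA-256-based keystream XORed with the data.
# Applying the same operation with the same key undoes the encryption.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Expand the key into a pseudorandom keystream of the given length."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR is its own inverse."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret key"
ciphertext = xor_cipher(key, b"attack at dawn")
assert xor_cipher(key, ciphertext) == b"attack at dawn"  # same key decrypts
```

The asymmetric case cannot be sketched this briefly: it relies on mathematically linked key pairs (for example RSA or elliptic-curve keys), which is precisely the machinery quantum computers threaten.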

Secure websites utilizing HTTPS protocols rely on public-key cryptography to authenticate certificates before establishing symmetric encryption for communication. Given that most digital systems employ both cryptographic techniques, ensuring their robustness remains critical to maintaining cybersecurity. Quantum computing presents a significant cybersecurity challenge, with the potential to break modern cryptographic algorithms in mere minutes—tasks that would take even the most advanced supercomputers thousands of years. 

The moment when a quantum computer becomes capable of compromising widely used encryption is known as Q-Day, and such a machine is termed a Cryptographically Relevant Quantum Computer (CRQC). While governments and defense organizations are often seen as primary targets for cyber threats, the implications of quantum computing extend far beyond these sectors. With public-key cryptography rendered ineffective, all industries risk exposure to cyberattacks. 

Critical infrastructure, including power grids, water supplies, public transportation, telecommunications, financial markets, and healthcare systems, could face severe disruptions, posing both economic and life-threatening consequences. Notably, quantum threats will not be limited to entities utilizing quantum technology; any business or individual relying on current encryption methods remains at risk. Ensuring quantum-resistant cryptographic solutions is therefore imperative to safeguarding digital security in the post-quantum era. 

As the digital landscape continues to evolve, the inevitability of quantum computing necessitates a proactive approach to cybersecurity. The widespread adoption of quantum-resistant cryptographic solutions is no longer a theoretical consideration but a fundamental requirement for ensuring long-term data security. 

Governments, enterprises, and technology providers must collaborate to accelerate the development and deployment of post-quantum cryptography to safeguard critical infrastructure and sensitive information. While the full realization of quantum threats remains in the future, the urgency to act is now. Organizations must assess their current security frameworks, invest in quantum-safe encryption technologies, and adhere to emerging standards set forth by cryptographic experts.

The transition to quantum-resilient security will be a complex but essential undertaking to maintain the integrity, confidentiality, and resilience of digital communications. By preparing today, industries can mitigate the risks posed by quantum advancements and uphold the security of global digital ecosystems in the years to come.

Finance Ministry Bans Use of AI Tools Like ChatGPT and DeepSeek in Government Work

 


The Ministry of Finance, under Nirmala Sitharaman’s leadership, has issued a directive prohibiting employees from using artificial intelligence (AI) tools such as ChatGPT and DeepSeek for official work. The decision comes amid concerns about data security, as these AI-powered platforms process and store information externally, potentially putting confidential government data at risk.  


Why Has the Finance Ministry Banned AI Tools?  

AI chatbots and virtual assistants have gained popularity for their ability to generate text, answer questions, and assist with tasks. However, since these tools rely on cloud-based processing, there is a risk that sensitive government information could be exposed or accessed by unauthorized parties.  

The ministry’s concern is that official documents, financial records, and policy decisions could unintentionally be shared with external AI systems, making them vulnerable to cyber threats or misuse. By restricting their use, the government aims to safeguard national data and prevent potential security breaches.  


Public Reactions and Social Media Buzz

The announcement quickly sparked discussions online, with many users sharing humorous takes on the decision. Some questioned how government employees would manage their workload without AI assistance, while others speculated whether Indian AI tools like Ola Krutrim might be an approved alternative.  

A few of the popular reactions included:  

1. "How will they complete work on time now?" 

2. "So, only Ola Krutrim is allowed?"  

3. "The Finance Ministry is switching back to traditional methods."  

4. "India should develop its own AI instead of relying on foreign tools."  


India’s Position in the Global AI Race

With AI development accelerating worldwide, several countries are striving to build their own advanced models. China’s DeepSeek has emerged as a major competitor to OpenAI’s ChatGPT and Google’s Gemini, increasing the competition in the field.  

The U.S. has imposed trade restrictions on Chinese AI technology, leading to growing tensions in the tech industry. Meanwhile, India has yet to launch an AI model capable of competing globally, but the government’s interest in regulating AI suggests that future developments could be on the horizon.  

While the Finance Ministry’s move prioritizes data security, it also raises questions about efficiency. AI tools help streamline work processes, and their restriction could lead to slower operations in certain departments.  

Experts suggest that India should focus on developing AI models that are secure and optimized for government use, ensuring that innovation continues without compromising confidential information.  

For now, the Finance Ministry’s stance reinforces the need for careful regulation of AI technologies, ensuring that security remains a top priority in government operations.



Hackers Steal Login Details via Fake Microsoft ADFS Login Pages


A help desk phishing campaign targeted a company's Microsoft Active Directory Federation Services (ADFS) with fake login pages, stealing credentials and bypassing multi-factor authentication (MFA) protections.

The campaign targeted healthcare, government, and education organizations, hitting around 150 victims, according to Abnormal Security. The attackers aim to gain access to corporate email accounts, either to send phishing emails to further victims inside the company or to launch financially motivated campaigns such as business email compromise (BEC), in which money is sent directly to attacker-controlled accounts. 

Fake Microsoft ADFS login pages 

Microsoft's ADFS is an authentication mechanism that lets users sign in once and access multiple applications and services, sparing them from entering credentials repeatedly. 

ADFS is generally used by large businesses, as it offers single sign-on (SSO) for internal and cloud-based apps. 

The threat actors send emails to victims spoofing their company's IT team, asking them to sign in to update their security configurations or accept new policies. 

How victims are trapped

When a victim clicks the embedded button, it takes them to a phishing site that looks identical to their company's authentic ADFS sign-in page. The fake page then asks the victim to enter their username, password, and MFA code, or tricks them into approving a push notification.
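One practical defence against this kind of page spoofing is strict hostname checking before credentials are ever entered. The sketch below uses hypothetical hostnames (`adfs.example.com` stands in for an organization's real ADFS host); the point is that a lookalike phishing domain fails an exact-match check even when the page itself is pixel-perfect:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the organization's real ADFS/SSO hostnames.
LEGITIMATE_SSO_HOSTS = {"adfs.example.com", "sts.example.com"}

def is_legitimate_sso_url(url: str) -> bool:
    """Return True only if the URL uses HTTPS and an allowlisted host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in LEGITIMATE_SSO_HOSTS

# A spoofed page typically sits on a lookalike host, which fails exact matching.
print(is_legitimate_sso_url("https://adfs.example.com/adfs/ls/"))       # True
print(is_legitimate_sso_url("https://adfs.example-support.com/login"))  # False
```

In practice this kind of check belongs in a secure email gateway or browser policy rather than in the hands of individual users, but the exact-match principle is the same.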

What do the experts say

The security report by Abnormal states, "The phishing templates also include forms designed to capture the specific second factor required to authenticate the target's account, based on the organization's configured MFA settings." It adds, "Abnormal observed templates targeting multiple commonly used MFA mechanisms, including Microsoft Authenticator, Duo Security, and SMS verification."

After the victim submits all the information, they are redirected to the real sign-in page to avoid suspicion and make the process look authentic. 

Meanwhile, the threat actors immediately use the stolen credentials to sign in to the victim's account, steal sensitive data, create new email filter rules, and attempt lateral phishing. 

According to Abnormal, the threat actors used Private Internet Access VPN to hide their location and obtain an IP address closer to the targeted organization.  

Dangers of AI Phishing Scams and How to Spot Them


Supercharged AI phishing campaigns are extremely difficult to spot. Attackers now use AI to produce phishing messages with better grammar, structure, and spelling, making them appear legitimate and more likely to trick users. In this blog, we look at how to spot AI-driven scams and avoid becoming a victim.

Analyze the Language of the Email Carefully

In the past, one quick skim was enough to recognize that something was off with an email; incorrect grammar and laughable typos were the typical giveaways. Since scammers now use generative AI language models, most phishing messages have flawless grammar.

But there is hope: AI-generated text can still be identified. Keep an eye out for an unnatural flow of sentences; if everything seems too perfect, chances are it's AI.

Red flags are everywhere, even in emails

Though AI has made phishing scams harder for users to spot, they still exhibit classic warning signs, so the usual tips for detecting phishing emails still apply.

In most cases, scammers mimic legitimate businesses and hope you won't notice. For instance, instead of an official "info@members.hotstar.com" email address, you may see something like "info@members.hotstar-support.com." Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are harder to notice, but they are a major tip-off that you are on a malicious website or interacting with a fake business.
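The lookalike-domain pattern described above can be checked mechanically: a domain that is very similar to, but not exactly, an official one is suspicious. A minimal sketch using Python's standard library, with the Hotstar address from the article as the hypothetical official domain:

```python
import difflib

# Hypothetical official sender domain (from the article's example).
OFFICIAL_DOMAIN = "members.hotstar.com"

def lookalike_score(domain: str, official: str = OFFICIAL_DOMAIN) -> float:
    """Similarity in [0, 1]; a high score that is not exactly 1.0
    suggests a lookalike domain rather than the real one."""
    return difflib.SequenceMatcher(None, domain.lower(), official.lower()).ratio()

for sender in ["members.hotstar.com", "members.hotstar-support.com", "paypal.com"]:
    score = lookalike_score(sender)
    suspicious = sender != OFFICIAL_DOMAIN and score > 0.8
    print(f"{sender}: score={score:.2f}, suspicious={suspicious}")
```

Real mail filters use registered-domain parsing and curated lookalike lists rather than raw string similarity, but the idea of flagging "close but not equal" is the same.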

Beware of Deepfake video scams

The biggest issue these days is combating deepfakes, which are also difficult to spot. 

The attacker creates realistic video clips from photo and video prompts and uses video-calling services like Zoom or FaceTime to trap potential victims (especially elders and senior citizens) into giving away sensitive data. 

One might think that only older people fall for deepfakes, but they are so sophisticated that even experts fall prey to them. In one famous incident in Hong Kong, scammers deepfaked a company's CFO and stole HK$200 million (roughly $25 million).

AI is advancing and becoming stronger every day. It is a double-edged sword, both a blessing and a curse. One should tread the ethical lines carefully and hope not to fall to the dark side of AI.

RSA Encryption Breached by Quantum Computing Advancement

 


A large proportion of the modern digital world runs on everyday transactions that take place on the internet, from simple purchases to the exchange of highly sensitive and confidential corporate data. In this era of rapid technological advancement, quantum computing is perceived both as a transformative opportunity and as a potential security threat. 

Quantum computing has been generating considerable attention in recent years, but the 2048-bit RSA standard, in use for decades, remains beyond the reach of these advances. Fuelled by widespread rumours, several cybersecurity experts have nonetheless expressed concern about quantum technologies potentially compromising military-grade encryption.

However, these developments have not yet threatened robust encryption protocols such as AES and TLS, nor high-security infrastructures such as SSL or PKI. Quantum computing is a profound advancement over classical computing: it applies the principles of quantum mechanics to perform computations far beyond classical reach. 

Despite the inherent complexity of this technology, it has the potential to bring enormous benefits to fields such as pharmaceutical research, manufacturing, financial modelling, and cybersecurity. A quantum computer exploits the unique properties of subatomic particles to perform high-speed calculations, and it is expected to transform the way problems are solved across a wide range of industries. 

Although quantum-resistant encryption has received much attention lately, ongoing research remains essential to ensure the long-term security of our data. A notable milestone came in 2024, when researchers reported that they had used a quantum computer to compromise RSA encryption, a widely used cryptographic system. 

Data encryption is an essential safeguard for sensitive information transferred over digital networks. It converts plaintext into an unintelligible format that can only be decrypted with a cryptographic key: a mathematical value known to the sender and the recipient, and only to them. This shared secret ensures that only authorized parties can access the original information. 

RSA works with cryptographic key pairs consisting of a public key and a private key. Plaintext is encrypted with the public key into ciphertext, which can only be decrypted with the corresponding private key. The security of RSA rests on the fact that it is computationally hard to factor large composite numbers formed by multiplying two large prime numbers together. 

This is why RSA encryption is considered highly secure. As an example, multiplying two 300-digit prime numbers together yields a roughly 600-digit composite number whose factorization by classical computing would take an extraordinarily long time, potentially longer than the estimated lifespan of the universe.
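The public/private key mechanics described above can be illustrated with a toy example. The primes here are deliberately tiny so the arithmetic is visible; real RSA-2048 uses primes of roughly 300 decimal digits:

```python
from math import gcd

# Toy RSA with deliberately tiny primes -- for illustration only.
p, q = 61, 53                 # two small primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient (3120)
e = 17                        # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (n, d)
print(ciphertext, recovered)       # recovered == 42
```

Breaking this toy key means factoring n = 3233 back into 61 and 53, which is trivial here but becomes infeasible for classical computers as the primes grow to hundreds of digits.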

Despite this resilience in securing digital communications, the advent of quantum computing presents a formidable challenge to RSA. Using Shor's algorithm, a quantum computer can factor large numbers exponentially faster than classical computers, exploiting quantum superposition to carry out many calculations simultaneously. 

A key component of this process is the Quantum Fourier Transform (QFT), which extracts the periodic values critical to the factorization step. Theoretically, a sufficiently large quantum computer could factor the integers underpinning RSA encryption in a matter of hours or even minutes, rendering the encryption insecure. 
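The number theory behind Shor's algorithm can be sketched classically for a tiny modulus: find the period r of a^x mod N (the step the QFT accelerates on a quantum computer), then derive the factors from it. This brute-force version only works for very small N, which is precisely the point; the period search is what blows up classically:

```python
from math import gcd

def factor_via_period(N: int, a: int):
    """Classical sketch of the arithmetic behind Shor's algorithm:
    find the order r of a mod N by brute force (the step a quantum
    computer accelerates via the Quantum Fourier Transform), then
    split N using gcd(a**(r//2) +/- 1, N)."""
    g = gcd(a, N)
    if g != 1:
        return g, N // g                  # lucky: a already shares a factor
    r = 1
    while pow(a, r, N) != 1:              # brute-force period finding
        r += 1
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None                       # unlucky choice of a; retry with another
    x = pow(a, r // 2, N)
    return gcd(x - 1, N), gcd(x + 1, N)

print(factor_via_period(15, 7))           # (3, 5): the period of 7 mod 15 is 4
```

For N = 15 and a = 7, the period is r = 4, so x = 7^2 mod 15 = 4 and the factors fall out as gcd(3, 15) = 3 and gcd(5, 15) = 5.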

As quantum computing advances, cryptographic systems such as RSA face an increasing threat, making the development of quantum-resistant encryption methods essential. Quantum computers capable of breaking these mechanisms would pose a substantial challenge to current cybersecurity frameworks, underscoring the importance of continued work on quantum-resistant cryptography. 

Classical computing represents data with binary bits, each either a zero or a one. Quantum computers instead use quantum bits, or qubits, which can occupy multiple states simultaneously thanks to the superposition principle. This fundamental distinction allows quantum computers to perform certain highly complex computations much faster than classical machines. 

To illustrate the magnitude of this progress, Google reported that its quantum processor completed in seconds a calculation that conventional computing would have taken approximately 10,000 years to finish. Among the many domains where quantum computing can be applied, its ability to rapidly process vast datasets offers a significant advantage in fields such as artificial intelligence and machine learning. 

This computational power also raises cybersecurity concerns: it could undermine existing encryption protocols by enabling the decryption of secure data at an unprecedented rate. Quantum computing makes it conceivable that long-established cryptographic systems could be compromised, raising serious questions about the future security of the internet. However, the recent study by Chinese researchers comes with several important caveats. 

In the experiment, the RSA encryption key was based on a 50-bit integer, which is considerably smaller and less complex than the keys used in today's far more sophisticated security protocols. RSA relies on the mathematical difficulty of factoring large integers that are the product of two large prime numbers. 

The larger the integers, the exponentially harder the factorization, and the greater the security. Although the Shanghai University study demonstrated that 50-bit integers can be decrypted successfully, this achievement, as Ron Rivest, Adi Shamir, and Leonard Adleman have stressed, has no bearing on breaking the 2048-bit encryption commonly used in current RSA implementations, and it is far from a breakthrough against RSA. The experiment serves as a proof of concept and a warning of potential future threats to global cybersecurity rather than an immediate danger. 
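To put the 50-bit figure in perspective, a 50-bit RSA modulus needs no quantum computer at all: classical algorithms such as Pollard's rho factor it in a fraction of a second on ordinary hardware. A sketch, building a hypothetical ~50-bit semiprime from two small primes chosen near 2^24 and 2^25:

```python
import random
from math import gcd, isqrt

def is_prime(n: int) -> bool:
    """Trial division; fine for the small primes used here."""
    if n < 2:
        return False
    return all(n % p for p in range(2, isqrt(n) + 1))

def next_prime(n: int) -> int:
    while not is_prime(n):
        n += 1
    return n

def pollard_rho(n: int) -> int:
    """Classical Pollard's rho: returns a nontrivial factor of composite n."""
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n           # tortoise: one step
            y = (y * y + c) % n
            y = (y * y + c) % n           # hare: two steps
            d = gcd(abs(x - y), n)
        if d != n:                        # d == n means a failed cycle; retry
            return d

# Hypothetical ~50-bit modulus built from two small primes.
p = next_prime(2**24 + 1)
q = next_prime(2**25 + 1)
n = p * q
f = pollard_rho(n)
print(sorted((f, n // f)) == sorted((p, q)))   # True: factored almost instantly
```

By contrast, the best known classical algorithms would take longer than the age of the universe on a genuine 2048-bit modulus, which is the gap a cryptographically relevant quantum computer would have to close.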

The study demonstrated that quantum computers can decrypt relatively simple RSA encryption keys, but they cannot yet crack the more robust protocols currently protecting sensitive digital communications. As RSA Security has highlighted, the RSA algorithm underpins encryption frameworks across the web, so almost every internet user has a stake in whether these cryptographic protections remain reliable. While this experiment does not signal an imminent crisis, it underscores the importance of continued vigilance as quantum computing technology advances.