
Understanding the Importance of 5G Edge Security

 


As technology advances, the volume of data being generated daily has reached unprecedented levels. In 2024 alone, people are expected to create over 147 zettabytes of data. This rapid growth presents major challenges for businesses in terms of processing, transferring, and safeguarding vast amounts of information efficiently.

Traditional data processing occurs in centralized locations like data centers, but as the demand for real-time insights increases, edge computing is emerging as a game-changer. By handling data closer to its source, such as factories or remote locations, edge computing minimizes delays, enhances efficiency, and enables faster decision-making. However, its widespread adoption also introduces new security risks that organizations must address.

Why Edge Computing Matters

Edge computing reduces the reliance on centralized data centers by allowing devices to process data locally. This approach improves operational speed, reduces network congestion, and enhances overall efficiency. In industries like manufacturing, logistics, and healthcare, edge computing enables real-time monitoring and automation, helping businesses streamline processes and respond to changes instantly.

For example, a UK port leveraging a private 5G network has successfully integrated IoT sensors, AI-driven logistics, and autonomous vehicles to enhance operational efficiency. These advancements allow for better tracking of assets, improved environmental monitoring, and seamless automation of critical tasks, positioning the port as an industry leader.

The Role of 5G in Strengthening Security

While edge computing offers numerous advantages, its effectiveness relies on a robust network. This is where 5G comes into play. The high-speed, low-latency connectivity provided by 5G enables real-time data processing, improves security capabilities, and supports large-scale deployments of IoT devices.

However, the expansion of connected devices also increases vulnerability to cyber threats. Securing these devices requires a multi-layered approach, including:

1. Strong authentication methods to verify users and devices

2. Data encryption to protect information during transmission and storage

3. Regular software updates to address emerging security threats

4. Network segmentation to limit access and contain potential breaches

Integrating these measures into a 5G-powered edge network ensures that businesses not only benefit from increased speed and efficiency but also maintain a secure digital environment.
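As a rough illustration of two of the measures above, the sketch below pairs HMAC-based device authentication with a simple subnet allowlist for network segmentation. Everything here (the key, the subnet, the function names) is hypothetical; a production edge deployment would use certificate-based device identity and enforced network policy rather than application-level checks.

```python
import hmac
import hashlib
import secrets
import ipaddress

# Hypothetical pre-shared key provisioned to an edge device.
DEVICE_KEY = secrets.token_bytes(32)

def sign_challenge(key: bytes, challenge: bytes) -> bytes:
    """Device side: answer an authentication challenge with an HMAC tag."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_device(key: bytes, challenge: bytes, tag: bytes) -> bool:
    """Server side: constant-time check of the device's response."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Simple segmentation rule: only a dedicated subnet may reach the edge API.
ALLOWED_SEGMENT = ipaddress.ip_network("10.20.0.0/24")

def segment_allows(client_ip: str) -> bool:
    return ipaddress.ip_address(client_ip) in ALLOWED_SEGMENT

# Challenge-response round trip, then a segmentation check.
challenge = secrets.token_bytes(16)
tag = sign_challenge(DEVICE_KEY, challenge)
assert verify_device(DEVICE_KEY, challenge, tag)
assert segment_allows("10.20.0.42")
assert not segment_allows("192.168.1.5")
```

The constant-time comparison (`hmac.compare_digest`) matters even in a sketch: naive byte comparison can leak timing information to an attacker probing the authentication endpoint.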


Preparing for 5G and Edge Integration

To fully leverage edge computing and 5G, businesses must take proactive steps to modernize their infrastructure. This includes:

1. Upgrading Existing Technology: Implementing the latest networking solutions, such as software-defined WANs (SD-WANs), enhances agility and efficiency.

2. Strengthening Security Policies: Establishing strict cybersecurity protocols and continuous monitoring systems can help detect and prevent threats.

3. Adopting Smarter Tech Solutions: Businesses should invest in advanced IoT solutions, AI-driven analytics, and smart automation to maximize the benefits of edge computing.

4. Anticipating Future Innovations: Staying ahead of technological advancements helps businesses adapt quickly and maintain a competitive edge.

5. Embracing Disruptive Technologies: Organizations that adopt augmented reality, virtual reality, and other emerging tech can create innovative solutions that redefine industry standards.

The transition to 5G-powered edge computing is not just about efficiency — it’s about security and sustainability. Businesses that invest in modernizing their infrastructure and implementing robust security measures will not only optimize their operations but also ensure long-term success in an increasingly digital world.



OpenAI Introduces European Data Residency to Strengthen Compliance with Local Regulations

 

OpenAI has officially launched data residency in Europe, enabling organizations to comply with regional data sovereignty requirements while using its AI-powered services.

Data residency refers to the physical storage location of an organization’s data and the legal frameworks that govern it. Many leading technology firms and cloud providers offer European data residency options to help businesses adhere to privacy and data protection laws such as the General Data Protection Regulation (GDPR), Germany’s Federal Data Protection Act, and the U.K.’s data protection regulations.

Several tech giants have already implemented similar measures. In October, GitHub introduced cloud data residency within the EU for Enterprise plan subscribers. AWS followed suit by launching a sovereign cloud for Europe, ensuring all metadata remains within the EU. Google also introduced data residency for AI processing for U.K. users of its Gemini 1.5 Flash model.

Starting Thursday, OpenAI customers using its API can opt to process data in Europe for "eligible endpoints." New ChatGPT Enterprise and Edu customers will also have the option to store customer content at rest within Europe. Data "at rest" refers to information that is not actively being transferred or accessed across networks.

With European data residency enabled, OpenAI will process API requests within the region without retaining any data, meaning AI model interactions will not be stored on company servers. If activated for ChatGPT, customer information—including conversations, user inputs, images, uploaded files, and custom bots—will be stored in-region. However, OpenAI clarifies that existing projects cannot be retroactively configured for European data residency at this time.
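A minimal sketch of how a client application might select a regional endpoint before making API calls. The EU base URL shown follows OpenAI's data-residency announcement, but treat it as an assumption and confirm the current value in the official documentation; the helper function itself is purely illustrative.

```python
# Sketch: choosing a regional API base URL before constructing a client.
# The "eu" endpoint below is assumed from OpenAI's data-residency
# announcement; verify it against the official docs before relying on it.

BASE_URLS = {
    "default": "https://api.openai.com/v1",
    "eu": "https://eu.api.openai.com/v1",  # assumed EU-residency endpoint
}

def client_config(region: str = "default") -> dict:
    """Return connection settings for the chosen data-residency region."""
    if region not in BASE_URLS:
        raise ValueError(f"unknown region: {region}")
    return {"base_url": BASE_URLS[region]}

cfg = client_config("eu")
# Pass cfg["base_url"] to the HTTP or SDK client of your choice.
```

Note that, per OpenAI's announcement, residency applies to new projects and eligible endpoints only; selecting a regional URL does not retroactively migrate existing data.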

"We look forward to partnering with more organizations across Europe and around the world on their AI initiatives, while maintaining the highest standards of security, privacy, and compliance," OpenAI stated in a blog post on Thursday.

OpenAI has previously faced scrutiny from European regulators over its data handling practices. Authorities in Spain and Germany have launched investigations into ChatGPT’s data processing methods. In December, Italy’s data protection watchdog — which had briefly banned ChatGPT in the past—fined OpenAI €15 million ($15.6 million) for alleged violations of consumer data protection laws.

The debate over AI data storage extends beyond OpenAI. Chinese AI startup DeepSeek, which operates a large language model (LLM) and chatbot, processes user data within China, drawing regulatory attention.

Last year, the European Data Protection Board (EDPB) released guidelines for EU regulators investigating ChatGPT, addressing concerns such as the lawfulness of training data collection, transparency, and data accuracy.

The Future of Data Security Lies in Quantum-Safe Encryption

 


Cybersecurity experts and analysts have expressed growing concerns over the potential threat posed by quantum computing to modern cryptographic systems. Unlike conventional computers that rely on electronic circuits, quantum computers leverage the principles of quantum mechanics, which could enable them to break widely used encryption protocols. 

If realized, this advancement would compromise digital communications, rendering them as vulnerable as unprotected transmissions. However, this threat remains theoretical at present. Existing quantum computers lack the computational power necessary to breach standard encryption methods. According to a 2018 report by the National Academies of Sciences, Engineering, and Medicine, significant technological breakthroughs are still required before quantum computing can effectively decrypt the robust encryption algorithms that secure data across the internet. 

Despite the current limitations, researchers emphasize the importance of proactively developing quantum-resistant cryptographic solutions to mitigate future risks. Traditional computing systems operate on the fundamental principle that electrical signals exist in one of two distinct states, represented as binary bits—either zero or one. These bits serve as the foundation for storing and processing data in conventional computers. 

In contrast, quantum computers harness the principles of quantum mechanics, enabling a fundamentally different approach to data encoding and computation. Instead of binary bits, quantum systems utilize quantum bits, or qubits, which possess the ability to exist in multiple states simultaneously through a phenomenon known as superposition. 

Unlike classical bits that strictly represent a zero or one, a qubit can embody a probabilistic combination of both states at the same time. This unique characteristic allows quantum computers to process and analyze information at an exponentially greater scale, offering unprecedented computational capabilities compared to traditional computing architectures. Leading technology firms have progressively integrated post-quantum cryptographic (PQC) solutions to enhance security against future quantum threats. 
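The superposition idea can be made concrete in a few lines. The sketch below models a single qubit as a pair of complex amplitudes and simulates measurement, which collapses the state to 0 or 1 with probabilities given by the squared amplitudes. This is a classical simulation for intuition only, not quantum computation.

```python
import random
import math

# A qubit state a|0> + b|1> is a pair of amplitudes with |a|^2 + |b|^2 = 1.
# Measurement collapses it: outcome 0 with probability |a|^2, else 1.

def measure(a: complex, b: complex) -> int:
    p0 = abs(a) ** 2
    return 0 if random.random() < p0 else 1

# Equal superposition: a = b = 1/sqrt(2), so 0 and 1 each occur ~50% of the time.
a = b = 1 / math.sqrt(2)
samples = [measure(a, b) for _ in range(10_000)]
ones = sum(samples)
print(f"fraction of 1s: {ones / len(samples):.2f}")
```

Each individual measurement yields a definite bit; the probabilistic mixture only becomes visible across many runs, which is why quantum algorithms are designed so that the answer is encoded in measurement statistics.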

Amazon introduced a post-quantum variant of TLS 1.3 for its AWS Key Management Service (KMS) in 2020, aligning it with evolving NIST recommendations. Apple incorporated the PQ3 quantum-resistant protocol into its iMessage encryption in 2024, leveraging the Kyber algorithm alongside elliptic-curve cryptography for dual-layer security. Cloudflare has supported post-quantum key agreements since 2023, utilizing the widely adopted X25519Kyber768 algorithm. 

Google Chrome enabled post-quantum cryptography by default in version 124, while Mozilla Firefox introduced support for X25519Kyber768, though manual activation remains necessary. VPN provider Mullvad integrates Classic McEliece and Kyber for key exchange, and Signal implemented the PQDXH protocol in 2023. Additionally, secure email service Tutanota employs post-quantum encryption for internal communications. Numerous cryptographic libraries, including OpenSSL and BoringSSL, further facilitate PQC adoption, supported by the Open Quantum Safe initiative. 

Modern encryption relies on advanced mathematical algorithms to convert plaintext data into secure, encrypted messages for storage and transmission. These cryptographic processes operate using digital keys, which determine how data is encoded and decoded. Encryption is broadly categorized into two types: symmetric and asymmetric. 

Symmetric encryption employs a single key for both encryption and decryption, offering high efficiency, making it the preferred method for securing stored data and communications. In contrast, asymmetric encryption, also known as public-key cryptography, utilizes a key pair—one publicly shared for encryption and the other privately held for decryption. This method is essential for securely exchanging symmetric keys and digitally verifying identities through signatures on messages, documents, and certificates. 

Secure websites utilizing HTTPS protocols rely on public-key cryptography to authenticate certificates before establishing symmetric encryption for communication. Given that most digital systems employ both cryptographic techniques, ensuring their robustness remains critical to maintaining cybersecurity. Quantum computing presents a significant cybersecurity challenge, with the potential to break modern cryptographic algorithms in mere minutes—tasks that would take even the most advanced supercomputers thousands of years. 
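The division of labor between the two techniques can be sketched as a toy hybrid scheme: a textbook RSA key pair (with deliberately tiny, insecure primes) transports a symmetric session key, which then encrypts the actual message. All numbers and helpers here are illustrative only; real systems use 2048-bit moduli, padding schemes, and vetted ciphers such as AES.

```python
import hashlib
import secrets

# --- Toy asymmetric part: textbook RSA with tiny primes (wildly insecure,
# --- for illustration only).
p, q = 61, 53
n = p * q                  # public modulus
e = 17                     # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # private exponent (modular inverse of e)

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)

# --- Toy symmetric part: hash-derived keystream XORed with the message.
def stream_cipher(key: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# Hybrid flow: RSA transports the session key; the fast symmetric cipher
# does the bulk encryption, mirroring how HTTPS combines the two.
session_key = secrets.randbelow(n - 2) + 2        # small enough for toy RSA
wrapped = rsa_encrypt(session_key)                # sent alongside ciphertext
ciphertext = stream_cipher(session_key.to_bytes(4, "big"), b"meet at noon")

recovered_key = rsa_decrypt(wrapped)
plaintext = stream_cipher(recovered_key.to_bytes(4, "big"), ciphertext)
assert plaintext == b"meet at noon"
```

The same XOR function both encrypts and decrypts because XORing with the keystream twice cancels out, which is exactly the efficiency property that makes symmetric ciphers the workhorse once keys are exchanged.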

The moment when a quantum computer becomes capable of compromising widely used encryption is known as Q-Day, and such a machine is termed a Cryptographically Relevant Quantum Computer (CRQC). While governments and defense organizations are often seen as primary targets for cyber threats, the implications of quantum computing extend far beyond these sectors. With public-key cryptography rendered ineffective, all industries risk exposure to cyberattacks. 

Critical infrastructure, including power grids, water supplies, public transportation, telecommunications, financial markets, and healthcare systems, could face severe disruptions, posing both economic and life-threatening consequences. Notably, quantum threats will not be limited to entities utilizing quantum technology; any business or individual relying on current encryption methods remains at risk. Ensuring quantum-resistant cryptographic solutions is therefore imperative to safeguarding digital security in the post-quantum era. 

As the digital landscape continues to evolve, the inevitability of quantum computing necessitates a proactive approach to cybersecurity. The widespread adoption of quantum-resistant cryptographic solutions is no longer a theoretical consideration but a fundamental requirement for ensuring long-term data security. 

Governments, enterprises, and technology providers must collaborate to accelerate the development and deployment of post-quantum cryptography to safeguard critical infrastructure and sensitive information. While the full realization of quantum threats remains in the future, the urgency to act is now. Organizations must assess their current security frameworks, invest in quantum-safe encryption technologies, and adhere to emerging standards set forth by cryptographic experts.

The transition to quantum-resilient security will be a complex but essential undertaking to maintain the integrity, confidentiality, and resilience of digital communications. By preparing today, industries can mitigate the risks posed by quantum advancements and uphold the security of global digital ecosystems in the years to come.

Finance Ministry Bans Use of AI Tools Like ChatGPT and DeepSeek in Government Work

 


The Ministry of Finance, under Nirmala Sitharaman’s leadership, has issued a directive prohibiting employees from using artificial intelligence (AI) tools such as ChatGPT and DeepSeek for official work. The decision stems from data security concerns: these AI-powered platforms process and store information externally, potentially putting confidential government data at risk.


Why Has the Finance Ministry Banned AI Tools?  

AI chatbots and virtual assistants have gained popularity for their ability to generate text, answer questions, and assist with tasks. However, since these tools rely on cloud-based processing, there is a risk that sensitive government information could be exposed or accessed by unauthorized parties.  

The ministry’s concern is that official documents, financial records, and policy decisions could unintentionally be shared with external AI systems, making them vulnerable to cyber threats or misuse. By restricting their use, the government aims to safeguard national data and prevent potential security breaches.  


Public Reactions and Social Media Buzz

The announcement quickly sparked discussions online, with many users sharing humorous takes on the decision. Some questioned how government employees would manage their workload without AI assistance, while others speculated whether Indian AI tools like Ola Krutrim might be an approved alternative.  

A few of the popular reactions included:  

1. "How will they complete work on time now?" 

2. "So, only Ola Krutrim is allowed?"  

3. "The Finance Ministry is switching back to traditional methods."  

4. "India should develop its own AI instead of relying on foreign tools."  


India’s Position in the Global AI Race

With AI development accelerating worldwide, several countries are striving to build their own advanced models. China’s DeepSeek has emerged as a major competitor to OpenAI’s ChatGPT and Google’s Gemini, increasing the competition in the field.  

The U.S. has imposed trade restrictions on Chinese AI technology, leading to growing tensions in the tech industry. Meanwhile, India has yet to launch an AI model capable of competing globally, but the government’s interest in regulating AI suggests that future developments could be on the horizon.  

While the Finance Ministry’s move prioritizes data security, it also raises questions about efficiency. AI tools help streamline work processes, and their restriction could lead to slower operations in certain departments.  

Experts suggest that India should focus on developing AI models that are secure and optimized for government use, ensuring that innovation continues without compromising confidential information.  

For now, the Finance Ministry’s stance reinforces the need for careful regulation of AI technologies, ensuring that security remains a top priority in government operations.



Hackers Steal Login Details via Fake Microsoft ADFS Login Pages

A help desk phishing campaign has been targeting organizations' Microsoft Active Directory Federation Services (ADFS) with fake login pages, stealing credentials and bypassing multi-factor authentication (MFA) protections.

The campaign has hit healthcare, government, and education organizations, targeting around 150 victims, according to Abnormal Security. The attacks aim to gain access to corporate email accounts, either to send emails to further victims inside a company or to launch financially motivated campaigns such as business email compromise (BEC), in which payments are redirected to attacker-controlled accounts.

Fake Microsoft ADFS Login Pages

Microsoft ADFS is an authentication mechanism that enables users to log in once and access multiple applications and services, sparing them from entering credentials repeatedly.

ADFS is generally used by large businesses, as it offers single sign-on (SSO) for internal and cloud-based apps. 

The threat actors send emails spoofing the victims' company IT team, asking them to sign in to update their security configuration or accept the latest policies.

How victims are trapped

When victims click the embedded button, they are taken to a phishing site that looks identical to their company's genuine ADFS sign-in page. The fake page then asks the victim to enter their username, password, and MFA code, or tricks them into approving a push notification.

What do the experts say

Abnormal's security report notes, "The phishing templates also include forms designed to capture the specific second factor required to authenticate the target's account, based on the organization's configured MFA settings." It adds, "Abnormal observed templates targeting multiple commonly used MFA mechanisms, including Microsoft Authenticator, Duo Security, and SMS verification."

After the victim submits all the information, they are redirected to the real sign-in page to avoid suspicion and make the process appear legitimate.

Meanwhile, the threat actors immediately use the stolen information to sign in to the victim's account, steal sensitive data, create new email filter rules, and attempt lateral phishing.

According to Abnormal, the threat actors used Private Internet Access VPN to hide their location and obtain an IP address with greater proximity to the targeted organization.

Dangers of AI Phishing Scams and How to Spot Them

Supercharged AI phishing campaigns are extremely challenging to spot. Attackers use AI to craft phishing messages with better grammar, structure, and spelling, making them appear legitimate. In this post, we look at how to spot AI-driven scams and avoid becoming a victim.


Analyze the Language of the Email Carefully

In the past, one quick skim was enough to recognize that something was off with an email; incorrect grammar and laughable typos were typically the giveaways. Since scammers now use generative AI language models, most phishing messages have flawless grammar.

But there is hope. Gen AI text can still be identified: keep an eye out for an unnatural flow of sentences. If everything seems too perfect, chances are it's AI.

Red flags are everywhere, even in emails

Though AI has made phishing scams harder to detect, they still exhibit some classic behavior, and the usual tips for spotting phishing emails still apply.

In most cases, scammers mimic businesses and hope you won't notice. For instance, instead of an official "info@members.hotstar.com" address, you may see something like "info@members.hotstar-support.com." Unrequested links or attachments are another huge tell. Mismatched URLs with subtle typos or extra words and letters are harder to notice, but they are a huge tip-off that you are on a malicious website or interacting with a fake business.
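A simple allowlist check captures the lookalike-domain trick described above. The trusted domains and sample addresses are made up for illustration; real mail filters combine checks like this with SPF, DKIM, and DMARC validation.

```python
# Sketch: flag sender domains that merely resemble a trusted one.
# Trusted list and sample addresses are hypothetical.

TRUSTED_DOMAINS = {"hotstar.com", "members.hotstar.com"}

def sender_domain(address: str) -> str:
    """Extract the domain part of an email address, normalized to lowercase."""
    return address.rsplit("@", 1)[-1].lower()

def is_trusted(address: str) -> bool:
    domain = sender_domain(address)
    # Accept only an exact match or a true subdomain of a trusted domain.
    # "hotstar-support.com" fails both checks even though it contains the
    # brand name, which is exactly the lookalike trick scammers rely on.
    return any(domain == t or domain.endswith("." + t) for t in TRUSTED_DOMAINS)

assert is_trusted("info@members.hotstar.com")
assert not is_trusted("info@members.hotstar-support.com")
assert not is_trusted("support@hotstar.com.evil.example")
```

The last case shows another common trick: embedding the trusted domain in the middle of a longer hostname, which the suffix check correctly rejects.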

Beware of Deepfake video scams

The biggest issue these days is combating deepfakes, which are also difficult to spot. 

The attacker creates realistic video clips from photo and video prompts and uses video-calling services like Zoom or FaceTime to trick potential victims (especially senior citizens) into giving away sensitive data.

One might assume that only the elderly fall for deepfakes, but given their sophistication, even experts fall prey to them. In one famous incident in Hong Kong, scammers used a deepfake of a company CFO to steal HK$200 million (roughly $25 million).

AI is advancing, and becoming stronger every day. It is a double-edged sword, both a blessing and a curse. One should tread the ethical lines carefully and hope they don’t fall to the dark side of AI.

RSA Encryption Breached by Quantum Computing Advancement

 


A large proportion of the modern digital world involves everyday transactions taking place on the internet, from simple purchases to the exchange of highly confidential corporate data. In this era of rapid technological advancement, quantum computing is perceived as both a transformative opportunity and a potential security threat.

Quantum computing has generated considerable attention in recent years, and widespread rumours have led several cybersecurity experts to worry that it could compromise military-grade encryption. As far as the 2048-bit RSA standard is concerned, however, these advances do not yet threaten the encryption standards that have been in use for decades.

So far, these developments have not threatened robust encryption protocols such as AES and TLS, nor high-security infrastructure such as SSL or PKI. Quantum computing is a profound advance over classical computing: it applies the principles of quantum mechanics to perform computations that classical machines cannot match.

Despite the inherent complexity of the technology, it has the potential to revolutionize fields such as pharmaceutical research, manufacturing, financial modelling, and cybersecurity. By exploiting the unique properties of subatomic particles to perform high-speed calculations, quantum computers are expected to change how problems are solved across a wide range of industries.

Although quantum-resistant encryption has attracted much attention lately, ongoing research remains essential to ensure the long-term security of our data. A notable milestone came in 2024, when researchers reported that they had compromised RSA encryption, a widely used cryptographic system, with a quantum computer, albeit only at a very small key size.

To secure sensitive information transferred over digital networks, data encryption is an essential safeguard. It converts plaintext into an unintelligible format that can only be decrypted with a cryptographic key: a mathematical value known only to the authorized parties, ensuring that only they can access the original information.

Cryptographic key pairs contain both a public key and a private key. Plaintext encrypted with the public key becomes ciphertext that only the corresponding private key can decrypt. The security of RSA rests on the fact that it is computationally challenging to factor large composite numbers formed by multiplying two large prime numbers.

RSA encryption is therefore considered highly secure. As an example, consider the roughly 600-digit composite produced by multiplying two 300-digit primes: factoring it with classical computing would take an extraordinarily long time, potentially longer than the estimated lifespan of the universe.
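The asymmetry is easy to feel at toy scale: recovering the primes from a small modulus takes microseconds by trial division, but the search space grows exponentially with the size of the primes. The modulus below is hypothetical and chosen purely for illustration.

```python
import math

# Recover p and q from a toy RSA modulus by trial division. This is feasible
# only because the primes are tiny; every extra digit in the primes multiplies
# the search space, which is why 600-digit moduli resist classical attacks.

def factor(n: int) -> tuple[int, int]:
    """Return a nontrivial factor pair of an odd composite n."""
    for candidate in range(3, math.isqrt(n) + 1, 2):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no odd factor found")

n = 101 * 103            # a 5-digit toy modulus
p, q = factor(n)
assert p * q == n
print(p, q)              # 101 103
```

Doubling the digit count of the primes squares the number of trial divisions needed, which is what the "lifespan of the universe" estimate for 600-digit moduli reflects (even with far better algorithms than trial division).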

Despite this, RSA has proven extremely resilient at securing digital communications. Nevertheless, the advent of quantum computing presents a formidable challenge. A quantum computer could factor large numbers exponentially faster than classical machines using Shor's algorithm, which exploits quantum superposition to explore many computational paths at once.

A key component of this process is the Quantum Fourier Transform (QFT), which extracts the periodic values needed to complete the factorization. In theory, a sufficiently powerful quantum computer could break RSA encryption within hours or even minutes, rendering its security guarantees void.
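The number-theoretic core of Shor's algorithm can be sketched classically: find the period r of a^x mod N, then derive factors from gcd(a^(r/2) ± 1, N). The quantum speedup comes entirely from finding r via the QFT; the brute-force loop below only works because N is tiny.

```python
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n); brute force, tiny n only.
    On a quantum computer, this is the step the QFT accelerates."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> tuple[int, int]:
    """Classical sketch of Shor's reduction from factoring to period finding."""
    assert gcd(a, n) == 1, "base must be coprime to n"
    r = find_period(a, n)
    assert r % 2 == 0, "pick another base if the period is odd"
    y = pow(a, r // 2, n)
    return gcd(y - 1, n), gcd(y + 1, n)

# Factor 15 with base 7: the period of 7 mod 15 is 4, giving factors 3 and 5.
print(shor_factor(15, 7))   # (3, 5)
```

Everything except `find_period` is cheap classical arithmetic; that single subroutine is where a cryptographically relevant quantum computer would change the game for 2048-bit moduli.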

As quantum computing advances, cryptographic systems such as RSA come under increasing threat, and quantum-resistant encryption methods must be developed in response. The prospect of quantum computers decrypting today's mechanisms poses a substantial challenge to current cybersecurity frameworks, underscoring the importance of continued work on quantum-resistant cryptography.

Classical computing represents data with binary bits, each either a zero or a one. Quantum computers instead use quantum bits, or qubits, which can occupy multiple states simultaneously as a result of the superposition principle. This fundamental distinction allows quantum machines to perform certain highly complex computations much faster than classical ones.

To illustrate the magnitude of this progress, Google reported that its quantum processor completed a complex calculation in seconds that conventional computing technology would have needed approximately 10,000 years to finish. Among the many domains where quantum computing can be applied, its ability to rapidly process vast datasets gives it a significant advantage in areas such as artificial intelligence and machine learning.

This computational power also raises cybersecurity concerns: it may enable the decryption of secure data at an unprecedented rate, undermining existing encryption protocols. Long-established cryptographic systems could eventually be compromised by quantum computers, raising serious questions about the future security of the internet. However, the recent study by Chinese researchers comes with several important caveats.

In the experiment, the RSA key was based on a 50-bit integer, considerably smaller and less complex than the key sizes used in today's security protocols. RSA encryption relies on the mathematical difficulty of factoring large integers into their prime factors.

Security increases with larger integers, since the factoring problem becomes exponentially harder. Although the Shanghai University study showed that 50-bit integers can be factored, this achievement has no bearing on breaking the 2048-bit encryption commonly used in current RSA implementations, a point the algorithm's designers Ron Rivest, Adi Shamir, and Leonard Adleman have long stressed. The experiment is a proof of concept: a signal of a potential future threat to global cybersecurity rather than an immediate one.

The study demonstrated that quantum computers can factor relatively simple RSA keys, but they cannot yet crack the more robust protocols currently protecting sensitive digital communications. As RSA Security has highlighted, the RSA algorithm underpins encryption frameworks across the web, so nearly every internet user has a stake in these cryptographic protections remaining reliable. While the experiment does not signal an imminent crisis, it underscores the importance of continued vigilance as quantum computing technology advances.

AI and Quantum Computing Revive Search Efforts for Missing Malaysia Airlines Flight MH370

 

A decade after the mysterious disappearance of Malaysia Airlines Flight MH370, advancements in technology are breathing new life into the search for answers. Despite extensive global investigations, the aircraft’s exact whereabouts remain unknown. However, emerging tools like artificial intelligence (AI), quantum computing, and cutting-edge underwater exploration are revolutionizing the way data is analyzed and search efforts are conducted, offering renewed hope for a breakthrough. 

AI is now at the forefront of processing and interpreting vast datasets, including satellite signals, ocean currents, and previous search findings. By identifying subtle patterns that might have gone unnoticed before, AI-driven algorithms are refining estimates of the aircraft’s possible location. 

At the same time, quantum computing is dramatically accelerating complex calculations that would take traditional systems years to complete. Researchers, including those from IBM’s Quantum Research Team, are using simulations to model how ocean currents may have dispersed MH370’s debris, leading to more accurate predictions of its final location. Underwater exploration is also taking a major leap forward with AI-equipped autonomous drones. 

These deep-sea vehicles, fitted with advanced sensors, can scan the ocean floor in unprecedented detail and access depths that were once unreachable. A new fleet of these drones is set to be deployed in the southern Indian Ocean, targeting previously difficult-to-explore regions. Meanwhile, improvements in satellite imaging are allowing analysts to reassess older data with enhanced clarity. 

High-resolution sensors and advanced real-time processing are helping experts identify potential debris that may have been missed in earlier searches. Private space firms are collaborating with global investigative teams to leverage these advancements and refine MH370’s last known trajectory. 

The renewed search efforts are the result of international cooperation, bringing together experts from aviation, oceanography, and data science to create a more comprehensive investigative approach. Aviation safety specialist Grant Quixley underscored the importance of these innovations, stating, “New technologies could finally help solve the mystery of MH370’s disappearance.” 

This fusion of expertise and cutting-edge science is making the investigation more thorough and data-driven than ever before. Beyond the ongoing search, these technological breakthroughs have far-reaching implications for the aviation industry.

AI and quantum computing are expected to transform areas such as predictive aircraft maintenance, air traffic management, and emergency response planning. Insights gained from the MH370 case may contribute to enhanced safety protocols, potentially preventing similar incidents in the future.

EU Bans AI Systems Deemed ‘Unacceptable Risk’

 


The European Union's Artificial Intelligence Act (AI Act) establishes a common regulatory and legal framework for the development and application of artificial intelligence. The European Commission (EC) proposed the law in April 2021, and the European Parliament passed it in May 2024. 

EC guidelines introduced this week now specify that the use of AI practices whose risk was assessed as "unacceptable" is prohibited. The AI Act sorts AI systems into four risk categories, each subject to a different degree of oversight. Minimal-risk AI, such as spam filters and recommendation algorithms, remains largely unregulated, while limited-risk AI, such as customer service chatbots, must meet basic transparency requirements. 

Artificial intelligence considered high-risk, such as in medical diagnostics or autonomous vehicles, is subject to stricter compliance measures, including legally required risk assessments. The AI Act is intended to let Europeans enjoy the benefits of artificial intelligence while protecting them from its potential risks. Most AI systems pose minimal to no risk and can help address societal challenges, but certain applications must be regulated to prevent harmful outcomes. 

A major concern is the lack of transparency in AI decision-making, which makes it difficult to determine whether individuals have been unfairly disadvantaged, for instance when applying for jobs or public benefits. Existing laws offer some protection, but they are insufficient to address the unique challenges posed by AI, which is why the EU has enacted this new set of regulations. 

AI systems that pose unacceptable risks, meaning those that constitute a clear threat to people's safety, livelihoods, and rights, are banned in the EU. These include social scoring, scraping internet or CCTV footage to build facial recognition databases, and AI algorithms that manipulate, deceive, or exploit people's vulnerabilities in harmful ways. Applications categorised as "high risk" are not forbidden, but the EC will monitor them: these are applications developed in good faith that could nevertheless have catastrophic consequences if something went wrong.

High-risk uses include AI in critical infrastructure, such as transportation, where a failure could put lives at risk; AI in educational institutions, which can directly shape a person's access to education and career path, for example through automated exam scoring; AI-based products such as surgical robots; and AI in law enforcement, where applications like the evaluation of evidence have the potential to override people's fundamental rights. 

The AI Act is the first comprehensive piece of AI legislation to be enforced in the European Union, marking an important milestone in the region's approach to artificial intelligence regulation. Although the European Commission has not yet released comprehensive compliance guidelines, organizations are already required to follow the newly established rules on prohibited AI applications and AI literacy. 

The Act explicitly prohibits artificial intelligence systems deemed to pose an "unacceptable risk," including those that manipulate human behaviour in harmful ways, exploit vulnerabilities associated with age, disability, or socioeconomic status, or enable government social scoring. It also strongly prohibits real-time biometric identification in public places, except under specified circumstances, as well as the creation of facial recognition databases from online images or surveillance footage scraped from the internet. 

The use of artificial intelligence to recognise emotions in workplaces or educational institutions is also restricted, along with predictive policing software. Companies found using these banned AI systems within the EU face severe fines of up to 7% of their global annual turnover or 35 million euros, whichever is greater. As these regulations take effect, companies operating in the AI sector must address compliance challenges while awaiting further guidance from EU authorities. 
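The "whichever is greater" fine rule can be stated directly in code. This is an illustrative calculation only, using the two figures named in the Act with a hypothetical turnover, not legal guidance:

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for prohibited practices:
    the greater of 7% of global annual turnover or 35 million euros."""
    return max(0.07 * global_annual_turnover_eur, 35_000_000.0)

# A firm with EUR 1 billion in turnover faces up to EUR 70 million,
# while a smaller firm is still exposed to the EUR 35 million floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
print(max_ai_act_fine(100_000_000))    # 35000000.0
```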

The Act also prohibits AI systems that rank an individual's likelihood of defaulting on a loan or defrauding a social welfare program based on their background, skin colour, or social media behaviour. Law enforcement agencies must follow strict guidelines to ensure they do not use artificial intelligence (AI) to predict criminal behaviour based solely on facial features or personal characteristics, without taking objective, verifiable facts into account.

Moreover, the legislation forbids AI tools that indiscriminately scrape facial images from the internet or CCTV footage to create large-scale databases accessible to surveillance agencies, treating this as a form of mass surveillance. Organizations may not use AI-driven webcams or voice recognition to detect employees' emotions, nor deploy subliminal or deceptive AI interfaces that manipulate users into making purchases. 

It is likewise prohibited to introduce AI-based toys or systems designed to manipulate children, the elderly, or other vulnerable individuals into harmful behaviour. The Act also bars AI systems from inferring political opinions or sexual orientation through facial analysis, ensuring stricter protection of individuals' privacy rights.

Italy Takes Action Against DeepSeek AI Over User Data Risks

 



Italy’s data protection authority, Garante, has ordered Chinese AI chatbot DeepSeek to halt its operations in the country. The decision comes after the company failed to provide clear answers about how it collects and handles user data. Authorities fear that the chatbot’s data practices could pose security risks, leading to its removal from Italian app stores.  


Why Did Italy Ban DeepSeek?  

The main reason behind the ban is DeepSeek’s lack of transparency regarding its data collection policies. Italian regulators reached out to the company with concerns over whether it was handling user information in a way that aligns with European privacy laws. However, DeepSeek’s response was deemed “totally insufficient,” raising even more doubts about its operations.  

Garante stated that DeepSeek denied having a presence in Italy and claimed that European regulations did not apply to it. Despite this, authorities believe that the company’s AI assistant has been accessible to Italian users, making it subject to the region’s data protection rules. To address these concerns, Italy has launched an official investigation into DeepSeek’s activities.  


Growing Concerns Over AI and Data Security  

DeepSeek is an advanced AI chatbot developed by a Chinese startup, positioned as a competitor to OpenAI’s ChatGPT and Google’s Gemini. With over 10 million downloads worldwide, it is considered a strong contender in the AI market. However, its expansion into Western countries has sparked concerns about how user data might be used.  

Italy is not the only country scrutinizing DeepSeek’s data practices. Authorities in France, South Korea, and Ireland have also launched investigations, highlighting global concerns about AI-driven data collection. Many governments fear that personal data gathered by AI chatbots could be misused for surveillance or other security threats.  

This is not the first time Italy has taken action against an AI company. In 2023, Garante temporarily blocked OpenAI’s ChatGPT over privacy issues. OpenAI was later fined €15 million after being accused of using personal data to train its AI without proper consent.  


Impact on the AI and Tech Industry

The crackdown on DeepSeek comes at a time when AI technology is shaping global markets. Just this week, concerns over China’s growing influence in AI led to a significant drop in the U.S. stock market. The NASDAQ 100 index lost $1 trillion in value, with AI chipmaker Nvidia alone suffering a $600 billion loss.  

While DeepSeek has been removed from Italian app stores, users who downloaded it before the ban can still access the chatbot. Additionally, its web-based version remains functional, raising questions about how regulators will enforce the restriction effectively.  

As AI continues to make new advancements, countries are becoming more cautious about companies that fail to meet privacy and security standards. With multiple nations now investigating DeepSeek, its future in Western markets remains uncertain.



New Microsoft "Scareware Blocker" Prevents Users from Tech Support Scams


Scareware is a type of malware that uses fear tactics to trick users into unknowingly installing malware or disclosing private information before they realize they are being scammed. Scareware attacks are generally disguised as full-screen alerts that spoof antivirus warnings. 

Scareware aka Tech Support Scam

One infamous example is the “tech support scam,” where a fake warning tells users their device is infected with malware and urges them to call a (fake) support number or install bogus anti-malware software to restore and clean up the system. Pop-ups impersonating Microsoft IT support have circulated for years.

Realizing the threat, Microsoft is combating the issue with its new scareware blocker feature in Edge, which was first rolled out at the Ignite conference in November last year.

By contrast, Defender SmartScreen, the existing feature that protects Edge users from scams globally, only takes effect after a malicious site has been caught and added to its index of abusive web pages.

AI-powered Edge scareware blocker

The new AI-powered Edge scareware blocker by Microsoft “offers extra protection by detecting signs of scareware scams in real-time using a local machine learning model,” says Bleeping Computer.

Describing the feature, Microsoft says, “The blocker adds a new, first line of defense to help protect the users exposed to a new scam if it attempts to open a full-screen page.” It adds, “Scareware blocker uses a machine learning model that runs on the local computer.”

Once the blocker catches a scam page, it informs users and allows them to continue using the webpage if they trust the website. 

Activating Scareware Blocker

To activate the blocker, the user first needs to install the Microsoft Edge beta version. The beta installs alongside the main release of Edge, so the two versions do not conflict. Users on a managed system should make sure previews are enabled by their administrator. 

"After making sure you have the latest updates, you should see the scareware blocker preview listed under 'Privacy, Search, and Services,'" Microsoft says. On users reporting scam sites so the blocker can improve, Microsoft says the reports help it “make the feature more reliable to catch the real scams. Beyond just blocking individual scam outbreaks,” its Digital Crimes Unit “goes even further to target the cybercrime supply chain directly.”

DeepSeek’s Rise: A Game-Changer in the AI Industry


January 27 marked a pivotal day for the artificial intelligence (AI) industry, with two major developments reshaping its future. First, Nvidia, the global leader in AI chips, suffered a historic loss of $589 billion in market value in a single day—the largest one-day loss ever recorded by a company. Second, DeepSeek, a Chinese AI developer, surged to the top of Apple’s App Store, surpassing ChatGPT. What makes DeepSeek’s success remarkable is not just its rapid rise but its ability to achieve high-performance AI with significantly fewer resources, challenging the industry’s reliance on expensive infrastructure.

DeepSeek’s Innovative Approach to AI Development

Unlike many AI companies that rely on costly, high-performance chips from Nvidia, DeepSeek has developed a powerful AI model using far fewer resources. This unexpected efficiency disrupts the long-held belief that AI breakthroughs require billions of dollars in investment and vast computing power. While companies like OpenAI and Anthropic have focused on expensive computing infrastructure, DeepSeek has proven that AI models can be both cost-effective and highly capable.

DeepSeek’s AI models perform at a level comparable to some of the most advanced Western systems, yet they require significantly less computational power. This approach could democratize AI development, enabling smaller companies, universities, and independent researchers to innovate without needing massive financial backing. If widely adopted, it could reduce the dominance of a few tech giants and foster a more inclusive AI ecosystem.

Implications for the AI Industry

DeepSeek’s success could prompt a strategic shift in the AI industry. Some companies may emulate its focus on efficiency, while others may continue investing in resource-intensive models. Additionally, DeepSeek’s open-source nature adds an intriguing dimension to its impact. Unlike OpenAI, which keeps its models proprietary, DeepSeek allows its AI to be downloaded and modified by researchers and developers worldwide. This openness could accelerate AI advancements but also raises concerns about potential misuse, as open-source AI can be repurposed for unethical applications.

Another significant benefit of DeepSeek’s approach is its potential to reduce the environmental impact of AI development. Training AI models typically consumes vast amounts of energy, often through large data centers. DeepSeek’s efficiency makes AI development more sustainable by lowering energy consumption and resource usage.

However, DeepSeek’s rise also brings challenges. As a Chinese company, it faces scrutiny over data privacy, security, and censorship. Like other AI developers, DeepSeek must navigate issues related to copyright and the ethical use of data. While its approach is innovative, it still grapples with industry-wide challenges that have plagued AI development in the past.

A More Competitive AI Landscape

DeepSeek’s emergence signals the start of a new era in the AI industry. Rather than a few dominant players controlling AI development, we could see a more competitive market with diverse solutions tailored to specific needs. This shift could benefit consumers and businesses alike, as increased competition often leads to better technology at lower prices.

However, it remains unclear whether other AI companies will adopt DeepSeek’s model or continue relying on resource-intensive strategies. Regardless, DeepSeek has already challenged conventional thinking about AI development, proving that innovation isn’t always about spending more—it’s about working smarter.

DeepSeek’s rapid rise and innovative approach have disrupted the AI industry, challenging the status quo and opening new possibilities for AI development. By demonstrating that high-performance AI can be achieved with fewer resources, DeepSeek has paved the way for a more inclusive and sustainable future. As the industry evolves, its impact will likely inspire further innovation, fostering a competitive landscape that benefits everyone.

AI-Powered Personalized Learning: Revolutionizing Education

 


In an era where technology permeates every aspect of our lives, education is undergoing a transformative shift. Imagine a classroom where each student’s learning experience is tailored to their unique needs, interests, and pace. This is no longer a distant dream but a rapidly emerging reality, thanks to the revolutionary impact of artificial intelligence (AI). Personalized learning, once a buzzword, has become a game-changer, with AI at the forefront of this transformation. In this blog, we’ll explore how AI is driving the personalized learning revolution, its benefits and challenges, and what the future holds for this exciting frontier in education.

Personalized learning is an educational approach that tailors teaching and learning experiences to meet the unique needs, strengths, and interests of each student. Unlike traditional one-size-fits-all methods, personalized learning aims to provide a customized educational experience that accommodates diverse learning styles, paces, and preferences. The goal is to enhance student engagement and achievement by addressing individual characteristics such as academic abilities, prior knowledge, and personal interests.

The Role of AI in Personalized Learning

AI is playing a pivotal role in making personalized learning a reality. Here’s how:

  • Adaptive Learning Platforms: These platforms use AI to dynamically adjust educational content based on a student’s performance, learning style, and pace. By analyzing how students interact with the material, adaptive systems can modify task difficulty and provide tailored resources to meet individual needs. This ensures a personalized learning experience that evolves as students progress.
  • Analyzing Student Performance and Behavior: AI-driven analytics process vast amounts of data on student behavior, performance, and engagement to identify patterns and trends. These insights help educators pinpoint areas where students excel or struggle, enabling timely interventions and support.
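The adaptive-platform idea above can be sketched as a simple difficulty controller: raise the task level after a streak of correct answers, lower it after repeated mistakes. The streak lengths and five-level scale here are hypothetical, not taken from any real platform:

```python
class AdaptiveDifficulty:
    """Toy difficulty controller: levels 1 (easiest) to 5 (hardest)."""

    def __init__(self, level: int = 3, up_streak: int = 3, down_streak: int = 2):
        self.level = level
        self.up_streak = up_streak      # consecutive correct answers to step up
        self.down_streak = down_streak  # consecutive wrong answers to step down
        self._correct = 0
        self._wrong = 0

    def record(self, correct: bool) -> int:
        """Record one answer and return the (possibly adjusted) level."""
        if correct:
            self._correct, self._wrong = self._correct + 1, 0
            if self._correct >= self.up_streak:
                self.level = min(5, self.level + 1)
                self._correct = 0
        else:
            self._wrong, self._correct = self._wrong + 1, 0
            if self._wrong >= self.down_streak:
                self.level = max(1, self.level - 1)
                self._wrong = 0
        return self.level
```

Real platforms weigh far richer signals (response time, hint usage, topic mastery), but the feedback loop is the same shape.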

Benefits of AI-Driven Personalized Learning

The integration of AI into personalized learning offers numerous advantages:

  1. Enhanced Student Engagement: AI-powered personalized learning makes education more relevant and engaging by adapting content to individual interests and needs. This approach fosters a deeper connection to the subject matter and keeps students motivated.
  2. Improved Learning Outcomes: Studies have shown that personalized learning tools lead to higher test scores and better grades. By addressing individual academic gaps, AI ensures that students master concepts more effectively.
  3. Efficient Use of Resources: AI streamlines lesson planning and focuses on areas where students need the most support. By automating repetitive tasks and providing actionable insights, AI helps educators manage their time and resources more effectively.

Challenges and Considerations

While AI-driven personalized learning holds immense potential, it also presents several challenges:

  1. Data Privacy and Security: Protecting student data is a critical concern. Schools and technology providers must implement robust security measures and transparent data policies to safeguard sensitive information.
  2. Equity and Access: Ensuring equal access to AI-powered tools is essential to prevent widening educational disparities. Efforts must be made to provide all students with the necessary devices and internet connectivity.
  3. Teacher Training and Integration: Educators need comprehensive training to effectively use AI tools in the classroom. Ongoing support and resources are crucial to help teachers integrate these technologies into their lesson plans.

AI is revolutionizing education by enabling personalized learning experiences that cater to each student’s unique needs and pace. By enhancing engagement, improving outcomes, and optimizing resource use, AI is shaping the future of education. However, as we embrace these advancements, it is essential to address challenges such as data privacy, equitable access, and teacher training. With the right approach, AI-powered personalized learning has the potential to transform education and unlock new opportunities for students worldwide.

Rising Cyber Threats in the Financial Sector: A Call for Enhanced Resilience


The financial sector is facing a sharp increase in cyber threats, with investment firms, such as asset managers, hedge funds, and private equity firms, becoming prime targets for ransomware, AI-driven attacks, and data breaches. These firms rely heavily on uninterrupted access to trading platforms and sensitive financial data, making cyber resilience essential to prevent operational disruptions and reputational damage. A successful cyberattack can lead to severe financial losses and a decline in investor confidence, underscoring the importance of robust cybersecurity measures.

As regulatory requirements tighten, investment firms must stay ahead of evolving cyber risks. In the UK, the upcoming Cyber Resilience and Security Bill, set to be introduced in 2025, will impose stricter cybersecurity obligations on financial institutions. Additionally, while the European Union’s Digital Operational Resilience Act (DORA) is not directly applicable to UK firms, it will impact those operating within the EU market. Financial regulators, including the Bank of England, the Financial Conduct Authority (FCA), and the Prudential Regulation Authority, are emphasizing cyber resilience as a critical component of financial stability.

The Growing Complexity of Cyber Threats

The rise of artificial intelligence has further complicated the cybersecurity landscape. AI-powered tools are making cyberattacks more sophisticated and difficult to detect. For instance, voice cloning technology allows attackers to impersonate executives or colleagues, deceiving employees into granting unauthorized access or transferring large sums of money. Similarly, generative AI tools are being leveraged to craft highly convincing phishing emails that lack traditional red flags like poor grammar and spelling errors, making them far more effective.

As AI-driven cyber threats grow, investment firms must integrate AI-powered security solutions to defend against these evolving attack methods. However, many investment firms face challenges in building and maintaining effective cybersecurity frameworks on their own. This is where partnering with managed security services providers (MSSPs) can offer a strategic advantage. Companies like Linedata provide specialized cybersecurity solutions tailored for financial services firms, including AI-driven threat detection, 24/7 security monitoring, incident response planning, and employee training.

Why Investment Firms Are Prime Targets

Investment firms are increasingly attractive targets for cybercriminals due to their high-value transactions and relatively weaker security compared to major banks. Large financial institutions have heavily invested in cyber resilience, making it harder for hackers to breach their systems. As a result, attackers are shifting their focus to investment firms, which may not have the same level of cybersecurity investment. Without robust security measures, these firms face increased risks of operational paralysis and significant financial losses.

To address these challenges, investment firms must prioritize:

  1. Strengthening Cyber Defenses: Implementing advanced security measures, such as multi-factor authentication (MFA), encryption, and endpoint protection.
  2. Rapid Incident Response: Developing and regularly testing incident response plans to ensure quick recovery from cyberattacks.
  3. Business Continuity Planning: Ensuring continuity of operations during and after a cyber incident to minimize disruptions.
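As one concrete illustration of the multi-factor authentication mentioned in item 1, the time-based one-time passwords behind most MFA apps can be generated with nothing but the standard library. This is a sketch of RFC 6238 TOTP for illustration, not any firm's actual implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, digits: int = 6, step: int = 30, at: int = None) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    now = int(time.time()) if at is None else at
    return hotp(secret, now // step, digits)
```

The second factor proves possession of the shared secret: even a stolen password is useless without the current six-digit code, which rotates every 30 seconds.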

By adopting these proactive strategies, investment firms can enhance their cyber resilience and protect their financial assets, sensitive client data, and investor confidence.

As cyber risks continue to escalate, investment firms must take decisive action to reinforce their cybersecurity posture. By investing in robust cyber resilience strategies, adopting AI-driven security measures, and partnering with industry experts, firms can safeguard their operations and maintain trust in an increasingly digital financial landscape. The combination of regulatory compliance, advanced technology, and strategic partnerships will be key to navigating the complex and ever-evolving world of cyber threats.

Cryptojacking: The Silent Cybersecurity Threat Surging in 2023

Cryptojacking, the unauthorized exploitation of an organization’s computing resources to mine cryptocurrency, has emerged as a significant yet often overlooked cybersecurity threat. Unlike ransomware, which overtly disrupts operations, cryptojacking operates covertly, leading to substantial financial and operational impacts. In 2023, cryptojacking attacks surged by 659%, totaling 1.1 billion incidents, according to SonicWall’s 2024 Cyber Threat Report.

This dramatic increase underscores the growing appeal of cryptojacking among cybercriminals. The financial implications for businesses are severe. Research indicates that for every dollar’s worth of cryptocurrency mined illicitly, companies incur approximately USD 53 in cloud service costs. This disparity highlights the hidden expenses organizations face when their systems are compromised for unauthorized mining activities.

How Cryptojacking Works and Its Impact

Attackers employ various methods to infiltrate systems, including:

  • Drive-by Downloads: Compromised websites automatically download mining scripts onto visitors’ devices.
  • Phishing Emails: Trick users into installing malware that enables cryptojacking.
  • Exploiting Vulnerabilities: Targeting unpatched software to gain unauthorized access.

The rise of containerized environments has also provided new avenues for attackers. For example, cybercriminals can embed mining scripts within public repository images or target exposed Docker APIs to deploy cryptojacking malware.

Beyond financial losses, cryptojacking degrades system performance by overutilizing CPU and GPU resources. This leads to slower operations, reduced productivity, and increased energy consumption. Over time, the strain on hardware can cause overheating and potential equipment failure. Additionally, compromised systems are more vulnerable to further security breaches, as attackers can leverage their access to escalate attacks.

Combating Cryptojacking: Proactive Measures

To defend against cryptojacking, organizations must implement proactive security measures. Key strategies include:

  1. Endpoint Protection Tools: Deploy solutions that monitor for unusual resource usage, such as sudden spikes in CPU or GPU activity, which may indicate cryptojacking.
  2. Network Traffic Analysis: Analyze network traffic for connections to known cryptocurrency mining pools, which are often used by attackers to process mined coins.
  3. Cloud Monitoring Solutions: Utilize cloud-based tools to detect unauthorized mining activities in cloud environments, where cryptojacking is increasingly prevalent.
  4. Regular Testing and Validation: Simulate cryptojacking attacks to identify vulnerabilities and strengthen defenses before actual threats materialize.
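A minimal sketch of the endpoint-monitoring idea in strategy 1: flag a host whose CPU utilization stays near saturation for a sustained window, the signature of covert mining. The threshold and window length are illustrative, not values from any specific product:

```python
from collections import deque

def make_cpu_monitor(threshold: float = 0.90, window: int = 12):
    """Return a callable that ingests periodic CPU-utilization samples
    (0.0-1.0) and reports True once `window` consecutive samples all
    exceed `threshold` -- sustained saturation typical of cryptojacking."""
    samples = deque(maxlen=window)

    def ingest(cpu_util: float) -> bool:
        samples.append(cpu_util)
        return len(samples) == window and min(samples) > threshold

    return ingest
```

In practice such a heuristic would feed an alerting pipeline and be combined with the network checks above (connections to known mining pools) to cut false positives from legitimate heavy workloads.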

Organizations should also prioritize employee training to recognize phishing attempts and other common attack vectors. Regularly updating and patching software can close vulnerabilities that attackers exploit to infiltrate systems. Additionally, implementing robust access controls and monitoring for unusual user activity can help prevent unauthorized access.

The surge in cryptojacking attacks highlights the growing sophistication of cybercriminals and the need for organizations to adopt comprehensive cybersecurity measures. While cryptojacking may not be as visibly disruptive as ransomware, its financial and operational impacts can be equally devastating. By deploying advanced detection tools, analyzing network traffic, and regularly testing defenses, businesses can mitigate the risks posed by cryptojacking and protect their computing resources from unauthorized exploitation. As cyber threats continue to evolve, proactive and adaptive security strategies will be essential to safeguarding organizational assets and maintaining operational efficiency.

Generative AI in Cybersecurity: A Double-Edged Sword

Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.

The Role of AI in Cybersecurity

According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:

  • Identify risks (39%)
  • Analyze threat intelligence (39%)
  • Detect and prioritize threats (35%)

However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:

  • 32% of attackers use AI to make attacks more effective.
  • 28% use AI to increase the volume of attacks.
  • 23% use AI to develop entirely new types of threats.

This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).

Challenges and Opportunities in Cybersecurity

One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.

Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:

  • 91% of organizations are increasing security training for legal and compliance staff.
  • 90% are enhancing training for security teams.

To strengthen cyber defenses, experts emphasize the importance of foundational practices:

  1. Strong Passwords and MFA: Poor password security is linked to 80% of data breaches. Companies are encouraged to use password managers and enforce robust password policies.
  2. Regular Cybersecurity Training: Educating employees on risk management and security practices, such as using antivirus software and maintaining firewalls, can significantly reduce vulnerabilities.
  3. Third-Party Vendor Assessments: Organizations must evaluate third-party vendors for security risks, as breaches through these channels can expose even the most secure systems.
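To make the MFA point above concrete, a time-based one-time password (TOTP, RFC 6238) can be generated with nothing but the Python standard library. This is a minimal sketch, not production authentication code, and the base32 secret below is a placeholder rather than a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code that rotates every 30 seconds
```

Authenticator apps such as Google Authenticator implement exactly this scheme, which is why a stolen password alone is not enough to log in.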

Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.

Hackers Exploit WordPress Sites to Attack Mac and Windows Users
According to security experts, threat actors are exploiting outdated versions of WordPress and its plug-ins to compromise thousands of websites, tricking visitors into downloading and installing malware.

In a conversation with cybersecurity news portal TechCrunch, Simon Wijckmans, founder and CEO of the web security company c/side, said the hacking campaign is still “very much live”.

Spray and pray campaign

The hackers aim to distribute malware that steals passwords and sensitive data from Mac and Windows users. According to c/side, some of the hacked websites rank among the most popular on the internet. Summarizing the company's findings, researcher Himanshu Anand called it a "widespread and very commercialized attack" and told TechCrunch the campaign is a "spray and pray" attack aimed at website visitors at large rather than a specific group or individual.

When a hacked WordPress site loads in a user's browser, the content immediately changes to display a fake Chrome browser update page, prompting the visitor to download and install an update in order to access the website, researchers believe.

Users tricked via fake sites

When a visitor accepts the update, the compromised website serves a malicious file disguised as the update, tailored to whether the visitor is running macOS or Windows. Researchers have notified Automattic, the company behind WordPress.com, about the attack campaign and sent it a list of malicious domains.

According to TechCrunch, Automattic spokesperson Megan Fox had not commented by press time. Automattic later clarified that the security of third-party plugins is the responsibility of the plugin developers themselves.

“There are specific guidelines that plugin authors must consult and adhere to ensure the overall quality of their plugins and the safety of their users,” Ms Fox told TechCrunch. “Authors have access to a Plugin Handbook which covers numerous security topics, including best practices and managing plugin security,” she added. 

c/side has traced more than 10,000 sites that may have been targeted by this hacking campaign. The company found malicious scripts on various domains by crawling the internet, then used reverse DNS lookups to map a small set of IP addresses back to the domains they host, which exposed a wider number of domains serving malicious scripts. TechCrunch has not independently verified c/side's data, but it did find a WordPress site displaying malicious content earlier this week.
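A reverse DNS pivot of the kind c/side describes can be sketched with Python's standard library. This is a simplified illustration: a PTR lookup returns at most a handful of names per IP, so real investigations typically rely on passive-DNS datasets instead. The IP below is a placeholder, not one from the campaign.

```python
import socket

def reverse_lookup(ip: str) -> list[str]:
    """Return the PTR hostname(s) registered for an IP, or [] if none resolve."""
    try:
        hostname, aliases, _ = socket.gethostbyaddr(ip)
        return [hostname, *aliases]
    except OSError:  # covers socket.herror / socket.gaierror
        return []

# Pivot from known-bad IPs to the hostnames they serve.
suspect_ips = ["192.0.2.1"]  # placeholder (TEST-NET), not a real campaign IP
for ip in suspect_ips:
    print(ip, "->", reverse_lookup(ip))
```

Each hostname recovered this way becomes a new lead to crawl for the injected script, which is how a few IP addresses can expose a much larger set of compromised domains.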

AI-Designed Drugs by DeepMind Expected to Enter Clinical Trials This Year
Isomorphic Labs, a Google DeepMind spinoff, is set to see its AI-designed drugs enter clinical trials this year, according to Nobel Prize-winning CEO Demis Hassabis.

“We’ll hopefully have some AI-designed drugs in clinical trials by the end of the year,” Hassabis shared during a panel at the World Economic Forum in Davos. “That’s the plan.”

The company aims to drastically reduce the drug discovery timeline from years to mere weeks or months, leveraging breakthroughs in artificial intelligence. Hassabis, along with DeepMind scientist John Jumper and US biochemist David Baker, was awarded the 2024 Nobel Prize in Chemistry for their innovative work in predicting protein structures.

While AI's ability to analyze vast data sets holds promise for speeding up drug development, a December report by Bloomberg Intelligence highlighted a cautious adoption of the technology by major pharmaceutical companies. The report, led by analyst Andrew Galler, noted that initial data for clinical candidates has been mixed.

Despite this, partnerships between tech firms and pharmaceutical companies are growing. In 2023, Isomorphic Labs entered into strategic research collaborations with Eli Lilly & Co. and Novartis AG.

Founded in 2021 to commercialize DeepMind’s AI in drug discovery, Isomorphic Labs builds on the success of AlphaFold, DeepMind’s revolutionary tool for predicting protein patterns. Since its launch in 2018, AlphaFold has evolved to its third iteration, now capable of modeling a wide range of molecular structures, including DNA and RNA, and predicting their interactions.

A Looming Threat to Crypto Keys: The Risk of a Quantum Hack

 


The Quantum Computing Threat to Cryptocurrency Security

The immense computational power that quantum computing offers raises significant concerns, particularly around its potential to compromise private keys that secure digital interactions. Among the most pressing fears is its ability to break the private keys safeguarding cryptocurrency wallets.

While this threat is genuine, it is unlikely to materialize overnight. It is, however, crucial to examine the current state of quantum computing in terms of commercial capabilities and assess its potential to pose a real danger to cryptocurrency security.

Before delving into the risks, it’s essential to understand the basics of quantum computing. Unlike classical computers, which process information using bits (either 0 or 1), quantum computers rely on quantum bits, or qubits. Qubits leverage the principles of quantum mechanics to exist in multiple states simultaneously (0, 1, or both 0 and 1, thanks to the phenomenon of superposition).
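The superposition idea can be made concrete with a toy simulation: a qubit's state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. The sketch below is a classical simulation for intuition only, not a model of real quantum hardware.

```python
import math
import random

# A qubit state is two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, |b|^2 of measuring 1.
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))   # equal superposition of 0 and 1

def measure(state) -> int:
    """Collapse the state: return 0 or 1 according to the amplitudes."""
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

counts = [0, 0]
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # roughly [5000, 5000]: each measurement yields 0 or 1 at random
```

The power of a real quantum computer comes not from this randomness but from interference between amplitudes across many entangled qubits, which algorithms like Shor's exploit.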

Quantum Computing Risks: Shor’s Algorithm

One of the primary risks posed by quantum computing stems from Shor's algorithm, which would allow a sufficiently large quantum computer to factor large integers exponentially faster than the best known classical algorithms. The security of several cryptographic systems, including RSA, relies on the difficulty of factoring large composite numbers. RSA-2048, a widely used key size, is the standard benchmark for this threat; cryptocurrency wallets themselves typically sign and authorize transactions with elliptic-curve keys (ECDSA), which a variant of Shor's algorithm can break just as readily.
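To see why factoring matters, here is a toy RSA round-trip with deliberately tiny primes: an attacker who can factor the modulus n recovers the private exponent immediately. At 2048 bits the brute-force loop below is hopeless on classical hardware, and that barrier is precisely what Shor's algorithm would remove. The numbers are illustrative only.

```python
# Toy RSA with tiny primes -- real keys use 2048-bit moduli.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)       # n = 3233, phi = 3120
e = 17                                   # public exponent, coprime with phi
d = pow(e, -1, phi)                      # private exponent (modular inverse)

msg = 65
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg          # decryption round-trips

# An attacker who factors n rebuilds phi and hence d -- trivial at this size.
f = next(i for i in range(2, n) if n % i == 0)
d_attacker = pow(e, -1, (f - 1) * (n // f - 1))
assert d_attacker == d
print("recovered private exponent:", d_attacker)  # -> 2753
```

The entire secret therefore reduces to the difficulty of one factoring problem, which is why estimates of quantum progress focus so heavily on the qubit counts needed to run Shor's algorithm at this scale.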

Breaking RSA-2048 with today’s classical computers, even using massive clusters of processors, would take billions of years. To illustrate, a successful attempt to crack RSA-768 (a 768-bit number) in 2009 required years of effort and hundreds of clustered machines. The computational difficulty grows exponentially with key size, making RSA-2048 virtually unbreakable within any human timescale—at least for now.

Commercial quantum computing offerings, such as IBM Q System One, Google's Sycamore, and Rigetti's Aspen-9, are available today, often through cloud services like AWS Braket, for those with the resources to use them. However, the number of qubits these systems offer remains limited — typically only a few dozen. This is far from sufficient to break even moderately sized cryptographic keys within any realistic timeframe. Breaking RSA-2048 would require millions of years with current quantum systems.

Beyond insufficient qubit capacity, today’s quantum computers face challenges in qubit stability, error correction, and scalability. Additionally, their operation depends on extreme conditions. Qubits are highly sensitive to electromagnetic disturbances, necessitating cryogenic temperatures and advanced magnetic shielding for stability.

Future Projections and the Quantum Threat

Unlike classical computing, quantum computing lacks a clear equivalent of Moore’s Law to predict how quickly its power will grow. Google’s Hartmut Neven proposed a “Neven’s Law” suggesting double-exponential growth in quantum computing power, but this model has yet to consistently hold up in practice beyond research and development milestones.

Hypothetically, achieving double-exponential growth to reach the approximately 20 million physical qubits needed to crack RSA-2048 could take another four years. However, this projection assumes breakthroughs in addressing error correction, qubit stability, and scalability—all formidable challenges in their own right.
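That four-year projection can be reproduced as back-of-envelope arithmetic under a hypothetical double-exponential schedule q(t) = q0^(b^t). The starting point of roughly 100 qubits and the growth base are illustrative assumptions, not forecasts; with base 1.5, the schedule crosses the assumed 20-million-qubit threshold in four annual steps.

```python
TARGET = 20_000_000  # physical qubits assumed necessary to crack RSA-2048

def years_to_target(start_qubits: int, base: float) -> int:
    """Years until the double-exponential schedule q(t) = start**(base**t) hits TARGET."""
    q, years = start_qubits, 0
    while q < TARGET:
        years += 1
        q = start_qubits ** (base ** years)
    return years

# Illustrative assumptions only: ~100 qubits today, exponent growing 1.5x per year.
print(years_to_target(100, 1.5))  # -> 4
```

Small changes to the assumed base swing the answer dramatically (a base of 2 reaches the target in two steps), which is exactly why such projections should be treated as thought experiments rather than timelines.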

While quantum computing poses a theoretical threat to cryptocurrency and other cryptographic systems, significant technical hurdles must be overcome before it becomes a tangible risk. Current commercial offerings remain far from capable of cracking RSA-2048 or similar key sizes. However, as research progresses, it is crucial for industries reliant on cryptographic security to explore quantum-resistant algorithms to stay ahead of potential threats.