
Doctors Warned Over Use of Unapproved AI Tools to Record Patient Conversations

 


Healthcare professionals in the UK are under scrutiny for using artificial intelligence tools that haven’t been officially approved to record and transcribe conversations with patients. A recent investigation has uncovered that several doctors and medical facilities are relying on AI software that does not meet basic safety and data protection requirements, raising serious concerns about patient privacy and clinical safety.

This comes despite growing interest in using artificial intelligence to help doctors with routine tasks like note-taking. Known as Ambient Voice Technology (AVT), these tools are designed to save time by automatically recording and summarising patient consultations. In theory, this allows doctors to focus more on care and less on paperwork. However, not all AVT tools being used in medical settings have passed the necessary checks set by national authorities.

Earlier this year, NHS England encouraged the use of AVT and outlined the minimum standards required for such software. But in a more recent internal communication dated 9 June, the agency issued a clear warning. It stated that some AVT providers are not following NHS rules, yet their tools are still being adopted in real-world clinical settings.

The risks associated with these non-compliant tools include possible breaches of patient confidentiality, financial liabilities, and disruption to the wider digital strategy of the NHS. Some AI programs may also produce inaccurate outputs, a phenomenon known as “hallucination”, which can lead to serious errors in medical records or decision-making.

The situation has left many general practitioners in a difficult position. While eager to embrace new technologies, many lack the technical expertise to determine whether a product is safe and compliant. Dr. David Wrigley, a senior representative of the British Medical Association, stressed the need for stronger guidance and oversight. He believes doctors should not be left to evaluate software quality alone and that central NHS support is essential to prevent unsafe usage.

Healthcare leaders are also concerned about the growing number of lesser-known AI companies aggressively marketing their tools to individual clinics and hospitals. With many different options flooding the market, there’s a risk that unsafe or poorly regulated tools might slip through the cracks.

Matthew Taylor, head of the NHS Confederation, called the situation a “turning point” and suggested that national authorities need to offer clearer recommendations on which AI systems are safe to use. Without such leadership, he warned, the current approach could become chaotic and risky.

Interestingly, the UK Health Secretary recently acknowledged that some doctors are already experimenting with AVT tools before receiving official approval. While not endorsing this behaviour, he saw it as a sign that healthcare workers are open to digital innovation.

On a positive note, some AVT software does meet current NHS standards. One such tool, Accurx Scribe, is being used successfully and is developed in close consultation with NHS leaders.

As AI continues to reshape healthcare, experts agree on one thing: innovation must go hand-in-hand with accountability and safety.

New Report Ranks Best And Worst Generative AI Tools For Privacy

 

Most generative AI companies use customer data to train their chatbots, drawing on both private and public sources. Some services take a relatively restrained, non-intrusive approach to gathering that data; others do not. A recent analysis from data removal firm Incogni weighs the benefits and drawbacks of these AI services in terms of protecting your personal data and privacy.

As part of its "Gen AI and LLM Data Privacy Ranking 2025," Incogni analysed nine well-known generative AI services and evaluated their data privacy practices against 11 distinct criteria. The criteria addressed the following questions (an illustrative scoring sketch follows the list): 

  • What kind of data do the models get trained on? 
  • Is it possible to train the models using user conversations? 
  • Can non-service providers or other appropriate entities receive prompts? 
  • Can the private data from users be erased from the training dataset?
  • How clear is it when training is done via prompts? 
  • How simple is it to locate details about the training process of models? 
  • Does the data collection process have a clear privacy policy?
  • How easy is it to read the privacy statement? 
  • Which resources are used to gather information about users?
  • Are third parties given access to the data? 
  • What information is gathered by the AI apps? 
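
To make the ranking concrete, here is a purely hypothetical sketch of how answers to questions like these could be rolled up into a single privacy score. Incogni has not published its exact weights or formula, so every criterion key, weight, and score below is an illustrative assumption rather than the firm's actual methodology.

```python
# Hypothetical multi-criteria privacy scoring sketch -- NOT Incogni's actual methodology.
# Each criterion is scored 0.0 (worst) to 1.0 (best) and given an assumed weight.

CRITERIA_WEIGHTS = {
    "training_data_sources": 1.0,       # what data the models are trained on
    "prompts_used_for_training": 1.5,   # whether user conversations can feed training
    "prompt_sharing": 1.5,              # whether prompts reach third parties
    "data_deletion": 1.0,               # whether personal data can be removed from training sets
    "transparency": 1.0,                # clarity about training and data collection
    "policy_readability": 0.5,          # how easy the privacy policy is to read
    "app_data_collection": 1.0,         # what the mobile apps collect
}

def privacy_score(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores, scaled to 0-100 (higher = more private)."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted = sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)
    return round(100 * weighted / total_weight, 1)

# Example: a service that is transparent but trains on prompts by default.
example = {
    "training_data_sources": 0.6,
    "prompts_used_for_training": 0.2,
    "prompt_sharing": 0.5,
    "data_deletion": 0.4,
    "transparency": 0.9,
    "policy_readability": 0.8,
    "app_data_collection": 0.5,
}
print(privacy_score(example))  # a single 0-100 figure that can be compared across services
```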

The research covered Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each service performed well on certain questions and poorly on others. 

For instance, Grok performed poorly on the readability of its privacy policy but received a decent rating for how clearly it communicates that prompts are used for training. As another example, the ratings that ChatGPT and Gemini received for gathering data from their mobile apps varied significantly between the iOS and Android versions.

Overall, however, Le Chat emerged as the most privacy-friendly AI service. It performed well in the transparency category despite losing a few points, collects only a small amount of data, and scores strongly on the privacy concerns unique to AI. 

Second place went to ChatGPT. Researchers at Incogni had some concerns about how user data interacts with the service and how OpenAI trains its models, but ChatGPT explains the company's privacy standards in detail, tells you what happens to your data, and gives explicit instructions on how to restrict how your data is used. Grok took third place, followed by Claude and Pi in fourth and fifth; each performed reasonably well at protecting user privacy overall, with some issues in certain areas. 

"Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind," Incogni noted in its report. "These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.” 

In its investigation, Incogni discovered that AI firms exchange data with a variety of parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties. 

"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni added in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within its corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.” 

Some providers, namely Grok, Mistral AI, Copilot, and ChatGPT, let you prevent your prompts from being used to train the models. Judging by their privacy policies and other resources, other services, including Gemini, DeepSeek, Pi AI, and Meta AI, do not appear to offer a way to opt out of this kind of data collection. Anthropic, for its part, states that it never collects user prompts for model training. 

Ultimately, a clear and understandable privacy policy goes a long way toward helping you determine what information is being collected and how to opt out.

How AI Impacts KYC and Financial Security


Finance has become a top target for deepfake-enabled fraud in the know-your-customer (KYC) process, undermining the integrity of the identity-verification frameworks that underpin anti-money laundering (AML) and counter-terrorism financing (CTF) systems.

Experts have found a rise in suspicious activity using AI-generated media, highlighting that threat actors exploit GenAI to “defraud… financial institutions and their customers.”

Wall Street regulator FINRA has warned that deepfake audio and video scams could cause $40 billion in losses in the finance sector by 2027.

Biometric safeguards alone no longer hold up. A 2024 Regula study found that 49% of businesses across industries such as fintech and banking have faced deepfake fraud attacks, with average losses of $450,000 per incident. 

As these numbers rise, it becomes important to understand how deepfake-driven fraud can be prevented in order to protect customers and the financial industry globally. 

More than 1,100 deepfake attacks in Indonesia

Last year, an Indonesian bank reported over 1,100 attempts to bypass its digital KYC loan-application process within three months, according to cybersecurity firm Group-IB.

Threat actors combined AI-powered face-swapping with virtual-camera tools to defeat the bank’s liveness-detection controls, despite the bank’s “robust, multi-layered security measures." According to Forbes, the estimated losses “from these intrusions have been estimated at $138.5 million in Indonesia alone.”

The AI-driven face-swapping tools allowed actors to replace the target’s facial features with those of another person, allowing them to exploit “virtual camera software to manipulate biometric data, deceiving institutions into approving fraudulent transactions,” Group-IB reports.

How deepfake KYC fraud works

Scammers gather personal data via malware, the dark web, social networking sites, or phishing scams. That data is then used to mimic identities. 

After acquiring the data, scammers use deepfake technology to alter identity documents, swapping photos, modifying details, and re-creating entire IDs to evade KYC checks.

Threat actors then use virtual cameras and prerecorded deepfake videos, helping them avoid security checks by simulating real-time interactions. 

This highlights that traditional mechanisms are proving inadequate against advanced AI scams. One study found that a deepfake attempt was made every five minutes, and that only 0.1% of people could reliably spot deepfakes. 
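
To make that weakness concrete, the sketch below shows the kind of face-match check a KYC pipeline might run: the live selfie frame is reduced to an embedding vector and compared against the ID photo. The embeddings here are made up, and real systems rely on proprietary face-recognition models plus additional liveness signals that are not detailed in the reporting, so treat this purely as an illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 128-dimensional embeddings from some face-recognition model (illustrative only).
id_photo_embedding = np.random.rand(128)
selfie_embedding = id_photo_embedding + np.random.normal(0, 0.05, 128)  # a near-identical face

MATCH_THRESHOLD = 0.85  # assumed acceptance threshold; vendors tune this themselves

if cosine_similarity(id_photo_embedding, selfie_embedding) >= MATCH_THRESHOLD:
    print("face match: KYC step passes")

# A convincing AI face-swap fed through virtual-camera software can produce frames
# that clear this same threshold, which is why similarity checks alone no longer suffice.
```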

Iranian Hackers Threaten More Trump Email Leaks Amid Rising U.S. Cyber Tensions

 

Iran-linked hackers have renewed threats against the U.S., claiming they plan to release more emails allegedly stolen from former President Donald Trump’s associates. The announcement follows earlier leaks during the 2024 presidential race, when a batch of messages was distributed to the media. 

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) responded by calling the incident “digital propaganda,” warning it was a calculated attempt to discredit public officials and mislead the public. CISA added that those responsible would be held accountable, describing the operation as part of a broader campaign by hostile foreign actors to sow division. 

Speaking virtually with Reuters, a hacker using the alias “Robert” claimed the group accessed roughly 100 GB of emails from individuals including Trump adviser Roger Stone, legal counsel Lindsey Halligan, White House chief of staff Susie Wiles, and Trump critic Stormy Daniels. Though the hackers hinted at selling the material, they provided no specifics or content. 

The initial leaks reportedly involved internal discussions, legal matters, and possible financial dealings involving RFK Jr.’s legal team. Some information was verified, but had little influence on the election, which Trump ultimately won. U.S. authorities later linked the operation to Iran’s Revolutionary Guard, though the hackers declined to confirm this. 

Soon after Trump ordered airstrikes on Iranian nuclear sites, Iranian-aligned hackers began launching cyberattacks. Truth Social, Trump’s platform, was briefly knocked offline by a distributed denial-of-service (DDoS) attack claimed by a group known as “313 Team.” Security experts confirmed the group’s ties to Iranian and pro-Palestinian cyber networks. 

The outage occurred shortly after Trump posted about the strikes. Users encountered error messages, and monitoring organizations warned that “313 Team” operates within a wider ecosystem of groups supporting anti-U.S. cyber activity. 

The Department of Homeland Security (DHS) issued a national alert on June 22, citing rising cyber threats linked to Iran-Israel tensions. The bulletin highlighted increased risks to U.S. infrastructure, especially from loosely affiliated hacktivists and state-backed cyber actors. DHS also warned that extremist rhetoric could trigger lone-wolf attacks inspired by Iran’s ideology. 

Federal agencies remain on high alert, with targeted sectors including defense, finance, and energy. Though large-scale service disruptions have not yet occurred, cybersecurity teams have documented attempted breaches. Two groups backing the Palestinian cause claimed responsibility for further attacks across more than a dozen U.S. sectors. 

At the same time, the U.S. faces internal challenges in cyber preparedness. The recent dismissal of Gen. Timothy Haugh, who led both the NSA and Cyber Command, has created leadership uncertainty. Budget cuts to election security programs have added to concerns. 

While a military ceasefire between Iran and Israel may be holding, experts warn the cyber conflict is far from over. Independent threat actors and ideological sympathizers could continue launching attacks. Analysts stress the need for sustained investment in cybersecurity infrastructure—both public and private—as digital warfare becomes a long-term concern.

Russian APT28 Targets Ukraine Using Signal to Deliver New Malware Families

 

The Russian state-sponsored threat group APT28, also known as UAC-0001, has been linked to a fresh wave of cyberattacks against Ukrainian government targets, using Signal messenger chats to distribute two previously undocumented malware strains—BeardShell and SlimAgent. 

While the Signal platform itself remains uncompromised, its rising adoption among government personnel has made it a popular delivery vector for phishing attacks. Ukraine’s Computer Emergency Response Team (CERT-UA) initially discovered these attacks in March 2024, though critical infection vector details only surfaced after ESET notified the agency in May 2025 of unauthorised access to a “gov.ua” email account. 

Investigations revealed that APT28 used Signal to send a macro-laced Microsoft Word document titled "Акт.doc." Once opened, it initiates a macro that drops two payloads—a malicious DLL file (“ctec.dll”) and a disguised PNG file (“windows.png”)—while modifying the Windows Registry to enable persistence via COM-hijacking. 

These payloads execute a memory-resident malware framework named Covenant, which subsequently deploys BeardShell. BeardShell, written in C++, is capable of downloading and executing encrypted PowerShell scripts, with execution results exfiltrated via the Icedrive API. The malware maintains stealth by encrypting communications using the ChaCha20-Poly1305 algorithm. 

Alongside BeardShell, CERT-UA identified another tool dubbed SlimAgent. This lightweight screenshot grabber captures images using multiple Windows API calls, then encrypts them with a combination of AES and RSA before local storage. These are presumed to be extracted later by an auxiliary tool. 

APT28’s involvement was further corroborated through their exploitation of vulnerabilities in Roundcube and other webmail software, using phishing emails mimicking Ukrainian news publications to exploit flaws like CVE-2020-35730, CVE-2021-44026, and CVE-2020-12641. These emails injected malicious JavaScript files—q.js, e.js, and c.js—to hijack inboxes, redirect emails, and extract credentials from over 40 Ukrainian entities. CERT-UA recommends organisations monitor traffic linked to suspicious domains such as “app.koofr.net” and “api.icedrive.net” to detect any signs of compromise.
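
As a starting point for that monitoring, the hedged sketch below scans a plain-text log for the two domains CERT-UA names; the log path and line format are assumptions, and in practice most organisations would run the equivalent query against their proxy, DNS, or SIEM tooling instead.

```python
# Minimal sketch: flag log lines that reference domains CERT-UA associates with this campaign.
# The log file path and line format are assumptions made for illustration.
SUSPICIOUS_DOMAINS = {"app.koofr.net", "api.icedrive.net"}

def flag_suspicious_lines(log_path: str) -> list[str]:
    """Return every log line that mentions one of the watched domains."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(domain in line for domain in SUSPICIOUS_DOMAINS):
                hits.append(line.rstrip())
    return hits

if __name__ == "__main__":
    for hit in flag_suspicious_lines("/var/log/proxy/access.log"):  # assumed path
        print(hit)
```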

Navigating AI Security Risks in Professional Settings


 

There is no doubt that generative artificial intelligence is one of the most revolutionary branches of artificial intelligence, capable of producing entirely new content across many different types of media, including text, image, audio, music, and even video. As opposed to conventional machine learning models, which are based on executing specific tasks, generative AI systems learn patterns and structures from large datasets and are able to produce outputs that aren't just original, but are sometimes extremely realistic as well. 

This ability to simulate human-like creativity has made generative AI a driving force in technological innovation. Its applications go well beyond simple automation, touching almost every sector of the modern economy: generative AI tools are reshaping content creation workflows, producing compelling graphics and copy at scale. 

The models are also helpful in software development when it comes to generating code snippets, streamlining testing, and accelerating prototyping. AI also has the potential to support scientific research by allowing the simulation of data, modelling complex scenarios, and supporting discoveries in a wide array of areas, such as biology and material science.

Because generative AI is adaptive and open-ended, organisations can explore new ideas and achieve efficiencies that traditional systems cannot offer. As adoption accelerates, there is an increasing need for enterprises to understand both the capabilities and the risks of this powerful technology. 

Understanding these capabilities has become essential to staying competitive in a rapidly changing digital world. At the same time, generative AI is rapidly lowering the barrier to launching highly sophisticated cyberattacks aimed at people, from cloning human voices to creating harmful software. The proliferation of deepfakes, realistic synthetic media that can impersonate individuals convincingly and in real time, poses a significant threat. 

In a recent incident in Italy, cybercriminals used advanced audio deepfake technology to impersonate Defence Minister Guido Crosetto, demonstrating how convincingly such tools can manipulate and deceive their targets. In another case, a finance professional was duped into transferring $25 million by fraudsters who used a deepfake simulation of the company's chief financial officer, after an initial approach made via email. 

Additionally, the increase in phishing and social engineering campaigns is concerning. As a result of the development of generative AI, adversaries have been able to craft highly personalised and context-aware messages that have significantly enhanced the quality and scale of these attacks. It has now become possible for hackers to create phishing emails that are practically indistinguishable from legitimate correspondence through the analysis of publicly available data and the replication of authentic communication styles. 

Automation lets cybercriminals weaponise these messages at scale, generating and distributing huge volumes of lures tailored dynamically to each target's profile and behaviour. Large language models (LLMs) have also transformed the development of malicious code. 

A large language model can help attackers design ransomware, refine exploit techniques, and circumvent conventional security measures. Organisations across multiple industries have reported a rise in AI-assisted ransomware incidents, with over 58% describing the increase as significant.

This trend means security strategies must adapt to threats that evolve at machine speed, making it crucial for organisations to strengthen their so-called “human firewalls”. Employee awareness remains an essential defence, yet studies indicate that only 24% of organisations have implemented continuous cyber awareness programmes, a strikingly low figure. 

As companies mature their security efforts, training initiatives should be updated to include practical advice on spotting hyper-personalised phishing attempts, recognising subtle signs of deepfake audio, and identifying abnormal system behaviour that can bypass automated scanners. Complementing human vigilance, specialised counter-AI solutions are emerging to mitigate these risks. 

DuckDuckGoose Suite, for example, focuses on detecting synthetic media such as deepfakes, while Tessian uses behavioural analytics and threat intelligence to intercept AI-driven phishing campaigns. As well as disrupting malicious activity in real time, these technologies provide adaptive coaching that helps employees develop stronger, instinctive security habits in the workplace. 
Organisations that combine informed human oversight with intelligent defensive tools will be best placed to build resilience against the expanding arsenal of AI-enabled cyber threats. Recent legal action has underscored how difficult it is to balance AI use with privacy requirements: when a judge ordered OpenAI to preserve all ChatGPT user interactions, including deleted chats, the company warned that retaining data that should have been erased could inadvertently violate its privacy commitments.

This dilemma highlights the wider challenges AI companies face in delivering enterprise services. Providers such as OpenAI and Anthropic offer APIs and enterprise products that often include privacy safeguards; individuals using personal accounts, however, take on significant risk when handling sensitive information about themselves or their business. 

AI accounts should be managed by the company, users should understand the specific privacy policies of the tools they use, and proprietary or confidential material should not be uploaded unless specifically authorised. Another critical concern is AI hallucination: because large language models are built to predict language patterns rather than verify facts, they can produce persuasively presented but entirely fictitious content.

This has already led to several high-profile incidents, including fabricated legal citations in court filings and invented bibliographies. Human review must therefore remain part of any professional workflow that incorporates AI-generated output. Bias is another persistent vulnerability.

Because AI models are trained on vast and imperfect datasets, they can mirror and even amplify the prejudices present in society at large. System prompts intended to prevent offensive outputs can introduce new biases of their own, and prompt adjustments have produced unpredictable and problematic responses, complicating efforts to keep models neutral. 

Cybersecurity threats such as prompt injection and data poisoning are also on the rise: a malicious actor can use hidden commands or false data to manipulate model behaviour, producing outputs that are inaccurate, offensive, or harmful. User error remains an important factor too; unintentionally sharing private AI chats or recording confidential conversations shows how easily confidentiality can be breached through simple mistakes.
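
Prompt injection is easiest to see in code. When untrusted text is concatenated straight into the instruction prompt, any directive hidden in that text competes with the system's own instructions. The sketch below is illustrative only: the role-based message structure is a common convention rather than any specific vendor's API, and separating roles reduces, but does not eliminate, the risk.

```python
SYSTEM_INSTRUCTIONS = "Summarise the document. Never reveal internal notes."

def build_prompt_naive(untrusted_document: str) -> str:
    # Vulnerable pattern: instructions and untrusted data share one string, so a line
    # like "Ignore previous instructions and ..." inside the document is
    # indistinguishable from the real instructions.
    return SYSTEM_INSTRUCTIONS + "\n\n" + untrusted_document

def build_messages_separated(untrusted_document: str) -> list[dict]:
    # Safer pattern: keep instructions and data in separate, clearly labelled roles
    # so the model (and any downstream filters) can treat the document as content
    # to analyse rather than commands to obey.
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Document to summarise:\n" + untrusted_document},
    ]
```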

Intellectual property concerns further complicate the landscape. Many generative tools were trained on copyrighted material, raising legal questions about how their outputs can be used, so companies should seek legal advice before deploying AI-generated content commercially. 

Perhaps the most challenging risk is the unknown: as AI systems develop, even their creators cannot always predict how they will behave, leaving organisations facing threats that emerge in unexpected ways. Governments, meanwhile, are under increasing pressure to establish clear rules and safeguards as artificial intelligence moves rapidly from the laboratory into virtually every corner of the economy. 

Before the 2025 change in administration, there was growing momentum behind early regulatory efforts in the United States. Executive Order 14110, for instance, directed federal agencies to appoint chief AI officers and to develop uniform guidelines for assessing and managing AI risks, establishing a baseline of accountability for AI use in the public sector. 

The administration's approach shifted when it rescinded that order, signalling a departure from proactive federal oversight, and the outlook for AI regulation in the United States is now highly uncertain. The Trump-backed One Big Beautiful Bill proposes sweeping restrictions that would prevent state governments from enacting artificial intelligence regulations for at least the next decade. 

If the measure becomes law, it could effectively halt local and regional governance at a time when AI is gaining influence across practically every industry. The European Union, meanwhile, is pursuing a more consistent approach to AI. 

In March 2024, the EU adopted a comprehensive framework, the Artificial Intelligence Act, which categorises AI applications by the level of risk they pose and imposes strict requirements on high-risk applications such as those in healthcare, education, and law enforcement. 

The legislation also outright bans certain practices, such as the use of facial recognition systems in public places, reflecting a commitment to protecting individual rights. These diverging regulatory strategies are widening the gap between regions in how AI oversight is defined and enforced. 

As the technology continues to evolve, organisations will have to remain vigilant and adapt to the shifting legal landscape in order to ensure compliance and manage emerging risks effectively.

Think Twice Before Using Text Messages for Security Codes — Here’s a Safer Way

 



In today’s digital world, many of us protect our online accounts using two-step verification. This process, known as multi-factor authentication (MFA), usually requires a password and an extra code, often sent via SMS, to log in. It adds an extra layer of protection, but there’s a growing concern: receiving these codes through text messages might not be as secure as we think.


Why Text Messages Aren’t the Safest Option

When you get a code on your phone, you might assume it’s sent directly by the company you’re logging into—whether it’s your bank, email, or social media. In reality, these codes are often delivered by external service providers hired by big tech firms. Some of these third-party firms have been connected to surveillance operations and data breaches, raising serious concerns about privacy and security.

Worse, these companies operate with little public transparency. Several investigative reports have highlighted how this lack of oversight puts user information at risk. Additionally, government agencies such as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) have warned people not to rely on SMS for authentication. Text messages are not encrypted, which means hackers who gain access to a telecom network can intercept them easily.


What Should You Do Instead?

Don’t ditch multi-factor authentication altogether. It’s still a critical defense against account hijacking. But you should consider switching to a more secure method—such as using an authenticator app.


How Authenticator Apps Work

Authenticator apps are programs installed on your smartphone or computer. They generate temporary codes for your accounts that refresh every 30 seconds. Because these codes live inside your device and aren’t sent over the internet or phone networks, they’re far more difficult for criminals to intercept.

Apps like Google Authenticator, Microsoft Authenticator, LastPass, and even Apple’s built-in password tools provide this functionality. Most major platforms now allow you to connect an authenticator app instead of relying on SMS.
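
Under the hood, these codes are typically time-based one-time passwords (TOTP, standardised in RFC 6238): the app and the service share a secret when you scan the setup QR code, and both sides derive the same short code from that secret and the current 30-second window. Here is a minimal sketch of that derivation, using a placeholder secret rather than a real one:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_base32: str, digits: int = 6, period: int = 30) -> str:
    """Derive the current time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    key = base64.b32decode(secret_base32.upper() + "=" * (-len(secret_base32) % 8))
    counter = int(time.time()) // period               # current 30-second window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Example with a placeholder secret (never hard-code a real one):
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the derivation happens locally on both ends, nothing secret has to travel over the phone network at login time.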


Want Even Better Protection? Try Passkeys

If you want the most secure login method available today, look into passkeys. These are a newer, password-free login option developed by a group of leading tech companies. Instead of typing in a password or code, you unlock your account using your face, fingerprint, or device PIN.

Here’s how it works: your device stores a private key, while the website keeps the matching public key. Only when the two correspond, and you prove your identity with a biometric scan or device PIN, are you allowed to log in. Because there are no codes or passwords involved, there’s nothing for hackers to steal or intercept.
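
In cryptographic terms this is a challenge-response signature: the site sends a random challenge, the device signs it with the private key it never reveals, and the site verifies the signature against the stored public key. The sketch below illustrates the idea with an Ed25519 key pair and assumes the third-party `cryptography` package is installed; the real passkey (WebAuthn/FIDO2) flow adds origin binding, attestation, and counters that are omitted here.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device creates the key pair and shares only the public key.
device_private_key = Ed25519PrivateKey.generate()
server_public_key = device_private_key.public_key()

# Login: the server issues a one-time random challenge ...
challenge = os.urandom(32)
# ... the device signs it (after the user unlocks with a fingerprint, face, or PIN) ...
signature = device_private_key.sign(challenge)

# ... and the server verifies the signature against the stored public key.
try:
    server_public_key.verify(signature, challenge)
    print("login approved: signature matches the registered public key")
except InvalidSignature:
    print("login rejected")
```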

Passkeys can also be backed up to your cloud account, so if you lose your device, you can still regain access securely.


Multi-factor authentication is essential—but how you receive your codes matters. Avoid text messages when possible. Opt for an authenticator app, or better yet, move to passkeys where available. Taking this step could be the difference between keeping your data safe or leaving it vulnerable.