SpyAgent Malware Uses OCR Tech to Attack Crypto Wallets

Malware Using OCR to Steal Crypto Keys

Cybersecurity experts have identified a new malware threat that spreads by luring users into downloading malicious apps. The advanced malware campaign, which surfaced from North Korea, attacks cryptocurrency wallets by exploiting users' mnemonic keys. McAfee researcher SangRyo discovered the malware after tracking data stolen by the malicious apps back to misconfigured servers and gaining access to them.

How SpyAgent works

The malware, called SpyAgent, targets cryptocurrency enthusiasts. What makes it unique is its use of Optical Character Recognition (OCR) technology to scan images stored on infected devices and extract the mnemonic keys they contain. Hackers then use these mnemonic keys to gain unauthorized access to victims' digital assets.
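For context, the image-scanning step attackers rely on here is ordinary OCR. The minimal sketch below assumes the open-source Tesseract engine with the pytesseract and Pillow Python packages (none of which are named in the McAfee report) and simply shows how easily text can be recovered from a screenshot, which is exactly why photos of recovery phrases are such an attractive target.

```python
# Minimal OCR sketch (illustrative only): recovering text from a screenshot.
# Assumes the Tesseract engine is installed locally along with the
# pytesseract and Pillow packages; none of these tools are named in the report.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Return any text Tesseract can read from the given image."""
    return pytesseract.image_to_string(Image.open(image_path))

if __name__ == "__main__":
    # A hypothetical file name; any image containing typed text works.
    print(extract_text("wallet_screenshot.png"))
```

The takeaway for users is simply that a screenshot of a recovery phrase offers no more protection than storing it as plain text.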

Mnemonic keys are twelve-word phrases used to recover cryptocurrency wallets. They have become increasingly popular for crypto wallet security because they are easier to remember than a long string of random characters.
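To make the twelve-word format concrete, the short sketch below generates a standard BIP-39 recovery phrase. The use of the python-mnemonic library and the 128-bit entropy setting are assumptions for illustration; the article does not specify any particular tooling.

```python
# Illustrative sketch: generating a standard BIP-39 recovery phrase.
# Assumes the python-mnemonic package (pip install mnemonic).
from mnemonic import Mnemonic

mnemo = Mnemonic("english")

# 128 bits of entropy yields the familiar 12-word phrase.
phrase = mnemo.generate(strength=128)
print(phrase)

# The phrase deterministically derives the wallet seed, which is why anyone
# who reads it (or OCRs a photo of it) can recreate the wallet.
seed = mnemo.to_seed(phrase, passphrase="")
print(len(seed))  # 64-byte seed
```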

SpyAgent pretends to be a legitimate application, such as a banking, streaming, government-services, or utility app. McAfee has discovered over 280 of these fake applications.

Distribution of SpyAgent

When a victim downloads a malicious app containing SpyAgent, the malware establishes a connection to a command-and-control (C2) server that allows the threat actors to issue remote commands. The attacker then extracts contact lists, text messages, and stored images from the compromised device.

“Due to the server’s misconfiguration, not only were its internal components unintentionally exposed, but the sensitive personal data of victims, which had been compromised, also became publicly accessible. In the ‘uploads’ directory, individual folders were found, each containing photos collected from the victims, highlighting the severity of the data breach,” the report says.

Reach of SpyAgent

SpyAgent was initially observed operating in Korea, but its reach has since widened to other countries, most recently the United Kingdom. The malware's ability to disguise itself as a legitimate application is what makes it so dangerous.

It has also moved from simple HTTP requests to WebSocket connections, allowing real-time, two-way communication with the C2 server, and it evades security researchers through techniques such as function renaming and string encoding.

The McAfee report urges “users to be cautious about their actions, like installing apps and granting permissions. It is advisable to keep important information securely stored and isolated from devices. Security software has become not just a recommendation but a necessity for protecting devices.”

Supreme Court Directive Mandates Self-Declaration Certificates for Advertisements


In a landmark ruling, the Supreme Court of India recently directed every advertiser and advertising agency to submit a self-declaration certificate confirming that their advertisements do not make misleading claims and comply with all relevant regulatory guidelines before broadcasting or publishing. This directive stems from the case of Indian Medical Association vs Union of India. 

To enforce this directive, the Ministry of Information and Broadcasting has issued comprehensive guidelines outlining the procedure for obtaining these certificates, which became mandatory from June 18, 2024, onwards. This move is expected to significantly impact advertisers, especially those using deepfakes generated by Generative AI (GenAI) on social media platforms like Instagram, Facebook, and YouTube. The use of deepfakes in advertisements has been a growing concern. 

In a previous op-ed titled “Urgently needed: A law to protect consumers from deepfake ads,” the rising menace of deepfake ads making misleading or fraudulent claims was highlighted, emphasizing the adverse effects on consumer rights and public figures. A survey conducted by McAfee revealed that 75% of Indians encountered deepfake content, with 38% falling victim to deepfake scams, and 18% directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content. The new guidelines aim to address these issues by requiring advertisers to provide bona fide details and final versions of advertisements to support their declarations. This measure is expected to aid in identifying and locating advertisers, thus facilitating tracking once complaints are filed. 

Additionally, it empowers courts to impose substantial fines on offenders. Despite the potential benefits, industry bodies such as the Internet and Mobile Association of India (IAMAI), the Indian Newspaper Society (INS), and the Indian Society of Advertisers (ISA) have expressed concerns over the additional compliance burden, particularly for smaller advertisers. These bodies argue that while self-certification has merit, the process needs to be streamlined to avoid hampering legitimate advertising activities. The challenge of regulating AI-enabled deepfake ads is further complicated by the sheer volume of digital advertisements, making it difficult for regulators to review each one. 

Therefore, it is suggested that online platforms be obligated to filter out deepfake ads, leveraging their technology and resources for efficient detection. The Ministry of Electronics and Information Technology highlighted the negligence of social media intermediaries in fulfilling their due diligence obligations under the IT Rules in a March 2024 advisory. 

Although non-binding, the advisory stipulates that intermediaries must not allow unlawful content on their platforms. The Supreme Court is set to hear the matter again on July 9, 2024, when industry bodies are expected to present their views on the new guidelines. This intervention could address the shortcomings of current regulatory approaches and set a precedent for robust measures against deceptive advertising practices. 

As the country grapples with the growing threat of dark patterns in online ads, the apex court’s involvement is crucial in ensuring consumer protection and the integrity of advertising practices in India.

Information Stealer Malware Preys on Gamers via Deceptive Cheat Code Baits

A new info-stealing malware campaign poses as a game-cheating tool called Cheat Lab and promises downloaders a free copy if they convince their friends to install it too. The payload is Redline, one of the most powerful information-stealing malware families, capable of harvesting sensitive information from infected computers, including passwords, cookies, autofill data, and cryptocurrency wallet details. 

Redline's popularity among cybercriminals and its widespread distribution channels have made it ubiquitous. According to McAfee threat researchers, the new variant leverages Lua bytecode to evade detection, allowing it to inject malicious code into legitimate processes for stealth while also benefiting from just-in-time (JIT) compilation. 

The researchers link this variant to Redline because it uses a command-and-control server long associated with that malware. However, tests conducted by BleepingComputer revealed that the malware does not exhibit the behaviour typically associated with Redline, such as stealing browser information, saved passwords, and cookies. 

The malicious Redline payloads masquerade as demonstrations of cheating tools named "Cheat Lab" and "Cheater Pro" and are distributed through URLs linked to Microsoft's 'vcpkg' GitHub repository. Once the MSI installer runs, it unpacks two files, compiler.exe and lua51.dll, and drops the malicious Lua bytecode in a file called 'readme.txt'. 

The campaign uses an interesting lure to spread the malware even further: victims are told that if they convince their friends to install the cheating program, they will receive a free, fully licensed copy of it. As an added layer of stealth, the payload is distributed as uncompiled bytecode rather than as an executable, helping it avoid detection. 

An activation key is also included to help the package avoid suspicion and appear legitimate. Once installed, compiler.exe compiles and executes the Lua bytecode and sets up persistence by creating scheduled tasks that run at system startup. 

McAfee reports that the malware uses a fallback mechanism for persistence, copying the three files to a long, random path under the program directory. Once active on the infected system, it communicates with a C2 server, sending screenshots and system information, and then waits for commands from the server to execute on the host. 

Although it is not known exactly how these info-stealers first reach victims' computers, they are typically spread through malvertising, YouTube video descriptions, P2P downloads, and deceptive software-download sites. Because Redline is highly dangerous malware, users are urged not to run unsigned executables or download files from unreliable websites. 

This attack shows that even seemingly trustworthy sources, such as Microsoft's GitHub, can be abused to distribute infections. BleepingComputer contacted Microsoft about the executables distributed via its GitHub URLs, but the company had not responded by publication time.

Google Removes 22 Malicious Android Apps Exposed by McAfee

Google recently took action against 22 apps that were available on the Google Play Store, a move that has alarmed Android users. These apps, which have been downloaded over 2.5 million times in total, were discovered to engage in harmful behavior that compromises users' privacy and severely drains their phones' batteries. The disclosure, made by cybersecurity company McAfee, sheds light on the hidden threats that can lurk in seemingly innocent programs.

These apps allegedly consumed an inordinate amount of battery life and decreased device performance while secretly running in the background. Users were enticed to install the programs by the way they disguised themselves as various utilities, photo editors, and games. Their genuine intentions, however, were anything but harmless.

Several well-known programs, such as 'Photo Blur Studio,' 'Super Smart Cleaner,' and 'Magic Cut Out,' are on the list of banned applications. These apps took advantage of background processes to carry out tasks including serving unwanted adverts, tracking users without their permission, and possibly even stealing private data. The incident emphasizes the need for caution when downloading apps, even from sources that seem reliable, like the Google Play Store.

Google's swift response to remove these malicious apps demonstrates its commitment to ensuring the security and privacy of its users. However, this incident also emphasizes the ongoing challenges faced by app marketplaces in identifying and preventing such threats. While Google employs various security measures to vet apps before they are listed, some malicious software can still evade detection, slipping through the cracks.

As a precautionary measure, users are strongly advised to review the apps currently installed on their Android devices and uninstall any that match the names on the list provided by McAfee. Regularly checking app permissions and reviews can also provide insights into potential privacy concerns.
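For readers who want to act on this advice, the hedged sketch below uses Python to drive the standard Android Debug Bridge (adb) and list the permissions granted to each third-party app, so suspicious entries can be cross-checked against McAfee's published list. It assumes adb is installed and USB debugging is enabled on the phone; the script is purely illustrative and is not part of McAfee's guidance.

```python
# Illustrative helper: list third-party apps and their granted permissions
# via adb, so they can be checked against McAfee's published app names.
# Assumes adb is on PATH and USB debugging is enabled on the device.
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True).stdout

def third_party_packages() -> list[str]:
    # 'pm list packages -3' limits the listing to user-installed apps.
    lines = adb("shell", "pm", "list", "packages", "-3").splitlines()
    return [line.removeprefix("package:").strip() for line in lines if line]

def granted_permissions(package: str) -> list[str]:
    # 'dumpsys package <pkg>' includes the permissions marked granted=true.
    dump = adb("shell", "dumpsys", "package", package)
    return [l.strip() for l in dump.splitlines() if "granted=true" in l]

if __name__ == "__main__":
    for pkg in third_party_packages():
        print(f"{pkg}: {len(granted_permissions(pkg))} granted permissions")
```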

As this incident sharply reminds us, the convenience of app stores should not take precedence over cautious, informed downloading. Users must actively participate in securing their digital lives as fraudsters become more skilled, and a secure, reliable digital environment will depend on public awareness of cybersecurity issues as well as ongoing efforts from internet giants like Google.

India Top Victim of AI Voice Scams with 83% Losing Money

A new report has revealed that India tops the list of countries most affected by AI-powered voice scams. The report, released by cybersecurity firm McAfee, shows that 83% of Indians who fell victim to voice scams lost money, making them the most financially affected.

Voice scams are a growing concern in India and around the world. Criminals use artificial intelligence (AI) technology to create lifelike voice bots that mimic real human voices, making it harder for victims to detect fraud. Once they gain the victim's trust, scammers use various tactics to steal their money or personal information.

According to the McAfee report, almost half of all Indians have experienced an AI-enabled voice scam. These scams can take many forms, such as impersonating bank officials, telecom providers, or even government officials. The scammers trick victims into revealing their bank account or credit card details, or even convince them to transfer money to a fake account.

The report highlights the need for greater awareness of AI-powered voice scams and how to avoid falling victim to them. It recommends that individuals take basic precautions such as not sharing personal information over the phone, verifying the identity of the person calling before divulging any information, and being wary of unsolicited calls.

McAfee also recommends that organizations invest in anti-fraud technology to help detect and prevent these scams. The report suggests that organizations could use advanced voice analytics to identify fraudulent calls and stop them in real time.

As AI technology continues to evolve, it is likely that voice scams will become even more sophisticated and harder to detect. It is therefore essential that individuals and organizations remain vigilant and take proactive steps to protect themselves from this growing threat.

The rise of AI-powered voice scams is a cause for concern in India and globally. With India topping the list of victims, it is clear that more needs to be done to combat this threat. By raising awareness, investing in anti-fraud technology, and taking basic precautions, individuals and organizations can help protect themselves from these scams and prevent criminals from profiting at their expense.


Cybercriminals Use ChatGPT to Ease Their Operations


Cybercriminals have already leveraged the power of AI to develop code that could be used in a ransomware attack, according to Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity firm Check Point.

Threat actors can use the AI capabilities in ChatGPT to scale up their existing attack methods, many of which depend on human effort. AI chatbots also aid a particular subset of cybercriminals: romance scammers. An earlier McAfee investigation noted that these scammers often hold lengthy conversations with their targets in order to seem trustworthy and entice unwary victims, and chatbots like ChatGPT make that job easier by producing the texts for them.

ChatGPT has safeguards in place to keep hackers from using it for illegal activities, but they are far from infallible. Requests to write a message proposing a romantic rendezvous and to prepare a letter asking for financial assistance to leave Ukraine were both turned down.

Security experts are concerned about the misuse of ChatGPT, which is now powering Bing's new, troublesome chatbot. They see the potential for chatbots to help in phishing, malware, and hacking assaults.

When it comes to phishing attacks, the entry barrier is already low, but ChatGPT could make it simple for people to proficiently create dozens of targeted scam emails — as long as they craft good prompts, according to Justin Fier, director for Cyber Intelligence & Analytics at Darktrace, a cybersecurity firm.

Most tech businesses point to Section 230 of the Communications Decency Act of 1996 when addressing illegal or criminal content posted on their websites by third-party users. Under the law, owners of websites where users can submit content, such as Facebook or Twitter, are not accountable for what is said there. Meanwhile, 95% of IT respondents in a BlackBerry study said governments should be responsible for developing and enforcing legislation.

According to Shykevich, certain hackers are using the ChatGPT API models, which do not have the same content restrictions as the online user interface. At the same time, experts told Insider that ChatGPT is notorious for being confidently incorrect, which could be a problem for a cybercriminal trying to craft an email that imitates someone else and might actually make cybercrime harder. Moreover, ChatGPT still has guardrails against illegal conduct, even if the right prompt can frequently get around them.

According to Europol, Deepfakes Are Used Frequently in Organized Crime


The Europol Innovation Lab recently released its inaugural report, titled "Facing reality? Law enforcement and the challenge of deepfakes", as part of its Observatory function. The paper presents a full overview of the illegal use of deepfake technology, as well as the obstacles faced by law enforcement in identifying and preventing the malicious use of deepfakes, based on significant desk research and in-depth interaction with law enforcement specialists. 

Deepfakes are audio and audio-visual content that "convincingly show individuals expressing or doing activities they never did, or build personalities which never existed in the first place" using artificial intelligence. According to the study, deepfakes are being used for malevolent purposes in three key areas: disinformation, non-consensual obscenity, and document fraud. As the technology advances, such attacks are predicted to become more realistic and dangerous.

  1. Disinformation: Europol provided several examples of how deepfakes could be used to spread false information, with potentially disastrous results, for instance by producing a phony emergency alert that warns of an imminent attack. In the geopolitical domain, the US accused the Kremlin in February, just before the crisis between Russia and Ukraine erupted, of a disinformation scheme intended as a pretext for an invasion of Ukraine. The technique may also be used to attack corporations, for example by constructing a video or audio deepfake that makes it appear as if a company's leader engaged in contentious or unlawful conduct. Criminals imitating the voice of the top executive of an energy firm robbed the company of $243,000. 
  2. Non-consensual obscenity: According to the analysis, Sensity found that non-consensual obscenity was present in 96 percent of fake videos. This usually entails superimposing a victim's face onto the body of a performer in explicit material, creating the impression that the victim is performing the act.
  3. Document fraud: While current fraud-protection techniques are making passports harder to forge, the report stated that "synthetic media and digitally modified facial photos present a new way for document fraud." These technologies can, for example, mix or morph the face of the person who owns a passport with that of the person trying to obtain one illegally, boosting the likelihood that the photo will pass screening, including automated checks. 

Deepfakes might also harm the court system, the paper notes, by artificially manipulating or fabricating media to prove or disprove someone's guilt. In one recent child-custody dispute, a child's mother doctored an audio recording of her husband to persuade the court that he was abusive toward her. 

Europol stated that all law enforcement organizations must acquire new skills and tools to deal properly with these types of threats. These include manual detection strategies, such as looking for inconsistencies, and automatic techniques, such as AI-based deepfake detection software being developed by companies like Facebook and McAfee. 

In the months and years ahead, it is quite conceivable that malicious threat actors will employ deepfake technology to facilitate various crimes and run disinformation campaigns to influence or distort public opinion. Advances in machine learning and artificial intelligence will continue to improve the software used to create deepfakes.

Google Authenticator Codes on Android Are Targeted by Nefarious Escobar Banking Trojan

The 'Escobar' malware has resurfaced as a new threat, this time targeting Google Authenticator MFA codes. 

The spyware, which goes by the package name com.escobar.pablo, is the latest version of Aberebot and was discovered by researchers from Cyble, a security research firm, while combing through a cybercrime forum. Its feature set includes remote viewing, phishing overlays, screen capture, text-message capture, and even multi-factor authentication code capture. 

All of these capabilities serve a single scheme: stealing the user's financial data. The malware even tries to pass itself off as McAfee antivirus software, using the McAfee logo as its icon. It is not uncommon for malware to disguise itself as security software; in fact, malware was recently reported hidden inside a fully functional two-factor authentication app. 

The malware's author is leasing the beta version to a maximum of five customers for $3,000 per month, with threat actors getting three days to test the bot for free. Once development is complete, the author intends to raise the price to $5,000. 

Even if overlay injections are blocked in some way, the malware has various other capabilities that make it effective against any Android version. In the most recent release, the authors increased the number of targeted banks and financial organizations to 190 entities across 18 countries. 

The malware requests a total of 25 permissions, 15 of which it abuses. These include accessibility, audio recording, reading SMS messages, reading and writing storage, obtaining the account list, disabling the keylock, making calls, and accessing precise device location. Everything the malware captures, including SMS messages, call logs, key logs, notifications, and Google Authenticator codes, is sent to the C2 server. 

It is too soon to gauge the popularity of the new Escobar malware among cybercriminals, especially given its exorbitant price. Nonetheless, it has grown in strength to the point that it can now lure a wider audience. 

In general, avoiding APK installations from outside Google Play, using a mobile security application, and ensuring Google Play Protect is enabled on your device will reduce the chances of being infected with Android trojans.