
Avoiding Social Media Scams When Recovering a Locked Gmail Account

 

Losing access to your Gmail account can be a frightening experience, especially given that Gmail is deeply integrated into the online lives of more than 2.5 billion users globally. Unfortunately, the popularity of Gmail has also attracted scammers who exploit users seeking help after being locked out of their accounts. These attackers wait for users to post their issues publicly on social media platforms, particularly X (formerly Twitter). They pose as helpful people or even official support agents, suggesting that they can help users recover their accounts. By using fake accounts that appear credible, they deceive users into sharing personal information or even paying money under the guise of assistance. 

Engaging with these fake accounts is risky, as scammers may ask for payment without helping or, worse, obtain the victim’s login credentials, gaining full access to their accounts. In the initial panic of losing an account, people often turn to social media for immediate help. This public search for help exposes them to a swarm of scammers using automated bots to detect posts about lost accounts. These bots then direct users to supposed “support agents” who, in reality, are fraudsters attempting to capitalize on the vulnerability of those locked out of their accounts. Victims may be asked to pay for a recovery service or provide personal details, like account passwords or two-factor authentication codes. 

Often, the scammers promise assistance but deliver none, leaving users at risk of both financial loss and further account compromise. In some cases, attackers use these interactions to access the victim’s Gmail credentials and take over not just the email but other connected Google services, leading to a much larger security breach. While the need for quick support is understandable, it’s essential to avoid turning to public platforms like X or Facebook, which can make users easy targets. Instead, Google has official account recovery methods to retrieve locked accounts safely. The company provides a structured recovery process, guiding users through steps that don’t involve sharing details with strangers. This includes using backup email addresses or two-factor authentication to regain access. 

Additionally, Google has an official support community where users can discuss issues and seek guidance in a more secure environment, reducing the likelihood of encountering scammers. By following these steps, users can regain access to their accounts without exposing themselves to further risk. Even in stressful situations, staying cautious and using verified recovery options is the safest course. Publicly seeking help with sensitive matters like account access opens doors to fraudsters who thrive on desperation. Taking time to verify recovery resources and avoiding social media platforms for assistance can help users avoid falling victim to predatory scams. By following Google’s secure processes, users can ensure the safety of their accounts and keep their personal information secure.

OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

Unveiling Storm-1152: A Top Creator of Fake Microsoft Accounts

 

The Digital Crimes Unit of Microsoft disrupted a major supplier of cybercrime-as-a-service (CaaS) last week, dubbed Storm-1152. The attackers had registered over 750 million fake Microsoft accounts, which they planned to sell online to other cybercriminals, making millions of dollars in the process.

"Storm-1152 runs illicit websites and social media pages, selling fraudulent Microsoft accounts and tools to bypass identity verification software across well-known technology platforms," Amy Hogan-Burney, general manager for Microsoft's DCU, stated. "These services reduce the time and effort needed for criminals to conduct a host of criminal and abusive behaviors online."

Cybercriminals can employ fraudulent accounts linked to fictitious profiles as a virtually anonymous starting point for automated illegal operations including ransomware, phishing, spamming, and other fraud and abuse. Furthermore, Storm-1152 is the industry leader in the development of fictitious accounts, offering account services to numerous prominent cyber threat actors. 

Microsoft lists Scattered Spider (also known as Octo Tempest) as one of these cybercriminals; the group is responsible for the ransomware attacks on Caesars Entertainment and the MGM Grand this fall.

Additionally, Hogan-Burney reported that the DCU had located the group's primary ringleaders, Tai Van Nguyen, Linh Van Nguyễn (also known as Nguyễn Van Linh), and Duong Dinh Tu, all of whom were stationed in Vietnam.

"Our findings show these individuals operated and wrote the code for the illicit websites, published detailed step-by-step instructions on how to use their products via video tutorials, and provided chat services to assist those using their fraudulent services," Hogan-Burney noted.

Sophisticated crimeware-as-a-service ring 

Storm-1152's ability to circumvent security measures such as CAPTCHAs and construct millions of Microsoft accounts linked to nonexistent people highlights the group's expertise, according to researchers.

The racket was likely carried out by "leveraging automation, scripts, DevOps practices, and AI to bypass security measures like CAPTCHAs." The CaaS phenomenon is a "complex facet of the cybercrime ecosystem... making advanced cybercrime tools accessible to a wider spectrum of malicious actors," stated Craig Jones, vice president of security operations at Ontinue. 

According to Critical Start's Callie Guenther, senior manager of cyber threat research, "the use of automatic CAPTCHA-solving services indicates a fairly high level of sophistication, allowing the group to bypass one of the primary defences against automated account creation.”

Platforms can take a number of precautions to prevent unwittingly aiding cybercrime, the researchers noted. One such safeguard is the implementation of sophisticated detection algorithms that can recognise and flag suspicious conduct at scale, ideally with the help of AI. 
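The kind of at-scale detection described above often starts from simple rate heuristics before any machine learning is involved. The sketch below illustrates one such heuristic; the class name, window size, and threshold are all invented for illustration and do not reflect any platform's actual system.

```python
from collections import defaultdict, deque

# Illustrative values only: flag any source IP that creates more than
# MAX_SIGNUPS_PER_WINDOW accounts within a sliding time window.
WINDOW_SECONDS = 3600
MAX_SIGNUPS_PER_WINDOW = 5

class SignupRateMonitor:
    """Toy sliding-window counter for account-creation events per IP."""

    def __init__(self, window=WINDOW_SECONDS, limit=MAX_SIGNUPS_PER_WINDOW):
        self.window = window
        self.limit = limit
        self.events = defaultdict(deque)  # ip -> recent sign-up timestamps

    def record(self, ip, timestamp):
        """Record a sign-up; return True if the IP now looks automated."""
        q = self.events[ip]
        q.append(timestamp)
        # Evict events that have aged out of the window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        return len(q) > self.limit

monitor = SignupRateMonitor()
# Six sign-ups from one IP inside a minute trips the flag on the sixth.
flags = [monitor.record("203.0.113.7", t) for t in range(0, 60, 10)]
```

Real systems layer many more signals (device fingerprints, CAPTCHA outcomes, behavioral telemetry) on top of counters like this, but the sliding-window shape is a common starting point.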

Furthermore, putting robust multifactor authentication (MFA) in place for the creation of accounts—especially those with elevated privileges—can greatly lower the success rate of creating fake accounts. However, Ontinue's Jones emphasises that more work needs to be done on a number of fronts.

Microsoft Shuts Down a Criminal Ring Responsible for Creating Over 750 Million Fake Accounts

 

Microsoft Corp. has shut down a cybercrime group's US-based infrastructure, which created more than 750 million fake accounts across the company's services. 

Microsoft carried out the takedown with the support of Arkose Labs Inc., a venture-backed cybersecurity firm. The latter sells a cloud platform that helps businesses block fraud and hacking attempts aimed at their services.

A common tactic among hacking organisations is to create fake accounts in services like Microsoft Outlook and then use them for phishing or spam campaigns. Fraudulent accounts can also be employed to launch distributed denial-of-service (DDoS) attacks. Hackers typically do not create such accounts themselves, but rather purchase them from cybercrime-as-a-service outfits such as Storm-1152, the threat actor that Microsoft has disrupted.

Storm-1152 is believed to be the "number one seller" of fake Microsoft accounts, the company stated. It is estimated that the gang created 750 million such accounts and also created fraudulent users on other companies' services. Furthermore, Storm-1152 sold software for circumventing CAPTCHAs, which are used by many online sites to ensure that a login request comes from a human and not an automated system.

Microsoft believes that several cybercrime groups' hacking efforts were fueled by the fake accounts that Storm-1152 created. Scattered Spider, the threat actor behind the widely reported attacks against Caesars Entertainment Inc. and MGM Resorts International earlier this year, is believed to be one of those groups. According to Microsoft's investigation, Storm-1152 earned millions of dollars in illicit revenue while imposing far larger costs on the companies that tried to thwart it.

“While our case focuses on fraudulent Microsoft accounts, the websites impacted also sold services to bypass security measures on other well-known technology platforms,” Amy Hogan-Burney, Microsoft’s general manager and associate general counsel for cybersecurity policy and protection, explained. “Today’s action therefore has a broader impact, benefiting users beyond Microsoft.” 

Microsoft disrupted the four websites by obtaining a seizure order from a federal court in the Southern District of New York. As part of its efforts to thwart Storm-1152's operations, Microsoft has also discovered that the group is led by three Vietnamese citizens: Duong Dinh Tu, Linh Van Nguyễn, and Tai Van Nguyen. The company stated that it has reported its findings to law enforcement.

The Twitter Blue Scandal Caused Eli Lilly to Lose Billions of Dollars


It seems that Twitter Inc. has suspended its recently announced $8 blue-check subscription following a proliferation of fake accounts on its platform. For one pharmaceutical company, however, the decision to suspend the service came too late, given how fast the fake accounts spread.

American pharmaceutical giant Eli Lilly (LLY) lost billions of dollars after its stock plummeted on Friday due to a false tweet claiming "insulin is free now" sent on Thursday by a fake account, verified with a blue tick. 

A fake account impersonating Eli Lilly on social media promised free insulin as part of its "promotion" on Friday, according to The Star newspaper. As a result, the company's stock dropped 4.37 percent, wiping out over $15 billion in market capitalization.

In a tweet posted from its official Twitter account, Eli Lilly provided clarification regarding the matter.

A flood of fake Twitter accounts has sprung up since Elon Musk's revised subscription guidelines for Twitter Blue were announced. Eli Lilly is only one of the victims. 

Twitter's Blue Saga


AFP reported on Friday that Twitter had taken action to curb the proliferation of fake accounts seen since Elon Musk took over the company. New sign-ups for the newly introduced paid checkmark system have been suspended, and some accounts have been restored to their gray badges.

Before the new policy, the coveted blue tick was available only to politicians, famed personalities, journalists, and other public figures, as well as to government bodies and private organizations.

The official Twitter account @twittersupport tweeted on Friday about restoring the "official" label on accounts to stop the flood of fake accounts. The tweet stated: "To combat impersonation, we have added an 'official' label to some accounts."

A memo sent internally to Twitter employees, obtained by US media including The Washington Post, shows that Twitter has temporarily disabled the feature to address "impersonation issues."

Fake CISO Profiles of Corporate Giants Swamp LinkedIn

 

LinkedIn has recently been flooded with fake profiles for the post of Chief Information Security Officer (CISO) at some of the world’s largest organizations. 

One such LinkedIn profile is for the CISO of the energy giant Chevron. Searching for it turns up a profile for one Victor Sites, stating that he is from Westerville, Ohio, and a graduate of Texas A&M University. In reality, the CISO role at Chevron is currently held by Christopher Lukas, who is based in Danville, Calif.

According to KrebsOnSecurity, a Google search for the current CISO of Chevron returned the fake profile as the first result, followed by the LinkedIn profile of the real Chevron CISO, Christopher Lukas. The false LinkedIn profiles appear engineered to pollute search engine results for CISO roles at major organizations, and they are even treated as valid by numerous downstream data-scraping sources.

A similar case is the LinkedIn profile of Maryann Robles, who claims to be the CISO of another energy giant, ExxonMobil. More such fabricated CISO profiles could be detected because an already identified fake profile surfaced a number of them in its "People Also Viewed" column.


Who is Behind the Fake Profiles? 


Security experts are not yet certain who is behind the creation and operation of these fake profiles, and the intent behind them likewise remains unclear.

LinkedIn, in a statement given to KrebsOnSecurity, said its team is working to track and take down the fake accounts. "We do have strong human and automated systems in place, and we're continually improving, as fake account activity becomes more sophisticated," the statement reads. "In our transparency report we share how our teams plus automated systems are stopping the vast majority of fraudulent activity we detect in our community – around 96% of fake accounts and around 99.1% of spam and scams."

What can LinkedIn do?  


LinkedIn could take simple steps to help users decide whether to trust the profile they are looking at, such as adding a "created on" date to every profile and giving users filtered search options.

Mason, a former CISO, says LinkedIn could also experiment with offering something similar to Twitter's verified mark to users who choose to validate that they can respond to email at the domain of their stated current employer. Mason adds that LinkedIn needs a more streamlined process for allowing employers to remove phony employee accounts.

Dark Web Threats: How Can They Be Combated?





The Dark Web is often considered one of the most dangerous sources of brand reputational threats, and so-called shadowy websites are another significant one. To keep themselves safe from cybercrime, organizations need to be able to monitor this ecosystem.

In the past, reputational damage stemmed primarily from poor judgment and malfeasance, which has done great harm both economically and ethically. It is estimated that Volkswagen's quarterly operating profit dropped by almost 450 million euros six months after the diesel emissions scandal broke.

Millions of fake accounts were exposed at Wells Fargo, and the bank was fined $185 million. Digital problems can be just as damaging as traditional ones: the infamous 2013 Target data breach ultimately cost the company $162 million.

Large enterprises deploy numerous systems to guard themselves against attacks that can cause such disasters; in 2016, the estimated number of such systems was 75.

The CEO of one security platform noted that scanning the web helps businesses safeguard themselves from cyberattacks and find data that has already been exfiltrated.

A cyber-attacker planning to target your company may seek advice from third parties or try to obtain resources, such as a botnet, to deliver malicious payloads to your systems. Essentially, if you know where to look, you can find information that might alert you to an upcoming attack.

It only takes one set of credentials in the wrong hands for your company to suffer a major reputational blow. Detecting stolen credentials is not especially difficult: they are openly offered for sale, so monitoring those marketplaces and breach dumps can reveal a compromise early.
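One privacy-preserving way to check credentials against known breach corpora is a k-anonymity lookup in the style of Have I Been Pwned's "Pwned Passwords" range API, where only the first five characters of the password's SHA-1 hash ever leave your machine. A minimal sketch follows; the response body and breach count below are invented for illustration, not real API output.

```python
import hashlib

def hash_parts(password):
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and suffix.

    Only the prefix would be sent to the range endpoint; the suffix is
    compared locally against the returned candidate list.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_count(range_response, suffix):
    """Parse a 'SUFFIX:COUNT' response body; return the breach count, or 0."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# A real check would fetch https://api.pwnedpasswords.com/range/<prefix>
# and pass the response body in. Here we fake the body for illustration.
prefix, suffix = hash_parts("password")
fake_response = f"{suffix}:9659365\n{'0' * 35}:3"
```

Because the server only ever sees a 5-character prefix shared by many hashes, it cannot tell which password you actually checked, which is what makes this kind of scan safe to run against a third-party corpus.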

VIPs and corporate executives are of particular interest to hackers because of the personal information available about them. That information can be used to build convincing spearphishing attacks aimed at sensitive data or intellectual property. Some information, such as travel plans, can even put these individuals in physical danger.

On a positive note, vulnerabilities and malware are among the main topics of discussion on the dark web. With the proper threat intelligence, you can learn whether you are susceptible to potential cyber threats and, if so, what you need to do to protect yourself. Preparing in advance leaves you in a far better position to deal with surprises.

Brazilian Cybercriminals Created Fake Accounts for Uber, Lyft and DoorDash

 

According to a recent report by the Federal Bureau of Investigation (FBI), a Brazilian organization defrauded users of gig-economy platforms such as Uber, Lyft, and DoorDash. Authorities say the group used fake IDs to create driver or delivery accounts on these platforms and sold them to people who did not qualify under the companies' policies.

The scam may also have involved GPS-spoofing tools used to fake longer trips and earn more money. The Department of Justice (DOJ) states that the organization began operations in 2019 and expanded after the pandemic paralyzed many restaurants and supermarkets.

The gang, which worked mainly in Massachusetts but also in California, Florida, and Illinois, communicated through a WhatsApp group called "Mafia," where they allegedly agreed on similar pricing strategies to avoid undercutting each other's income, according to the FBI. 

The group leased driver accounts on a weekly basis, according to court records: a ride-hailing driver account cost between $250 and $300 per week, while a food-delivery account cost $150 per week. The FBI claimed to have tracked more than 2,000 accounts created by gang members during the investigation.

According to the agents in charge of the investigation, the suspects made hundreds of thousands of dollars from this scheme, depositing their earnings in bank accounts under their control and withdrawing small sums of money on a regular basis to avoid attracting the attention of the authorities. Thousands of dollars were also made by criminals due to referral incentives for new accounts. One of the gang members received USD 194,800 through DoorDash's user referral system for 487 accounts they had on the website, according to a screenshot posted on the group's WhatsApp page. 

The DOJ has charged 19 Brazilian people so far, as well as revealing that six members of the fraudulent party are still on the run. The Department of Justice reported the second round of charges against five Brazilian citizens last week. Four were apprehended and charged in a San Diego court, while a fifth is still on the run and assumed to be in Brazil.

Litigation Firm Discovers a New Phishing Scam Falsely Purporting To Be From Leading UK Supermarket


A litigation firm has discovered a new phishing scam falsely purporting to be from Tesco, a leading UK supermarket.

The scam used SMS and email messages designed to fool customers into handing over their details, stealing confidential and payment data.

The fraud started with an official-looking but fake Facebook page entitled 'Tesco UK', which shared images purporting to be from a Tesco warehouse showing stacked boxes of HD television sets.

As per Griffin Law, the litigation firm, the message stated: “We have around 500 TVs in our warehouse that are about to be binned as they have slight damage and can’t be sold. However, all of them are in fully working condition, we thought instead of binning them we’d give them away free to 500 people who have shared and commented on this post by July 18.” 

The firm stated that at least 100 customers had responded to the Facebook page or received an email.

The original fake Tesco Facebook page is currently listed as 'content unavailable.' Unsuspecting users, in their excitement, shared the post and helped it spread before receiving an email offering them the chance to 'claim their prize.'

A button in the message directed victims to a landing page asking them to enter their name, place of residence, phone number, and bank account details.

Tim Sadler, CEO of Tessian, stated: "As the lines between people in our 'known' network and our 'unknown' networks blur on social media feeds and in our inboxes, it becomes incredibly difficult to know who you can and can't trust. Hackers prey on this, impersonating a trusted brand or person to convince you to comply with their malicious request, and they will also prey on people's vulnerabilities."

Sadler empathized with people who are struggling financially in the wake of the COVID-19 pandemic, for whom the offer of a free television could be appealing.

However, he advises users to always scrutinize the authenticity of such messages and to confirm the offer with the genuine sender before tapping on any link.

Facebook Publishes Its Latest "Enforcement Report"; Removes More Than Three Billion Fake Accounts and Seven Million "Hate Speech" Posts




Facebook has removed more than three billion fake accounts along with approximately seven million "hate speech" posts, according to its most recent "enforcement report", which details how many posts and accounts it took action on between October 2018 and March 2019.

The social networking company's chief executive, Mark Zuckerberg, hit back against calls to break up Facebook: "I don't think that the remedy of breaking up the company is going to address [the problem]," he said.

He argued that Facebook's size is precisely what makes it possible to defend against the network's problems.

Action was taken on more than one million posts selling weapons in the six-month period covered by the report. The social network will now also report how many posts were removed for selling "regulated goods", such as drugs and guns.

For certain types of content, such as child sexual abuse imagery, violence, and terrorist propaganda, the report estimates how frequently such content was actually seen by people on Facebook.

For the first time, the report reveals that between January and March 2019 more than one million appeals were made after posts were removed for "hate speech". Additionally, around 150,000 posts that were found not to have broken the hate-speech policy were restored during that period.

Facebook said the report highlighted "areas where we could be more open in order to build more accountability and responsiveness to the people who use our platform".

Zuckerberg sought to allay any doubts reporters might have, saying: "The success of the company has allowed us to fund these efforts at a massive level. I think the amount of our budget that goes toward our safety systems..."