
OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

Protecting Your Business from Cybercriminals on Social Media

 

Social media has transformed into a breeding ground for cybercriminal activities, posing a significant threat to businesses of all sizes. According to recent reports, more than half of all companies suffer over 30% revenue loss annually due to fraudulent activities, with social media accounting for about 37% of these scams. This is alarming because even established tech giants like Yahoo, Facebook, and Google have fallen victim to these attacks. For smaller businesses, the threat is even greater as they often lack the robust security measures needed to fend off cyber threats effectively. 

Phishing scams are among the most prevalent attacks on social media. Cybercriminals often create fake profiles that mimic company employees or business partners, tricking unsuspecting users into clicking on malicious links. These links can lead to malware installations or trick individuals into revealing sensitive information like passwords or banking details. In some instances, fraudsters might also impersonate high-level executives to manipulate employees into transferring money or sharing confidential data.

Another common method is social engineering, where cybercriminals manipulate individuals into taking actions they otherwise wouldn’t. For example, they might pretend to be company executives or representatives, convincing lower-level employees to share sensitive information, such as financial records or login credentials. This tactic is especially dangerous since it often appears as legitimate internal communication, making it harder for employees to recognize the threat.

Credential stuffing is another significant concern. In this form of attack, cybercriminals use stolen credentials from data breaches to gain unauthorized access to social media accounts. This can lead to spam, data theft, or the spread of malware through the company’s official accounts, jeopardizing both the business’s reputation and its customers’ trust.

Negative campaigns pose a different yet equally damaging threat. Attackers may post false reviews, complaints, or misinformation to tarnish a company’s image, resulting in lost sales, reduced customer loyalty, and even potential legal costs if the business decides to pursue legal action. Such campaigns can have long-lasting effects, making it difficult for companies to rebuild their reputations. Targeted advertising is another avenue for cybercriminals to exploit. They create deceptive ads that mislead customers or redirect them to malicious sites, damaging the company’s credibility and resulting in financial losses.

To safeguard against these threats, businesses must take proactive steps. Using strong, unique passwords for social media accounts is essential to prevent unauthorized access. Responding quickly to any incidents can limit damage, and regular employee training on recognizing phishing attempts and social engineering tactics can reduce vulnerability. Managing access to social media accounts by limiting permissions to a select few employees can minimize risk. Additionally, regularly updating systems and applications ensures that security patches protect against known vulnerabilities.

By implementing these preventive measures, businesses can better defend themselves against the growing threats posed by cybercriminals on social media, maintaining their reputation, customer trust, and financial stability.

Russian Disinformation Network Struggles to Survive Crackdown


 

The Russian disinformation network, known as Doppelgänger, is facing difficulties as it attempts to secure its operations in response to increased efforts to shut it down. According to a recent report by the Bavarian State Office for the Protection of the Constitution (BayLfV), the network has been scrambling to protect its systems and data after its activities were exposed.

Doppelgänger’s Activities and Challenges

Doppelgänger has been active in spreading false information across Europe since at least May 2022. The network has created numerous fake social media accounts, fraudulent websites posing as reputable news sources, and its own fake news platforms. These activities have primarily targeted Germany, France, the United States, Ukraine, and Israel, aiming to mislead the public and spread disinformation.

BayLfV’s report indicates that Doppelgänger’s operators were forced to take immediate action to back up their systems and secure their operations after it was revealed that European hosting companies were unknowingly providing services to the network. The German agency monitored the network closely and discovered details about the working patterns of those involved, noting that they operated during Russian office hours and took breaks on Russian holidays.

Connections to Russia

Further investigation by BayLfV uncovered clear links between Doppelgänger and Russia. The network used Russian IP addresses and the Cyrillic alphabet in its operations, reinforcing its connection to the Kremlin. Its activity also aligned with Moscow and St. Petersburg working hours, further suggesting the operation was coordinated from within Russia.

This crackdown comes after a joint investigation by digital rights groups Qurium and EU DisinfoLab, which exposed Doppelgänger's infrastructure spread across at least ten European countries. Although German authorities were aware of the network’s activities, they had not taken proper action until recently.

Social Media Giant Meta's Response

Facebook’s parent company, Meta, has been actively working to combat Doppelgänger’s influence on its platforms. Meta reported that the network has been forced to change its tactics due to ongoing enforcement efforts. Since May, Meta has removed over 5,000 accounts and pages linked to Doppelgänger, disrupting its operations.

In an attempt to avoid detection, Doppelgänger has shifted its focus to spoofing websites of nonpolitical and entertainment news outlets, such as Cosmopolitan and The New Yorker. However, Meta noted that most of these efforts are being caught quickly, either before they go live or shortly afterward, indicating that the network is struggling to maintain its previous level of influence.

Impact on Doppelgänger’s Operations

The pressure from law enforcement and social media platforms is clearly affecting Doppelgänger’s ability to operate. Meta highlighted that the quality of the network’s disinformation campaigns has declined as it struggles to adapt to the persistent enforcement. The goal is to continue increasing the cost of these operations for Doppelgänger, making it more difficult for the network to continue spreading false information.

This ongoing crackdown on Doppelgänger demonstrates the challenges in combating disinformation and the importance of coordinated efforts to protect the integrity of information in today’s digital environment.


The UK Erupts in Riots as Big Tech Stays Silent


 

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, refusing to address their role in the situation publicly.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has drawn particular scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


Why Did Turkey Suddenly Ban Instagram? The Shocking Reason Revealed


 

On Friday, Turkey's Information and Communication Technologies Authority (ICTA) unexpectedly blocked Instagram access across the country. The ICTA, responsible for overseeing internet regulations, did not provide any specific reason for the ban. However, according to reports from Yeni Safak, a newspaper supportive of the government, the ban was likely a response to Instagram removing posts by Turkish users that expressed condolences for Hamas leader Ismail Haniyeh's death.

Many Turkish users faced difficulties accessing Instagram following the ban. Fahrettin Altun, the communications director for the Turkish presidency, publicly condemned Instagram, accusing it of censoring messages of sympathy for Haniyeh, whom he called a martyr. This incident has sparked significant controversy within Turkey.

Haniyeh’s Death and Its Aftermath

Ismail Haniyeh, the political leader of Hamas and a close associate of Turkish President Recep Tayyip Erdogan, was killed in an attack in Tehran on Wednesday, an act allegedly carried out by Israel. His death prompted widespread reactions in Turkey, with many taking to social media to express their condolences and solidarity, leading to the conflict with Instagram.

A History of Social Media Restrictions in Turkey

This is not the first instance of social media restrictions in Turkey. The country, with a population of 85 million, includes over 50 million Instagram users, making such bans highly impactful. From April 2017 to January 2020, Turkey blocked access to Wikipedia over articles that linked the Turkish government to extremism, significantly limiting the flow of information.

This recent action against Instagram is part of a broader pattern of conflicts between the Turkish government and social media companies. In April, Meta, the parent company of Facebook, suspended its Threads network in Turkey after authorities blocked its information sharing with Instagram. The episode underscores ongoing tensions between Turkey and major social media firms.

The blockage of Instagram illustrates the persistent struggle between the Turkish government and social media platforms over content regulation and freedom of expression. These restrictions pose crucial challenges to the dissemination of information and public discourse, affecting millions who rely on these platforms for news and communication. 

Turkey's decision to block Instagram is a testament to the complex dynamics between the government and digital platforms. As the situation develops, it will be essential to observe the responses from both Turkish authorities and the affected social media companies to grasp the broader implications for digital communication and freedom of speech in Turkey.


Telegram Users Cross 900 Million, Company Plans to Launch App Store


Aims to reach 1 billion users: Telegram founder

Telegram, the popular messaging app, recently crossed 900 million active users and is aiming to reach the 1 billion milestone within a year. According to Pavel Durov, the company's founder, it also plans to launch an app store and an in-app browser supporting web3 pages by July.

Telegram reached the 900 million mark in March. Addressing the milestone, Durov said the company expects to become profitable by 2025.

Telegram has been proactive in adopting web3 technology for its platform. The company has long supported blockchain and cryptocurrency initiatives, but it was unable to enter the space after its 2018 initial coin offering failed. “We began monetizing primarily to maintain our independence. Generally, we see value in [an IPO] as a means of democratizing access to Telegram's assets,” Durov said in an interview with the Financial Times earlier this year.

Telegram and TON blockchain

Telegram began auctioning usernames on the TON blockchain in late 2022. It has focused on helping developers build mini-apps and games that use cryptocurrency for transactions. In 2024, the company started sharing ad revenue with channel owners, paying them in Toncoin (a token on the TON blockchain). At the beginning of July 2024, Telegram also began allowing channel owners to convert Stars into Toncoin, which can be used to buy ads at discounted prices or traded as cryptocurrency.

Scams and Telegram

But Telegram has long suffered from scams and attacks by threat actors. According to a Kaspersky report, since November 2023 the platform has fallen victim to various peddling schemes that let scammers steal Toncoin from users. According to Durov, Telegram plans to improve its moderation processes this year, as elections take place in multiple countries (some have already been held), and to deploy AI-based mechanisms to address potential problems.

Financial Times reported “Messaging rival WhatsApp, owned by Meta, has 1.8bn monthly active users, while encrypted communications app Signal has 30mn as of February 2024, according to an analysis by Sensor Tower, though this data only covers mobile app use. Telegram’s bid for advertising dollars is at odds with its reputation as a renegade platform with a hands-off approach to moderation, which recently drew scrutiny for allowing some Hamas-related content to remain on the platform. ”

Supreme Court Directive Mandates Self-Declaration Certificates for Advertisements

 

In a landmark ruling, the Supreme Court of India recently directed every advertiser and advertising agency to submit a self-declaration certificate confirming that their advertisements do not make misleading claims and comply with all relevant regulatory guidelines before broadcasting or publishing. This directive stems from the case of Indian Medical Association vs Union of India. 

To enforce this directive, the Ministry of Information and Broadcasting has issued comprehensive guidelines outlining the procedure for obtaining these certificates, which became mandatory from June 18, 2024, onwards. This move is expected to significantly impact advertisers, especially those using deepfakes generated by Generative AI (GenAI) on social media platforms like Instagram, Facebook, and YouTube. The use of deepfakes in advertisements has been a growing concern. 

In a previous op-ed titled “Urgently needed: A law to protect consumers from deepfake ads,” the rising menace of deepfake ads making misleading or fraudulent claims was highlighted, emphasizing the adverse effects on consumer rights and public figures. A survey conducted by McAfee revealed that 75% of Indians encountered deepfake content, with 38% falling victim to deepfake scams, and 18% directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content. The new guidelines aim to address these issues by requiring advertisers to provide bona fide details and final versions of advertisements to support their declarations. This measure is expected to aid in identifying and locating advertisers, thus facilitating tracking once complaints are filed. 

Additionally, it empowers courts to impose substantial fines on offenders. Despite the potential benefits, industry bodies such as the Indian Internet and Mobile Association of India (IAMAI), Indian Newspaper Association (INS), and the Indian Society of Advertisers (ISA) have expressed concerns over the additional compliance burden, particularly for smaller advertisers. These bodies argue that while self-certification has merit, the process needs to be streamlined to avoid hampering legitimate advertising activities. The challenge of regulating AI-enabled deepfake ads is further complicated by the sheer volume of digital advertisements, making it difficult for regulators to review each one. 

Therefore, it is suggested that online platforms be obligated to filter out deepfake ads, leveraging their technology and resources for efficient detection. The Ministry of Electronics and Information Technology highlighted the negligence of social media intermediaries in fulfilling their due diligence obligations under the IT Rules in a March 2024 advisory. 

Although non-binding, the advisory stipulates that intermediaries must not allow unlawful content on their platforms. The Supreme Court is set to hear the matter again on July 9, 2024, when industry bodies are expected to present their views on the new guidelines. This intervention could address the shortcomings of current regulatory approaches and set a precedent for robust measures against deceptive advertising practices. 

As the country grapples with the growing threat of dark patterns in online ads, the apex court’s involvement is crucial in ensuring consumer protection and the integrity of advertising practices in India.

Stay Secure: How to Prevent Zero-Click Attacks on Social Platforms

While we have all learned to avoid clicking on suspicious links and to be wary of scammers, this week we were reminded that there are silent threats out there that we should be aware of: zero-click attacks.

Recent Incidents

As Forbes first reported, TikTok revealed that a few high-profile accounts, including those of CNN and Paris Hilton, were compromised simply by being sent a direct message (DM). Attackers apparently used a zero-day vulnerability in the messaging component to run malicious code when the message was opened.

The NSA has advised all smartphone users to turn their devices off and back on once a week to guard against zero-click attacks; however, the agency concedes that this tactic will only occasionally prevent such attacks from succeeding. There are still steps you can take to protect yourself, and security software such as a reputable VPN can help.

TikTok’s Vulnerability: A Case Study in Zero-Click Exploits

As the name implies, a zero-click attack or exploit requires no activity from the victim. Malicious software can be installed on the targeted device without the user clicking on any links or downloading any harmful files.

This characteristic makes these attacks extremely difficult to detect, simply because the absence of any user interaction leaves far fewer signs of hostile activity to spot.

Cybercriminals carry out zero-click exploits by abusing unpatched flaws in software code, known as zero-day vulnerabilities. According to experts at security firm Kaspersky, apps with messaging or voice-calling functions are frequent targets because "they are designed to receive and interpret data from untrusted sources", making them more vulnerable.

Once a device vulnerability has been properly exploited, hackers can use malware, such as info stealers, to scrape your private data. Worse, they can install spyware in the background, recording all of your activity.

The Silent Threat

This is exactly how the Pegasus spyware attacked so many victims—more than 1,000 people in 50 countries, according to the 2021 joint investigation—without them even knowing it.

The same year, Citizen Lab security researchers revealed that nine Bahraini activists' iPhones had been successfully infiltrated with Pegasus spyware using two zero-click iMessage exploits. In 2019, attackers used a WhatsApp zero-day vulnerability to deliver malware to devices via a missed call.

As the celebrity TikTok hack shows, social media platforms are becoming the next popular target. Meta, for example, recently patched a similar vulnerability that could have allowed attackers to take over any Facebook account.

Protective Measures

Stay Updated
  • Regularly update your operating system, apps, and firmware. Patches often address known vulnerabilities.
  • Enable automatic updates to stay protected without manual intervention.
App Store Caution
  • Download apps only from official app stores (e.g., Google Play, Apple App Store). Third-party sources may harbor malicious apps.
  • Remove unused apps to reduce your attack surface.
Multi-Factor Authentication (MFA)
  • Enable MFA for all your accounts, especially social media platforms. Even if an attacker gains access to your password, MFA adds an extra layer of security.
  • Use authenticator apps or hardware tokens instead of SMS-based codes (a short sketch of how authenticator codes work follows this list).
Beware of DMs
  • Be cautious when opening DMs, especially from unknown senders.
  • Avoid clicking on links or downloading files unless you’re certain of their legitimacy.
Media Files Scrutiny
  • Treat media files (images, videos, audio) with suspicion.
  • Avoid opening files from untrusted sources, even if they appear harmless.
No Jailbreaking or Rooting
  • Modifying your device’s software (jailbreaking/rooting) weakens security.
  • Stick to the official software to maintain robust defenses.
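
As a footnote to the MFA advice above, the short sketch below shows how the one-time codes produced by authenticator apps are generated and checked, using the standard time-based one-time password (TOTP) scheme. It is a minimal illustration that assumes the third-party Python package pyotp; it does not describe any particular platform's MFA implementation.

# Minimal TOTP sketch (assumes the third-party "pyotp" package: pip install pyotp).
# Illustrative only: this is how authenticator apps derive codes, not any platform's actual MFA flow.
import pyotp

# The service and the authenticator app share this secret once, at enrollment
# (usually delivered to the app as a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The app displays a six-digit code that changes every 30 seconds.
current_code = totp.now()
print("Code to enter:", current_code)

# The service verifies the submitted code against the same shared secret.
print("Accepted:", totp.verify(current_code))

# An attacker who only has the password cannot produce a valid code without the
# shared secret, which is why MFA blocks most credential-based account takeovers.
print("Random guess accepted:", totp.verify("123456"))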

Apple Working to Patch Alarming iPhone Issue

 

Apple says it is working rapidly to resolve an issue that caused some iPhone alarms not to go off, giving sleeping users an unexpected lie-in. 

Many people rely on their phones as alarm clocks, and some oversleepers took to social media to gripe. One TikToker expressed frustration at setting "like five alarms" that failed to go off. 

Apple has stated that it is aware of the issue at hand, but has yet to explain what it believes is causing it or how users may avoid a late start. 

It's also unknown how many people are affected or if the issue is limited to specific iPhone models. The news was first made public by the early risers on NBC's Today Show, which sparked concerns. 

In the absence of an official solution, those losing sleep over the issue can try a few simple fixes. The first is to rule out human error: double-check the phone's alarm settings and make sure the volume is turned up. 

Others pointed the finger at Apple designers, claiming that a flaw in the iPhones' "attention aware features" could be to blame.

When enabled, they allow an iPhone to detect whether a user is paying attention to their device and, if so, to automatically take action, such as lowering the volume of alerts, including alarms. 

According to Apple, they are compatible with the iPhone X and later, as well as the iPad Pro 11-inch and iPad Pro 12.9-inch. Some TikTok users speculated that if a slumbering user's face was oriented towards the screen of a bedside iPhone, depending on the phone's settings, the functionalities may be activated. 

Apple said it intends to resolve the issue quickly. But, until then, its time zone-spanning consumer base may need to dust off some old gear and replace TikTok with the more traditional - but trustworthy - tick-tock of an alarm clock.

Discord Users' Privacy at Risk as Billions of Messages Sold Online

 

In a concerning breach of privacy, an internet-scraping company, Spy.pet, has been exposed for selling private data from millions of Discord users on a clear web website. The company has been gathering data from Discord since November 2023, with reports indicating the sale of four billion public Discord messages from over 14,000 servers, housing a staggering 627,914,396 users.

How Does This Breach Work?

The term "scraped messages" refers to the method of extracting information from a platform, such as Discord, through automated tools that exploit vulnerabilities in bots or unofficial applications. This breach potentially exposes private chats, server discussions, and direct messages, highlighting a major security flaw in Discord's interaction with third-party services.

Potential Risks Involved

Security experts warn that the leaked data could contain personal information, private media files, financial details, and even sensitive company information. Usernames, real names, and connected accounts may be compromised, posing a risk of identity theft or financial fraud. Moreover, if Discord is used for business communication, the exposure of company secrets could have serious implications.

Operations of Spy.pet

Spy.pet operates as a chat-harvesting platform, collecting user data such as aliases, pronouns, connected accounts, and public messages. To access profiles and archives of conversations, users must purchase credits, priced at $0.01 each with a minimum of 500 credits. Notably, the platform only accepts cryptocurrency payments, excluding Coinbase due to a ban. Despite facing a DDoS attack in February 2024, Spy.pet claims minimal damage.

How To Protect Yourself?

Discord is actively investigating Spy.pet and is committed to safeguarding users' privacy. In the meantime, users are advised to review their Discord privacy settings, change passwords, enable two-factor authentication, and refrain from sharing sensitive information in chats. Any suspected account compromises should be reported to Discord immediately.

What Are The Implications?

Many Discord users may not realise the permanence of their messages, assuming them to be ephemeral in the fast-paced environment of public servers. However, Spy.pet's data compilation service raises concerns about the privacy and security of users' conversations. While private messages are currently presumed secure, the sale of billions of public messages underscores the importance of heightened awareness while engaging in online communication.

The discovery of Spy.pet's actions is a clear signal of how vulnerable online platforms can be and underscores the critical need for strong privacy safeguards. It's crucial for Discord users to stay alert and take active measures to safeguard their personal data in response to this breach. As inquiries progress, the wider impact of this privacy violation on internet security and data protection is a substantial concern that cannot be overlooked.


Apple Steps Up Spyware Alerts Amid Rising Mercenary Threats

 


It has been reported that on April 10 Apple sent notifications to users in India and 91 other countries, warning them that they might have been victims of a possible mercenary spyware attack. As stated in the company's notification to the affected users, these spyware attacks were intended to 'remotely compromise the iPhone associated with the users' Apple IDs', suggesting the attackers may have targeted them specifically because of who they are or what they do. 

A threat notification has been issued to users worldwide after fears were raised that sophisticated spyware attacks could be targeting high-profile Apple customers. There had been a similar warning sent out to Indian Apple users back in October last year, in which members of the Indian Parliament and journalists were alerted about potential ‘state-sponsored attacks'. 

People who were alerted last year discussed the warnings on social media, but this time that has not been the case. Apple introduced the threat notification feature in 2021, after the Pegasus surveillance revelations. The alerts are sent to users when Apple detects activity consistent with a state-sponsored attack. 

Apple's recent alert highlights the dangers of mercenary spyware, such as NSO Group's notorious Pegasus, and how complex and rare these attacks can be. According to the company's warning email, the spyware was designed to secretly infiltrate iPhones associated with particular Apple IDs. 

There has been a lot of speculation surrounding this issue, with Apple indicating that attackers may select their targets depending on their identity or profession to gain access to their systems. Mercenary spyware refers to sophisticated malware that has been developed and deployed primarily by private entities that may be guided by national authorities. 

In a message issued by the company, users were warned that advanced spyware may attempt to remotely access their iPhones, indicating that they may be at risk. The attacks, according to Apple, are both “exceptionally rare” and “vastly more sophisticated” than the usual cybercrime activities or consumer malware. 

In addition to stressing the unique characteristics of threats such as NSO Group's Pegasus spyware, the company pointed out that such attacks are individually tailored, cost millions of dollars to launch, and affect only a very small percentage of customers. These warnings are in line with global efforts to combat the misuse of commercial spyware, including a coalition of countries, among them the United States, that is working to create safeguards against such abuse. 

Furthermore, a recent report from Google's Threat Analysis Group (TAG) and Mandiant on the exploitation of zero-day vulnerabilities in 2023 found that a significant portion of those exploits could be attributed to commercial surveillance vendors. Web browser and mobile device vulnerabilities remain a major avenue for threat actors' evasion and persistence strategies, an indication of how reliant they are on zero-day exploits. 

Among the most concerning issues is that, in India, opposition politicians raised concerns in October about possible government involvement in attacks on their mobile phones, citing Apple's earlier alert about state-sponsored attacks. CERT-In, India's national cybersecurity watchdog, has also issued a high-risk warning about vulnerabilities in Apple products affecting the entire Apple ecosystem. 

These vulnerabilities could enable attackers to access sensitive information, execute unauthorized code, bypass security measures, and spoof systems to carry out identity theft and other attacks. The advisory covers a wide range of Apple devices and software, including iOS, iPadOS, macOS, tvOS, watchOS, and Safari.

Apple also recommends that users remain vigilant regarding suspicious links and attachments, as some attacks might be exploiting the power of social engineering to mislead users into clicking on malicious links. When users suspect that they are being targeted, even in the absence of a threat notification, precautions should be taken to avoid exposing themselves to security threats. 

These precautions include changing passwords and speaking with experts in the field of digital security. As a result of these evolving threats, Apple emphasizes that to mitigate the risks effectively, users must work together with security professionals. Proactive measures and an increased awareness of cyber threats must become increasingly important in helping combat malicious cyber activity in the era of growing digital privacy concerns. 


Apple has declined to share further details about the spyware, fearing that doing so might help attackers plan future stealth attacks. Additionally, the company has added new advice for users who might be impacted by mercenary spyware attacks to its support page, explaining that these threats are tailored to each individual and their particular device, which makes them difficult to detect and hard to eliminate.

Heightened Hacking Activity Prompts Social Media Security Warning

 


Social media can be a great way to keep in touch with family and friends and to stay updated on recent news and marketing opportunities, but it is important to manage privacy and security settings properly to keep personal information safe. 

When social media is used carelessly, it can put a person's personal information at risk, as online criminals are devising new and more sophisticated methods of exploiting vulnerabilities. There is much users need to know about keeping their Facebook, X and Instagram accounts secure - from how accounts are hacked to how to recover them. 

When fraudsters gain access to users' accounts, they can take advantage of their contacts, sell their information on the dark web, and steal their identities. According to Action Fraud, some victims of email and social media hacking have been extorted by criminals who stole their private photos and videos. Nearly nine in ten people who took part in a recent survey (89%) said they knew of someone whose profile had been compromised, and 28% said they knew at least five to ten people who had been hacked. 

The survey also found that 15 per cent of respondents knew someone who had been hacked on social media more than ten times, and 76% said their concerns had grown over the past year, suggesting that these fears are increasing.

What scammers do to hack accounts

Fraudsters can gain access to users' accounts, and through them their money, in a variety of ways. 

A user who discovers that one of their accounts has been hacked may wonder how the attackers got in. In some cases, hackers breach a system that holds highly confidential data about a person, and fraudsters then use that information to gain access to the victim's other accounts. 

Phishing attacks are designed to entice users into divulging their details by impersonating legitimate companies, using links that lead to malicious websites built to harvest data. Users who enter their details on such a website may also end up downloading malicious code onto their devices, which then steals their information. 

A chain hack on a social media platform involves a fraudster posting links to dubious websites in the comment section of a post. When the victim clicks the link, they are asked to enter their social media account details, giving the fraudster access to the account. Fraudsters are also known to send messages impersonating one of the victim's contacts in an attempt to get them to share their two-factor authentication code. 

Credential stuffing is when hackers use credentials they have previously obtained to try to access other accounts belonging to the same person. Shoulder surfing is when a scammer watches a user log in to an account as it is being used. Fraudsters can also trick users into downloading a malicious app that installs malware on their device, enabling the attacker to steal the username and password for an account and use it to steal the user's money. 

When an account has been hacked, users should also be wary of recovery scammers, who contact victims on social media claiming they can retrieve the account if the victim follows their instructions. This is simply another scam to avoid; these people cannot recover the account. 

To get help with a hacked account, go to the account provider's help page. Log the account out of all devices and change the password everywhere it is used. Also check any linked email accounts for new rules or settings that may have been set up without authorization. 

Such changes could redirect emails about the account to the attacker. Users should also promptly notify their contacts of a potential security breach and advise them to be cautious, as messages that appear to come from the compromised account may not be legitimate.

Where is AI Leading Content Creation?


Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.  

Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities. 

Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft. 

AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation. 

The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility. 

As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem. 

In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.

 

Meta’s Facebook, Instagram Back Online After Two-Hour Outage

 

On March 5, a technical failure resulted in widespread login issues across Meta's Facebook, Instagram, Threads, and Messenger platforms.

Meta's head of communications, Andy Stone, confirmed the issues on X, formerly known as Twitter, and stated that the company "resolved the issue as quickly as possible for everyone who was impacted, and we apologise for any inconvenience." 

Users reported being locked out of their Facebook accounts, and the platform's feeds, as well as Threads and Instagram, did not refresh. WhatsApp, which is also owned by Meta, appeared unaffected.

A senior official from the United States Cybersecurity and Infrastructure Security Agency told reporters Tuesday that the agency was "not cognizant of any specific election nexus nor any specific malicious cyber activity nexus to the outage.” 

The outage occurs just ahead of the March 7th deadline for Big Tech firms to comply with the European Union's new Digital Markets Act. To comply, Meta is making modifications, including allowing users to separate their Facebook and Instagram accounts, and preventing personal information from being pooled to target them with online adverts. It is unclear whether the downtime is related to Meta's preparations for the DMA. 

Facebook, Instagram, and WhatsApp went down for hours in 2021, which the firm blamed on inaccurate changes to routers that coordinate network traffic between its data centres. The following year, WhatsApp experienced another brief outage. 

During the 2021 incident, Facebook engineers were dispatched to one of the company's key US data centres in California to restore service, indicating that the fix could not be done remotely. Further complicating matters, the outage briefly prevented some employees from using their badges to access workplaces and conference rooms, according to The New York Times, which initially reported that engineers had been called to the data centre.

The “Mother of All Breaches”: Implications for Businesses


In the vast digital landscape, data breaches have become an unfortunate reality. However, some breaches stand out as monumental, and the recent discovery of the “mother of all breaches” (MOAB) is one such instance. Let’s delve into the details of this massive security incident and explore its implications for businesses.

The MOAB Unveiled

At the beginning of this year, cybersecurity researchers stumbled upon a staggering dataset containing 26 billion leaked entries. This treasure trove of compromised information includes data from prominent platforms like LinkedIn, Twitter.com, Tencent, Dropbox, Adobe, Canva, and Telegram. But the impact didn’t stop there; government agencies in the U.S., Brazil, Germany, the Philippines, and Turkey were also affected.

The MOAB isn’t your typical data breach—it’s a 12-terabyte behemoth that cybercriminals can wield as a powerful weapon. Here’s why it’s a game-changer:

Identity Theft Arsenal: The stolen personal data within this dataset provides threat actors with a comprehensive toolkit. From email addresses and passwords to sensitive financial information, it’s a goldmine for orchestrating identity theft and other malicious activities.

Global Reach: The MOAB’s reach extends across borders. Organizations worldwide are at risk, and the breach’s sheer scale means that no industry or sector is immune.

Implications for Businesses

As business leaders, it’s crucial to understand the implications of the MOAB and take proactive measures to safeguard your organization:

1. Continual Threat Landscape

The MOAB isn’t a one-time event—it’s an ongoing threat. Businesses must adopt a continuous monitoring approach to detect any signs of compromise. Here’s what to watch out for:

  • Uncommon Access Scenarios: Keep an eye on access logs. Sudden spikes in requests or unfamiliar IP addresses could indicate unauthorized entry. Logins during odd hours may also raise suspicion (a small log-review sketch follows this list).
  • Suspicious Account Activity: Scammers might attempt to take over compromised accounts. Look for unexpected changes in user privileges, irregular login times, and frequent location shifts.
  • Phishing Surge: Breaches like the MOAB create fertile ground for phishing attacks. Educate employees and customers about recognizing phishing scams.
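
To make the log-monitoring advice above concrete, here is a rough sketch of the kind of check a small team could run over its login logs. The log format (CSV lines of timestamp, username, IP address), the list of known IP addresses, and the thresholds are all assumptions made for this example, not features of any particular product.

# Rough sketch: flag suspicious entries in a hypothetical login log.
# Assumed CSV format per line: timestamp,username,ip  e.g. "2024-03-01T02:14:00,alice,203.0.113.7"
# The known-IP list and thresholds below are illustrative assumptions.
import csv
from collections import Counter
from datetime import datetime

KNOWN_IPS = {"198.51.100.10", "198.51.100.11"}   # addresses normally seen for this organization
BUSINESS_HOURS = range(8, 19)                    # 08:00-18:59 counts as a normal login window
SPIKE_THRESHOLD = 100                            # requests per user per day treated as a spike

def review_log(path):
    per_user_daily = Counter()
    alerts = []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if len(row) != 3:
                continue                          # skip malformed lines
            ts, user, ip = row
            when = datetime.fromisoformat(ts)
            per_user_daily[(user, when.date())] += 1
            if ip not in KNOWN_IPS:
                alerts.append(f"{user}: login from unfamiliar IP {ip} at {ts}")
            if when.hour not in BUSINESS_HOURS:
                alerts.append(f"{user}: login at an odd hour ({ts})")
    for (user, day), count in per_user_daily.items():
        if count > SPIKE_THRESHOLD:
            alerts.append(f"{user}: {count} requests on {day} (possible spike)")
    return alerts

if __name__ == "__main__":
    for alert in review_log("logins.csv"):
        print(alert)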

2. Infrastructure Vigilance

Patch and Update: Regularly update software and apply security patches. Vulnerabilities in outdated systems can be exploited.

Multi-Factor Authentication (MFA): Implement MFA wherever possible. It adds an extra layer of security by requiring additional verification beyond passwords.

Data Encryption: Encrypt sensitive data both at rest and in transit. Even if breached, encrypted data remains useless to attackers (a brief encryption sketch follows at the end of this section).

Incident Response Plan: Have a robust incident response plan in place. Know how to react swiftly if a breach occurs.
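
To make the "at rest" part of the encryption advice concrete, the sketch below uses the third-party Python cryptography package (Fernet symmetric encryption). It is a minimal example that assumes key management happens elsewhere; real deployments also need key rotation and transport encryption (TLS) for data in transit.

# Minimal encryption-at-rest sketch using the third-party "cryptography" package
# (pip install cryptography). Illustrative only; key storage and rotation are out of scope.
from cryptography.fernet import Fernet

# Generate the key once and keep it in a secrets manager, never alongside the data.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer: jane@example.com, card ending 4242"   # hypothetical sensitive record
token = fernet.encrypt(record)       # this ciphertext is what gets written to disk or the database

# Without the key, the stored token is useless to an attacker who steals the database.
print(token)

# With the key, the application can recover the plaintext when it legitimately needs it.
print(fernet.decrypt(token))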

3. Customer Trust and Reputation

Transparency: If your organization is affected, be transparent with customers. Promptly inform them about the breach, steps taken, and precautions they should follow.

Reputation Management: A breach can tarnish your brand’s reputation. Communicate openly, take responsibility, and demonstrate commitment to security.

4. Legal and Regulatory Compliance

Data Protection Laws: Understand the legal obligations related to data breaches in your jurisdiction. Compliance is critical to avoid penalties.

Notification Requirements: Depending on the severity, you may need to notify affected individuals, authorities, or regulatory bodies.

5. Employee Training

Security Awareness: Train employees to recognize phishing attempts, use strong passwords, and follow security protocols.

Incident Reporting: Encourage employees to report any suspicious activity promptly.

What next?

The MOAB serves as a wake-up call for businesses worldwide. Cybersecurity isn’t a one-and-done task—it’s an ongoing commitment. By staying vigilant, implementing best practices, and prioritizing data protection, organizations can mitigate the impact of breaches and safeguard their customers’ trust.



Analysis: AI-Driven Online Financial Scams Surge

 

Cybersecurity experts are sounding the alarm about a surge in online financial scams, driven by artificial intelligence (AI), which they warn is becoming increasingly difficult to control. This warning coincides with an investigation by AAP FactCheck into cryptocurrency scams targeting the Pacific Islands.

AAP FactCheck's analysis of over 100 Facebook accounts purporting to be crypto traders reveals deceptive tactics such as fake profile images, altered bank notifications, and false affiliations with prestigious financial institutions.

The experts point out that Pacific Island nations, with their low levels of financial and media literacy and under-resourced law enforcement, are particularly vulnerable. However, they emphasize that this issue extends globally.

In 2022, Australians lost over $3 billion to scams, with a significant portion involving fraudulent investments. Ken Gamble, co-founder of IFW Global, notes that AI is amplifying the sophistication of scams, enabling faster dissemination across social media platforms and rendering them challenging to combat effectively.

Gamble highlights that scammers are leveraging AI to adapt to local languages, enabling them to target victims worldwide. While the Pacific Islands are a prime target due to their limited law enforcement capabilities, organized criminal groups from various countries, including Israel, China, and Nigeria, are behind many of these schemes.

Victims recount their experiences, such as a woman in PNG who fell prey to a scam after her relative's Facebook account was hacked, resulting in a loss of over 15,000 kina.

Dan Halpin from Cybertrace underscores the necessity of a coordinated global response involving law enforcement, international organizations like Interpol, public awareness campaigns, regulatory enhancements, and cross-border collaboration.

Halpin stresses the importance of improving cyber literacy levels in the region to mitigate these risks. Gamble, however, warns that unless the issue is prioritized, the situation will only deteriorate as AI capabilities advance.

Facebook's Two Decades: Four Transformative Impacts on the World

 

As Facebook celebrates its 20th anniversary, it's a moment to reflect on the profound impact the platform has had on society. From revolutionizing social media to sparking privacy debates and reshaping political landscapes, Facebook, now under the umbrella of Meta, has left an indelible mark on the digital world. Here are four key ways in which Facebook has transformed our lives:

1. Revolutionizing Social Media Landscape:
Before Facebook, platforms like MySpace existed, but Mark Zuckerberg's creation quickly outshone them after its 2004 launch. Within a year it had amassed a million users, and it overtook MySpace within four years, propelled by innovations like photo tagging. Despite fluctuations, Facebook grew steadily, passing a billion monthly users in 2012 and reaching 2.11 billion daily users by 2023. Although its popularity among younger users has waned, Facebook remains the world's foremost social network, having reshaped online social interaction.

2. Monetization and Privacy Concerns:
Facebook demonstrated the value of user data, becoming an advertising powerhouse alongside Google. Its data handling has been contentious, however, drawing fines over incidents such as the Cambridge Analytica scandal. Despite generating over $40 billion in revenue in the last quarter of 2023, Meta, Facebook's parent company, continues to face legal scrutiny and penalties for mishandling personal data.

3. Politicization of the Internet:
Facebook's targeted advertising made it a pivotal tool in political campaigning worldwide, with significant spending observed, such as in the lead-up to the 2020 US presidential election. It also facilitated grassroots movements like the Arab Spring. However, its role in exacerbating human rights abuses, as seen in Myanmar, has drawn criticism.

4. Meta's Dominance:
Facebook's success enabled the company, now Meta, to acquire and scale businesses such as WhatsApp, Instagram, and Oculus, and Meta now counts over three billion daily users across its platforms. When unable to acquire rivals, Meta has been accused of replicating their features, and it faces regulatory challenges and accusations of market dominance. The company is shifting focus to AI and the Metaverse, signaling a departure from its Facebook-centric origins.

Looking ahead, sustaining Facebook's popularity will be a challenge amid rapid industry evolution and Meta's strategic shifts. As Meta ventures into the Metaverse and AI, the future of Facebook's dominance remains uncertain, despite its monumental impact over the past two decades.

Telegram Emerges as Hub for Cybercrime, Phishing Attacks as Cheap as $230

Cybersecurity experts are raising alarms as Telegram becomes a hotspot for cybercrime, fueling a rise in phishing attacks that can now be launched at scale for a shockingly low cost. Recent research highlights this "democratization" of the phishing landscape, driven by Telegram's growing role in cybercrime activity.

This messaging platform has swiftly transformed into a haven for threat actors, offering an efficient and cost-effective infrastructure for orchestrating large-scale phishing campaigns. Gone are the days when sophisticated cyber attacks required substantial resources. Now, malevolent actors can execute mass phishing endeavours for as little as $230, making cybercrime accessible to a wider pool of perpetrators. 

The affordability and accessibility of such tactics underscore the urgent need for heightened vigilance in the digital realm. Recent revelations about Telegram's role in cybercrime also point to a recurring issue: the platform's lenient content moderation. Experts note that Telegram's history of lax moderation has created a breeding ground for illicit activities, including the distribution of illegal content and cyber attacks.

Criticism has been directed at Telegram in the past for its failure to effectively address issues such as misinformation, hate speech, and extremist content, highlighting concerns about user safety. With cyber threats evolving and the digital landscape growing more complex, the necessity for stringent moderation measures within platforms like Telegram becomes increasingly urgent. 

However, balancing user privacy with security poses a significant challenge, given the platform's encryption and privacy features. As discussions continue, Telegram and similar platforms must prioritize user safety and implement effective moderation strategies to mitigate these risks.

"This messaging app has transformed into a bustling hub where seasoned cybercriminals and newcomers alike exchange illicit tools and insights creating a dark and well-oiled supply chain of tools and victims' data," Guardio Labs threat researchers Oleg Zaytsev and Nati Tal reported. 

Furthermore, they noted that the platform offers "free samples, tutorials, kits, even hackers-for-hire – everything needed to construct a complete end-to-end malicious campaign." The company also described Telegram as a "scammers paradise" and a "breeding ground for modern phishing operations."

In April 2023, Kaspersky revealed that phishers are using Telegram to teach and advertise malicious bots. One such bot, Telekopye (aka Classiscam), helps create fake web pages, emails, and texts for large-scale phishing scams. Guardio warns that Telegram offers easy access to phishing tools, some even free, facilitating the creation of scam pages. 

These kits, along with compromised WordPress sites and backdoor mailers, enable scammers to send convincing emails from legitimate domains and slip past spam filters. The researchers stress that website owners carry a dual responsibility: securing their own sites and preventing that infrastructure from being exploited for illicit activity.

Telegram channels also offer professionally crafted email templates ("letters") and bulk datasets ("leads") for targeted phishing campaigns. Leads are highly specific and are sourced from cybercrime forums or fake survey sites. Stolen credentials are then monetized through the sale of "logs" to other criminal groups, yielding high returns: social media accounts may sell for $1, while banking details can fetch hundreds of dollars. With minimal investment, anyone can launch a significant phishing operation.

Cybercriminals Exploit X Gold Badge, Selling Compromised Accounts on Dark Web

A recent report highlights cybercriminals exploiting the "Gold" verification badge on X (formerly Twitter). After Elon Musk acquired the platform in 2022, a paid verification system was introduced that allows regular users to purchase blue ticks, while organizations can obtain the coveted gold check mark through a monthly subscription.

Unfortunately, the report reveals that hackers are capitalizing on this feature by selling compromised accounts, complete with the gold verification badge, on dark web marketplaces and forums. CloudSEK, in its findings, notes a consistent pattern of advertisements promoting the sale of accounts with gold verification badges. 

These advertisements were not limited to dark web platforms but were also observed on popular communication channels such as Telegram. The exploitation of the gold verification badge poses a significant risk, as cybercriminals leverage these compromised accounts for phishing and scams, potentially deceiving unsuspecting users. 

This underscores the ongoing challenge of maintaining the security and integrity of online verification systems in an evolving threat landscape. CloudSEK found some of the advertisements simply by searching Google, Facebook, and Telegram for phrases like "Twitter Gold buy." Listings appeared on dark web marketplaces and even on Facebook, with prices pegged to how popular the compromised account was.

CloudSEK's report notes that some advertisements named the companies whose accounts were for sale, with prices ranging from $1,200 to $2,000 depending on how well known and widely followed the account was. In short, the gold badge gives cybercriminals a straightforward way to monetize compromised accounts on the dark web.

On the dark web, a CloudSEK source obtained a quote for 15 inactive X accounts at $35 per account. The seller went further, offering a recurring deal of 15 accounts every week, a total of 720 accounts annually.

Notably, activating these accounts with the coveted "gold" status is left to the purchaser, should they choose to do so. The offer underscores the thriving market for inactive accounts and the volume of compromised assets available for illicit transactions.