
Huge Spike in Social Media and Email Hacks – Simple Ways to Protect Yourself

 


There has been a worrying rise in the number of people losing control of their social media and email accounts this year. According to recent data from Action Fraud, the UK’s national cybercrime reporting center, over 35,000 cases were reported in 2024. This is a huge increase compared to the 22,000 cases recorded the previous year.

To address this growing problem, Action Fraud has teamed up with Meta to launch an online safety campaign. Their main goal is to help people secure their accounts by turning on two-step verification, also known as two-factor authentication (2FA). This extra security step makes it much harder for hackers to break into accounts.

Hackers usually target social media or email profiles for money. Once they gain access, they often pretend to be the real user and reach out to the person’s friends or followers. Many times, they use these stolen accounts to promote fake investment schemes or sell fake event tickets. In other cases, hackers simply sell these hacked accounts to others who use them for illegal activities.

One trick commonly used by hackers is messaging the account owner’s contacts and convincing them to share security codes. Since the message appears to come from a trusted person, many people unknowingly share sensitive information, giving hackers further control.

Another method involves stealing login information through phishing scams or data leaks. If people use the same password for many sites, hackers can easily access multiple accounts once they crack one.

The good news is that there are simple ways to protect yourself. The most important step is enabling two-step verification on all your accounts. This adds an extra barrier by asking for a unique code when someone tries to log in, making it much tougher for hackers to get through even if they know your password.
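As an illustration (not any particular platform's exact implementation), the one-time codes behind two-step verification are typically generated with the TOTP algorithm from RFC 6238, which authenticator apps implement. This minimal Python sketch uses the RFC's published test key, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (TOTP),
    the kind of code an authenticator app displays for 2FA."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % (10 ** digits)).zfill(digits)

# RFC 6238 published test key ("12345678901234567890" in base32):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # -> 287082
```

Because the code changes every 30 seconds and is derived from a secret shared only between you and the service, a stolen password alone is not enough to log in.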

Meta has also introduced face recognition technology to help users recover hacked accounts. Still, experts say prevention is always better than trying to fix the damage later.


Here are a few easy tips to protect your online accounts:

1. Always enable two-step verification wherever it is available.

2. Create strong and unique passwords for each account. Avoid using the same password more than once.

3. Be careful if someone you know suddenly asks for a security code — double-check if it’s really them.

4. Stay alert for suspicious links or emails asking for your login details — they could be phishing traps.

5. Keep an eye on your accounts for unusual activity or login attempts from unknown places.
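On tip 2, a password manager is the practical answer, but as an illustration, a strong random password can be generated with a few lines of Python using the standard `secrets` module (the character set below is an arbitrary choice, not a universal requirement):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def make_password(length=16):
    """Build a random password from a cryptographically secure source.
    `secrets.choice` is suitable for security-sensitive randomness,
    unlike the ordinary `random` module."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_password())  # a fresh 16-character password on every run
```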


With online scams increasing, staying careful and following these safety steps can help you avoid falling victim to account hacks. Taking action now can save you a lot of trouble later.

SilentCryptominer Threatens YouTubers to Post Malware in Videos


Experts have discovered an advanced malware campaign that exploits the rising popularity of Windows Packet Divert drivers, which are widely used to bypass internet restrictions.

Malware targets YouTubers 

Hackers are spreading the SilentCryptominer malware disguised as genuine software; it has impacted over 2,000 victims in Russia alone. The attack vector involves tricking YouTubers with large follower bases into spreading malicious links.

“Such software is often distributed in the form of archives with text installation instructions, in which the developers recommend disabling security solutions, citing false positives,” reports SecureList. This helps threat actors by “allowing them to persist in an unprotected system without the risk of detection.”

Innocent YouTubers turned into victims

“Most active of all have been schemes for distributing popular stealers, remote access tools (RATs), Trojans that provide hidden remote access, and miners that harness computing power to mine cryptocurrency,” the report continues. A few malware families commonly seen in this distribution scheme are Phemedrone, DCRat, NJRat, and XWorm.

In one incident, a YouTuber with 60,000 subscribers posted videos containing links to infected archives, gaining over 400,000 views. The malicious files were hosted on gitrok[.]com, with the download counter exceeding 40,000.

Blackmail and distributing malware

Threat actors have started using a new distribution tactic: they send copyright strikes to content creators and influencers and threaten to have their channels shut down unless they post videos containing malicious links. The scare tactic abuses the fame of popular YouTubers to distribute malware to a larger audience.

The infection chain starts with a manipulated start script that launches an additional executable via PowerShell.

As per the SecureList report, the loader (written in Python) is deployed with PyInstaller and fetches the next-stage payload from hardcoded domains. The second-stage loader runs environment checks, adds the “AppData directory to Microsoft Defender exclusions,” and downloads the final payload, “SilentCryptominer.”

The infamous SilentCryptoMiner

SilentCryptoMiner is known for mining multiple cryptocurrencies via different algorithms. It uses process hollowing to inject the miner code into legitimate processes for stealth.

The malware can also evade security checks, for example by pausing mining when certain monitoring processes are running and by scanning for virtual-environment indicators.

WhatsApp Says Spyware Company Paragon Hacked 90 Users


Attempts to silence opposition voices are not new. Since the advent of new media, some governments have used spyware to keep tabs on the public, and sometimes to target individuals they consider a threat. All this is done under the guise of national security, but in some cases it is aimed at suppressing opposition and amounts to a breach of privacy.

Zero-click Spyware for WhatsApp

One such incident is the recent WhatsApp “zero-click” hacking case. In a conversation with Reuters, a WhatsApp official disclosed that Israeli spyware company Paragon Solutions had been targeting its users; the victims include journalists and civil society members. Earlier this week, the official told Reuters that WhatsApp had sent Paragon a cease-and-desist letter after discovering the surveillance hack. In its official statement, WhatsApp stressed it will “continue to protect people's ability to communicate privately."

Paragon refused to comment

According to Reuters, WhatsApp had noticed attempts to hack around 90 users. The official didn’t disclose the identity of the targets but hinted that the victims belonged to more than a dozen countries, mostly in Europe. The targets were sent infected files that required no user interaction to compromise their devices, a technique known as a “zero-click” hack and prized for its stealth.

“The official said WhatsApp had since disrupted the hacking effort and was referring targets to Canadian internet watchdog group Citizen Lab,” Reuters reports. He didn’t discuss how WhatsApp determined that Paragon was the culprit, but added that law enforcement agencies and industry partners had been notified, without giving further details.

FBI didn’t respond immediately

“The FBI did not immediately return a message seeking comment,” Reuters said. Citizen Lab researcher John Scott-Railton said the discovery of Paragon spyware targeting WhatsApp users is “a reminder that mercenary spyware continues to proliferate and as it does, so we continue to see familiar patterns of problematic use."


Ethical implications concerning spying software

Spyware businesses like Paragon sell advanced surveillance software to government clients, projecting their services as “critical to fighting crime and protecting national security,” Reuters notes. However, history suggests that such surveillance tools have largely been used for spying; in this case, the targets reportedly included journalists, activists, opposition politicians, and around 50 U.S. officials. This raises questions about the unchecked use of such technology.

Paragon, which was reportedly acquired by Florida-based investment group AE Industrial Partners last month, has tried to position itself publicly as one of the industry's more responsible players. On its website, it advertises “ethically based tools, teams, and insights to disrupt intractable threats,” and media reports citing people acquainted with the company say Paragon only sells to governments in stable democratic countries, Reuters notes.

Meta Removes Independent Fact Checkers, Replaces With "Community Notes"


Meta to remove fact-checkers

Meta is dropping independent fact-checkers on Instagram and Facebook, following the lead of X (formerly Twitter), and replacing them with “community notes,” where users’ comments decide the accuracy of a post.

On Tuesday, Mark Zuckerberg in a video said third-party moderators were "too politically biased" and it was "time to get back to our roots around free expression".

The move is widely seen as an attempt by tech executives to build better relations with incoming US President Donald Trump, who takes office this month.

Republican Party and Meta

The Republican Party and Trump have criticized Meta's fact-checking policies, arguing that the platform censors right-wing voices.

After the new policy was announced, Trump said in a news conference that he was pleased with the decision and that Meta had “come a long way”.

Online anti-hate speech activists expressed disappointment with the shift, claiming it was motivated by a desire to align with Trump.

“Zuckerberg's announcement is a blatant attempt to cozy up to the incoming Trump administration – with harmful implications. Claiming to avoid "censorship" is a political move to avoid taking responsibility for hate and disinformation that platforms encourage and facilitate,” said Ava Lee of Global Witness, an organization that campaigns to hold big tech companies like Meta accountable.

Copying X

Meta's current fact-checking program, introduced in 2016, sends posts that appear false or misleading to independent fact-checking organizations for review.

Posts marked as misleading have labels attached to them, giving users more information, and are ranked lower in users’ feeds. This system will now be replaced by community notes, starting in the US. Meta says it has no “immediate plans” to remove third-party fact-checkers in the EU or the UK.

The community notes model is copied from X, where it was introduced after Elon Musk bought Twitter.

It relies on people with opposing viewpoints agreeing on notes that add insight or context to disputed posts.

Meta's announcement stated: "We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations. We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can."

India Proposes New Draft Rules Under Digital Personal Data Protection Act, 2023




The Ministry of Electronics and Information Technology (MeitY) announced on January 3, 2025, the release of draft rules under the Digital Personal Data Protection Act, 2023 for public feedback. A significant provision in this draft mandates that parental consent must be obtained before processing the personal data of children under 18 years of age, including creating social media accounts. This move aims to strengthen online safety measures for minors and regulate how digital platforms handle their data.

The draft rules explicitly require social media platforms to secure verifiable parental consent before allowing minors under 18 to open accounts. This provision is intended to safeguard children from online risks such as cyberbullying, data breaches, and exposure to inappropriate content. Verification may involve government-issued identification or digital identity tools like Digital Lockers.

MeitY has invited the public to share their opinions and suggestions regarding the draft rules through the government’s citizen engagement platform, MyGov.in. The consultation window remains open until February 18, 2025. Public feedback will be reviewed before the finalization of the rules.

Consumer Rights and Data Protection Measures

The draft rules enhance consumer data protection by introducing several key rights and safeguards:
  • Data Deletion Requests: Users can request companies to delete their personal data.
  • Transparency Obligations: Companies must explain why user data is being collected and how it will be used.
  • Penalties for Data Breaches: Data fiduciaries will face fines of up to ₹250 crore for data breaches.

To ensure compliance, the government plans to establish a Data Protection Board, an independent digital regulatory body. The Board will oversee data protection practices, conduct investigations, enforce penalties, and regulate consent managers. Consent managers must register with the Board and maintain a minimum net worth of ₹12 crore.

Mixed Reactions to the Proposed Rules

The draft rules have received a blend of support and criticism. Supporters, like Saneh Lata, a teacher and mother of two from Dwarka, Delhi, appreciate the move, citing social media as a significant distraction for children. Critics, however, argue that the regulations may lead to excessive government intervention in children's digital lives.

Certain institutions, such as educational organizations and child welfare bodies, may be exempt from some provisions to ensure uninterrupted educational and welfare services. Additionally, digital intermediaries like e-commerce, online gaming, and social media platforms are subject to specific guidelines tailored to their operations.

The proposed draft rules mark a significant step towards strengthening data privacy, especially for vulnerable groups like children and individuals under legal guardianship. By holding data fiduciaries accountable and empowering consumers with greater control over their data, the government aims to create a safer and more transparent digital ecosystem.

FireScam Malware Targets Android Users via Fake Telegram Premium App

Android Malware 'FireScam' Poses As Telegram Premium to Steal User Data


A newly discovered Android malware, FireScam, is being distributed through phishing websites hosted on GitHub that impersonate RuStore, a Russian app marketplace. These malicious sites deceive users into downloading software disguised as a premium version of the Telegram application.

How FireScam Operates

RuStore, launched by Russian tech giant VK (VKontakte) in May 2022, was developed as an alternative to Apple's App Store and Google Play following Western sanctions that restricted Russian users' access to global platforms. This marketplace hosts apps that comply with Russian regulations and operates under the oversight of the Russian Ministry of Digital Development.

According to security researchers at CYFIRMA, attackers have set up a fraudulent GitHub page mimicking RuStore. This fake website delivers a dropper module named GetAppsRu.apk. Once installed, the dropper requests extensive permissions, allowing it to scan installed applications, access device storage, and install additional software. It then downloads and executes the main malware payload, disguised as Telegram Premium.apk. This secondary payload enables the malware to monitor notifications, read clipboard data, access SMS and call information, and collect other sensitive details.

FireScam’s Advanced Capabilities

Once activated, FireScam presents users with a deceptive WebView-based Telegram login page designed to steal credentials. The malware communicates with Firebase Realtime Database, allowing stolen data to be uploaded instantly. It also assigns unique identifiers to compromised devices, enabling hackers to track them.

Stolen data is temporarily stored before being filtered and transferred to another location, ensuring that traces are erased from Firebase. Additionally, FireScam establishes a persistent WebSocket connection with the Firebase command-and-control (C2) server, enabling real-time command execution. This allows attackers to:

  • Request specific data from the infected device
  • Install additional payloads
  • Modify surveillance parameters
  • Initiate immediate data uploads

Furthermore, the malware can:

  • Monitor screen activity and app usage
  • Track changes in screen on/off states
  • Log keystrokes, clipboard data, and credentials stored in password managers
  • Intercept and steal e-commerce payment details

How to Stay Safe

While the identity of FireScam’s operators remains unknown, CYFIRMA researchers warn that the malware exhibits advanced evasion techniques and poses a serious threat to users. To minimize the risk of infection, users should:

  • Avoid downloading apps from unverified sources, especially those claiming to be premium versions of popular software.
  • Exercise caution when opening links from unknown sources.
  • Regularly review and restrict app permissions to prevent unauthorized data access.
  • Use reliable security solutions to detect and block malware threats.

As attackers continue refining their tactics, staying vigilant against phishing campaigns and suspicious downloads is essential to protecting personal and financial data.


The Digital Markets Act (DMA): A Game Changer for Tech Companies


The Digital Markets Act (DMA) is poised to reshape the European digital landscape. This pioneering legislation by the European Union seeks to curb the dominance of tech giants, foster competition, and create a fairer digital marketplace for consumers and businesses alike. By enforcing strict regulations on major players like Google, Apple, and Meta, the DMA aims to dismantle monopolistic practices and ensure greater choice and transparency.

The DMA targets the "gatekeepers" of the digital economy—large companies that control access to critical digital services. By requiring these firms to unbundle tightly integrated ecosystems, the act provides smaller players an opportunity to thrive.

For instance, companies will no longer be able to self-preference their own products in search rankings or restrict users from installing third-party apps. These changes promise to unlock innovation and drive competition across the digital ecosystem.

Google’s longstanding practice of integrating services such as Maps, Calendar, and Docs with its search engine has faced criticism for sidelining competitors. Under the DMA, Google must separate these services, starting with Maps.

While these integrations have offered users convenience, they have limited market access for alternatives like HERE WeGo and OpenStreetMap. The new regulations could disrupt Google’s user experience but pave the way for smaller mapping solutions to gain traction.

Apple faces significant challenges under the DMA. The legislation mandates opening its App Store to competing platforms, potentially allowing alternative app marketplaces to operate on iOS devices. This could disrupt Apple’s revenue streams and force the company to rethink its tightly controlled ecosystem.

Apple’s adherence to the DMA will redefine its approach to user experience while creating opportunities for developers to access a broader audience.

For consumers, the DMA promises long-term benefits by increasing choice and reducing dependency on dominant players. Initially, the transition may seem inconvenient, but the diversity it fosters will lead to a more innovative digital economy.

The DMA’s implications extend beyond Europe. It sets a precedent for how governments worldwide might regulate tech giants. Countries like the United States and India are closely watching its impact, potentially adopting similar frameworks to tackle monopolistic practices.

The Digital Markets Act is more than just a European regulation — it’s a bold step towards a competitive and equitable digital future. By leveling the playing field, it challenges global tech giants to innovate responsibly while empowering smaller businesses and consumers alike.

Practical Tips to Avoid Oversharing and Protect Your Online Privacy

 

In today’s digital age, the line between public and private life often blurs. Social media enables us to share moments, connect, and express ourselves. However, oversharing online—whether through impulsive posts or lax privacy settings—can pose serious risks to your security, privacy, and relationships. 

Oversharing involves sharing excessive personal information, such as travel plans, daily routines, or even seemingly harmless details like pet names or childhood memories. Cybercriminals can exploit this information to answer security questions, track your movements, or even plan crimes like burglary. 

Additionally, posts assumed private can be screenshotted, shared, or retrieved long after deletion, making them a permanent part of your digital footprint. Beyond personal risks, oversharing also contributes to a growing culture of surveillance. Companies collect your data to build profiles for targeted ads, eroding your control over your personal information. 

The first step in safeguarding your online privacy is understanding your audience. Limit your posts to trusted friends or specific groups using privacy tools on social media platforms. Share updates after events rather than in real-time to protect your location. Regularly review and update your account privacy settings, as platforms often change their default configurations. 

Set your profiles to private, accept connection requests only from trusted individuals, and think twice before sharing. Ask yourself if the information is something you would be comfortable sharing with strangers, employers, or cybercriminals. Avoid linking unnecessary accounts, as this creates vulnerabilities if one is compromised. 

Carefully review the permissions you grant to apps or games, and disconnect those you no longer use. For extra security, enable two-factor authentication and use strong, unique passwords for each account. Oversharing isn’t limited to social media posts; apps and devices also collect data. Disable unnecessary location tracking, avoid geotagging posts, and scrub metadata from photos and videos before sharing. Be mindful of background details in images, such as visible addresses or documents. 
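As an illustration of metadata scrubbing, the sketch below strips EXIF and IPTC segments from a JPEG using only the Python standard library. It is a simplified parser under stated assumptions: real photos may also carry XMP or vendor-specific APPn blocks, and dedicated tools such as exiftool are far more thorough.

```python
def strip_jpeg_metadata(data):
    """Return a copy of a JPEG byte string with EXIF (APP1) and
    IPTC (APP13) metadata segments removed."""
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")  # keep the Start of Image marker
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = data[i + 1]
        if marker == 0xDA:  # Start of Scan: compressed image data follows
            out += data[i:]
            break
        # Segment length is big-endian and includes its own two bytes
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker not in (0xE1, 0xED):  # drop APP1 (EXIF) / APP13 (IPTC)
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Running this over a photo before uploading removes the embedded camera and GPS records while leaving the image data untouched.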

Set up alerts to monitor your name or personal details online, and periodically search for yourself to identify potential risks. Children and teens are especially vulnerable to the risks of oversharing. Teach them about privacy settings, the permanence of posts, and safe sharing habits. Simple exercises, like the “Granny Test,” can encourage thoughtful posting. 

Reducing online activity and spending more time offline can help minimize oversharing while fostering stronger in-person connections. By staying vigilant and following these tips, you can enjoy the benefits of social media while keeping your personal information safe.

OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

Protecting Your Business from Cybercriminals on Social Media

 

Social media has transformed into a breeding ground for cybercriminal activities, posing a significant threat to businesses of all sizes. According to recent reports, more than half of all companies suffer over 30% revenue loss annually due to fraudulent activities, with social media accounting for about 37% of these scams. This is alarming because even established tech giants like Yahoo, Facebook, and Google have fallen victim to these attacks. For smaller businesses, the threat is even greater as they often lack the robust security measures needed to fend off cyber threats effectively. 

Phishing scams are among the most prevalent attacks on social media. Cybercriminals often create fake profiles that mimic company employees or business partners, tricking unsuspecting users into clicking on malicious links. These links can lead to malware installations or trick individuals into revealing sensitive information like passwords or banking details. In some instances, fraudsters might also impersonate high-level executives to manipulate employees into transferring money or sharing confidential data.

Another common method is social engineering, where cybercriminals manipulate individuals into taking actions they otherwise wouldn’t. For example, they might pretend to be company executives or representatives, convincing lower-level employees to share sensitive information, such as financial records or login credentials. This tactic is especially dangerous since it often appears as legitimate internal communication, making it harder for employees to recognize the threat.

Credential stuffing is another significant concern. In this form of attack, cybercriminals use stolen credentials from data breaches to gain unauthorized access to social media accounts. This can lead to spam, data theft, or the spread of malware through the company’s official accounts, jeopardizing both the business’s reputation and its customers’ trust.

Negative campaigns pose a different yet equally damaging threat. Attackers may post false reviews, complaints, or misinformation to tarnish a company’s image, resulting in lost sales, reduced customer loyalty, and even legal costs if the business decides to pursue action. Such campaigns can have long-lasting effects, making it difficult for companies to rebuild their reputations.

Targeted advertising is another avenue for cybercriminals to exploit. They create deceptive ads that mislead customers or redirect them to malicious sites, damaging the company’s credibility and resulting in financial losses.

To safeguard against these threats, businesses must take proactive steps. Using strong, unique passwords for social media accounts is essential to prevent unauthorized access. Responding quickly to any incidents can limit damage, and regular employee training on recognizing phishing attempts and social engineering tactics can reduce vulnerability. Managing access to social media accounts by limiting permissions to a select few employees minimizes risk, and regularly updating systems and applications ensures that security patches protect against known vulnerabilities.

By implementing these preventive measures, businesses can better defend themselves against the growing threats posed by cybercriminals on social media, maintaining their reputation, customer trust, and financial stability.
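The first of those measures, strong and unique passwords, is easy to get right programmatically. As a minimal illustrative sketch (the function name and the 12-character floor are assumptions of this example, not anything prescribed above), Python's standard `secrets` module draws from the operating system's cryptographic random source:

```python
import secrets
import string

# Full printable set: letters, digits, and punctuation
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    """Return a random password; secrets uses the OS CSPRNG, unlike random."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

Because each call produces an independent password, generating one per account and storing them in a password manager avoids the reuse that credential stuffing exploits.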

Russian Disinformation Network Struggles to Survive Crackdown


 

The Russian disinformation network, known as Doppelgänger, is facing difficulties as it attempts to secure its operations in response to increased efforts to shut it down. According to a recent report by the Bavarian State Office for the Protection of the Constitution (BayLfV), the network has been scrambling to protect its systems and data after its activities were exposed.

Doppelgänger’s Activities and Challenges

Doppelgänger has been active in spreading false information across Europe since at least May 2022. The network has created numerous fake social media accounts, fraudulent websites posing as reputable news sources, and its own fake news platforms. These activities have primarily targeted Germany, France, the United States, Ukraine, and Israel, aiming to mislead the public and spread disinformation.

BayLfV’s report indicates that Doppelgänger’s operators were forced to take immediate action to back up their systems and secure their operations after it was revealed that European hosting companies were unknowingly providing services to the network. The German agency monitored the network closely and discovered details about the working patterns of those involved, noting that they operated during Russian office hours and took breaks on Russian holidays.

Connections to Russia

Further investigation by BayLfV uncovered clear links between Doppelgänger and Russia. The network used Russian IP addresses and the Cyrillic alphabet in its operations, reinforcing its connection to the Kremlin. The network's activities were timed with Moscow and St. Petersburg working hours, further suggesting coordination with Russian time zones.

This crackdown comes after a joint investigation by digital rights groups Qurium and EU DisinfoLab, which exposed Doppelgänger's infrastructure spread across at least ten European countries. Although German authorities were aware of the network’s activities, they had not taken proper action until recently.

Social Media Giant Meta's Response

Facebook’s parent company, Meta, has been actively working to combat Doppelgänger’s influence on its platforms. Meta reported that the network has been forced to change its tactics due to ongoing enforcement efforts. Since May, Meta has removed over 5,000 accounts and pages linked to Doppelgänger, disrupting its operations.

In an attempt to avoid detection, Doppelgänger has shifted its focus to spoofing websites of nonpolitical and entertainment news outlets, such as Cosmopolitan and The New Yorker. However, Meta noted that most of these efforts are being caught quickly, either before they go live or shortly afterward, indicating that the network is struggling to maintain its previous level of influence.

Impact on Doppelgänger’s Operations

The pressure from law enforcement and social media platforms is clearly affecting Doppelgänger’s ability to operate. Meta highlighted that the quality of the network’s disinformation campaigns has declined as it struggles to adapt to the persistent enforcement. The goal is to continue increasing the cost of these operations for Doppelgänger, making it more difficult for the network to continue spreading false information.

This ongoing crackdown on Doppelgänger demonstrates the challenges in combating disinformation and the importance of coordinated efforts to protect the integrity of information in today’s digital environment.


The UK Erupts in Riots as Big Tech Stays Silent


 

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, refusing to address their role in the situation publicly.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn intense scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


Why Did Turkey Suddenly Ban Instagram? The Shocking Reason Revealed


 

On Friday, Turkey's Information and Communication Technologies Authority (ICTA) unexpectedly blocked Instagram access across the country. The ICTA, responsible for overseeing internet regulations, did not provide any specific reason for the ban. However, according to reports from Yeni Safak, a newspaper supportive of the government, the ban was likely a response to Instagram removing posts by Turkish users that expressed condolences for Hamas leader Ismail Haniyeh's death.

Many Turkish users faced difficulties accessing Instagram following the ban. Fahrettin Altun, the communications director for the Turkish presidency, publicly condemned Instagram, accusing it of censoring messages of sympathy for Haniyeh, whom he called a martyr. This incident has sparked significant controversy within Turkey.

Haniyeh’s Death and Its Aftermath

Ismail Haniyeh, the political leader of Hamas and a close associate of Turkish President Recep Tayyip Erdogan, was killed in an attack in Tehran on Wednesday, an act allegedly carried out by Israel. His death prompted widespread reactions in Turkey, with many taking to social media to express their condolences and solidarity, leading to the conflict with Instagram.

A History of Social Media Restrictions in Turkey

This is not the first instance of social media restrictions in Turkey. The country, with a population of 85 million, has over 50 million Instagram users, making such bans highly impactful. From April 2017 to January 2020, Turkey blocked access to Wikipedia over articles that linked the Turkish government to extremism, sharply limiting the flow of information.

This recent action against Instagram is part of a broader pattern of conflicts between the Turkish government and social media companies. In April, Meta, the parent company of Facebook, had to suspend its Threads network in Turkey after authorities blocked its information sharing with Instagram. This reflects the ongoing tension between Turkey and major social media firms.

The blockage of Instagram illustrates the persistent struggle between the Turkish government and social media platforms over content regulation and freedom of expression. These restrictions pose crucial challenges to the dissemination of information and public discourse, affecting millions who rely on these platforms for news and communication. 

Turkey's decision to block Instagram is a testament to the complex dynamics between the government and digital platforms. As the situation develops, it will be essential to observe the responses from both Turkish authorities and the affected social media companies to grasp the broader implications for digital communication and freedom of speech in Turkey.


Telegram Users Cross 900 Million, Company Plans to Launch App Store


Aims to reach 1 billion users: Telegram founder

Telegram, the popular messaging app, recently crossed 900 million active users and aims to pass the 1 billion milestone in 2024. According to Pavel Durov, the company's founder, it also plans to launch an app store and an in-app browser supporting web3 pages by July.

Telegram reached the 900 million mark in March. Addressing the achievement, Durov said the company wants to be profitable by 2025.

Telegram has been proactive in adopting web3 technology for its platform. Since the beginning, the company has been a strong supporter of blockchain and cryptocurrency initiatives, but it could not enter the space earlier because its initial coin offering failed in 2018. “We began monetizing primarily to maintain our independence. Generally, we see value in [an IPO] as a means of democratizing access to Telegram's assets,” Durov said in an interview with the Financial Times earlier this year.

Telegram and TON blockchain

Telegram started auctioning usernames on the TON blockchain in December 2018. It has emphasized helping developers build mini-apps and games that use cryptocurrency for transactions. In 2024, the company started sharing ad revenue with channel owners, paying out in Toncoin (a token on the TON blockchain). At the beginning of July 2024, Telegram began allowing channel owners to convert Stars to Toncoin, which can be used to buy ads at discounted prices or to trade cryptocurrencies.

Scams and Telegram

But Telegram has long suffered from scams and attacks by threat actors. According to a Kaspersky report, scammers have been running schemes to steal Toncoin from users since November 2023. Durov says Telegram plans to improve its moderation processes this year, as multiple global elections take place (some already have), and to deploy AI-based mechanisms to address potential problems.

Financial Times reported “Messaging rival WhatsApp, owned by Meta, has 1.8bn monthly active users, while encrypted communications app Signal has 30mn as of February 2024, according to an analysis by Sensor Tower, though this data only covers mobile app use. Telegram’s bid for advertising dollars is at odds with its reputation as a renegade platform with a hands-off approach to moderation, which recently drew scrutiny for allowing some Hamas-related content to remain on the platform. ”

Supreme Court Directive Mandates Self-Declaration Certificates for Advertisements

 

In a landmark ruling, the Supreme Court of India recently directed every advertiser and advertising agency to submit a self-declaration certificate confirming that their advertisements do not make misleading claims and comply with all relevant regulatory guidelines before broadcasting or publishing. This directive stems from the case of Indian Medical Association vs Union of India. 

To enforce this directive, the Ministry of Information and Broadcasting has issued comprehensive guidelines outlining the procedure for obtaining these certificates, which became mandatory from June 18, 2024, onwards. This move is expected to significantly impact advertisers, especially those using deepfakes generated by Generative AI (GenAI) on social media platforms like Instagram, Facebook, and YouTube. The use of deepfakes in advertisements has been a growing concern. 

In a previous op-ed titled “Urgently needed: A law to protect consumers from deepfake ads,” the rising menace of deepfake ads making misleading or fraudulent claims was highlighted, emphasizing the adverse effects on consumer rights and public figures. A survey conducted by McAfee revealed that 75% of Indians encountered deepfake content, with 38% falling victim to deepfake scams, and 18% directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content.

The new guidelines aim to address these issues by requiring advertisers to provide bona fide details and final versions of advertisements to support their declarations. This measure is expected to aid in identifying and locating advertisers, thus facilitating tracking once complaints are filed.

Additionally, it empowers courts to impose substantial fines on offenders. Despite the potential benefits, industry bodies such as the Indian Internet and Mobile Association of India (IAMAI), Indian Newspaper Association (INS), and the Indian Society of Advertisers (ISA) have expressed concerns over the additional compliance burden, particularly for smaller advertisers. These bodies argue that while self-certification has merit, the process needs to be streamlined to avoid hampering legitimate advertising activities. The challenge of regulating AI-enabled deepfake ads is further complicated by the sheer volume of digital advertisements, making it difficult for regulators to review each one. 

Therefore, it is suggested that online platforms be obligated to filter out deepfake ads, leveraging their technology and resources for efficient detection. The Ministry of Electronics and Information Technology highlighted the negligence of social media intermediaries in fulfilling their due diligence obligations under the IT Rules in a March 2024 advisory. 

Although non-binding, the advisory stipulates that intermediaries must not allow unlawful content on their platforms. The Supreme Court is set to hear the matter again on July 9, 2024, when industry bodies are expected to present their views on the new guidelines. This intervention could address the shortcomings of current regulatory approaches and set a precedent for robust measures against deceptive advertising practices. 

As the country grapples with the growing threat of dark patterns in online ads, the apex court’s involvement is crucial in ensuring consumer protection and the integrity of advertising practices in India.

Stay Secure: How to Prevent Zero-Click Attacks on Social Platforms


While we have all learned to avoid clicking on suspicious links and to be wary of scammers, this week we were reminded that there are silent threats we should be aware of: zero-click attacks.

Recent Incidents

As Forbes first reported, TikTok revealed that several high-profile accounts, including those of CNN and Paris Hilton, were compromised simply through the receipt of a direct message (DM). Attackers apparently used a zero-day vulnerability in the messaging component to execute malicious code when the message was opened.

The NSA has advised all smartphone users to turn their devices off and back on once a week as a safeguard against zero-click attacks, although the agency acknowledges that this tactic will only occasionally prevent such attacks from succeeding. There are still further steps you can take to protect yourself, and security software can assist you.

TikTok’s Vulnerability: A Case Study in Zero-Click Exploits

As the name implies, a zero-click attack or exploit requires no activity from the victim. Malicious software can be installed on the targeted device without the user clicking on any links or downloading any harmful files.

This makes zero-click attacks extremely difficult to detect: because no engagement from the victim is required, there are far fewer warning signs of hostile activity.

Cybercriminals carry out zero-click exploits through unpatched flaws in software code, known as zero-day vulnerabilities. According to experts at security firm Kaspersky, apps with messaging or voice-calling functions are frequent targets because "they are designed to receive and interpret data from untrusted sources", which makes them more vulnerable.

Once a device vulnerability has been properly exploited, hackers can use malware, such as info stealers, to scrape your private data. Worse, they can install spyware in the background, recording all of your activity.

The Silent Threat

This is exactly how the Pegasus spyware reached so many victims (more than 1,000 people in 50 countries, according to a 2021 joint investigation) without them even knowing it.

The same year, security experts at Citizen Lab revealed that attackers had used two zero-click iMessage exploits to infect the iPhones of nine Bahraini activists with Pegasus spyware. In 2019, attackers exploited a WhatsApp zero-day vulnerability to inject malware into devices via a missed call.

As the celebrity TikTok hack story shows, social media platforms are becoming the next popular target. Meta, for example, recently patched a similar vulnerability that could have let attackers take over any Facebook account.

Protective Measures

Stay Updated
  • Regularly update your operating system, apps, and firmware. Patches often address known vulnerabilities.
  • Enable automatic updates to stay protected without manual intervention.
App Store Caution
  • Download apps only from official app stores (e.g., Google Play, Apple App Store). Third-party sources may harbor malicious apps.
  • Remove unused apps to reduce your attack surface.
Multi-Factor Authentication (MFA)
  • Enable MFA for all your accounts, especially social media platforms. Even if an attacker gains access to your password, MFA adds an extra layer of security.
  • Use authenticator apps or hardware tokens instead of SMS-based codes.
Beware of DMs
  • Be cautious when opening DMs, especially from unknown senders.
  • Avoid clicking on links or downloading files unless you’re certain of their legitimacy.
Media Files Scrutiny
  • Treat media files (images, videos, audio) with suspicion.
  • Avoid opening files from untrusted sources, even if they appear harmless.
No Jailbreaking or Rooting
  • Modifying your device’s software (jailbreaking/rooting) weakens security.
  • Stick to the official software to maintain robust defenses.
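The authenticator apps recommended above generate time-based one-time passwords (TOTP, RFC 6238): a shared secret is combined with the current 30-second time window using HMAC-SHA1. As a hedged sketch of how that works (function names are illustrative; real deployments should rely on a vetted library), the core fits in a few lines of Python:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(base32_secret: str, digits: int = 6, step: int = 30, now=None) -> str:
    # The secret is the base32 string your authenticator stores at enrollment
    key = base64.b32decode(base32_secret.upper())
    counter = int((time.time() if now is None else now) // step)
    return hotp(key, counter, digits)
```

Because each code is valid only for one short time window and the secret never leaves the device, an attacker who phishes your password still cannot log in, which is also why authenticator apps are preferable to interceptable SMS codes.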

Apple Working to Patch Alarming iPhone Issue

 

Apple says it is working rapidly to resolve an issue that caused some iPhone alarms not to go off, giving sleeping users an unexpected lie-in.

Many people rely on their phones as alarm clocks, and some oversleepers took to social media to gripe. One TikTokker expressed dissatisfaction at setting "like five alarms" that failed to go off.

Apple has stated that it is aware of the issue at hand, but has yet to explain what it believes is causing it or how users may avoid a late start. 

It's also unknown how many people are affected or whether the issue is limited to specific iPhone models. The problem was first brought to wider attention by early risers on NBC's Today Show, which sparked concerns.

In the absence of an official solution, those losing sleep over the issue can try a few simple fixes. One is to rule out human error: double-check the phone's alarm settings and make sure the volume is turned up.

Others pointed the finger at Apple designers, claiming that a flaw in the iPhones' "attention aware features" could be to blame.

When enabled, they allow an iPhone to detect whether a user is paying attention to their device and, if so, to automatically take action, such as lowering the volume of alerts, including alarms. 

According to Apple, they are compatible with the iPhone X and later, as well as the iPad Pro 11-inch and iPad Pro 12.9-inch. Some TikTok users speculated that if a slumbering user's face was oriented towards the screen of a bedside iPhone, depending on the phone's settings, the functionalities may be activated. 

Apple said it intends to resolve the issue quickly. But, until then, its time zone-spanning consumer base may need to dust off some old gear and replace TikTok with the more traditional - but trustworthy - tick-tock of an alarm clock.

Discord Users' Privacy at Risk as Billions of Messages Sold Online

 

In a concerning breach of privacy, an internet-scraping company, Spy.pet, has been exposed for selling private data from millions of Discord users on a clear web website. The company has been gathering data from Discord since November 2023, with reports indicating the sale of four billion public Discord messages from over 14,000 servers, housing a staggering 627,914,396 users.

How Does This Breach Work?

The term "scraped messages" refers to the method of extracting information from a platform, such as Discord, through automated tools that exploit vulnerabilities in bots or unofficial applications. This breach potentially exposes private chats, server discussions, and direct messages, highlighting a major security flaw in Discord's interaction with third-party services.

Potential Risks Involved

Security experts warn that the leaked data could contain personal information, private media files, financial details, and even sensitive company information. Usernames, real names, and connected accounts may be compromised, posing a risk of identity theft or financial fraud. Moreover, if Discord is used for business communication, the exposure of company secrets could have serious implications.

Operations of Spy.pet

Spy.pet operates as a chat-harvesting platform, collecting user data such as aliases, pronouns, connected accounts, and public messages. To access profiles and archives of conversations, users must purchase credits, priced at $0.01 each with a minimum of 500 credits. Notably, the platform only accepts cryptocurrency payments, excluding Coinbase due to a ban. Despite facing a DDoS attack in February 2024, Spy.pet claims minimal damage.

How To Protect Yourself?

Discord is actively investigating Spy.pet and is committed to safeguarding users' privacy. In the meantime, users are advised to review their Discord privacy settings, change passwords, enable two-factor authentication, and refrain from sharing sensitive information in chats. Any suspected account compromises should be reported to Discord immediately.

What Are The Implications?

Many Discord users may not realise the permanence of their messages, assuming them to be ephemeral in the fast-paced environment of public servers. However, Spy.pet's data compilation service raises concerns about the privacy and security of users' conversations. While private messages are currently presumed secure, the sale of billions of public messages underscores the importance of heightened awareness while engaging in online communication.

The discovery of Spy.pet's actions is a clear signal of how vulnerable online platforms can be and underscores the critical need for strong privacy safeguards. It's crucial for Discord users to stay alert and take active measures to safeguard their personal data in response to this breach. As inquiries progress, the wider impact of this privacy violation on internet security and data protection is a substantial concern that cannot be overlooked.


Apple Steps Up Spyware Alerts Amid Rising Mercenary Threats

 


It has been reported that Apple sent notifications on April 10 to users in India and 91 other countries, warning them that they might be victims of a possible mercenary spyware attack. As stated in the company's notification to the affected users, these attacks were intended to 'remotely compromise the iPhone associated with the users' Apple IDs,' suggesting the attackers might have targeted the recipients specifically because of who they are or what they do.

A threat notification has been issued to users worldwide after fears were raised that sophisticated spyware attacks could be targeting high-profile Apple customers. There had been a similar warning sent out to Indian Apple users back in October last year, in which members of the Indian Parliament and journalists were alerted about potential ‘state-sponsored attacks'. 

Those alerted last year took to social media in response; this time around, that has not been the case. Apple introduced this threat-notification feature in 2021, after the Pegasus surveillance issue. The alerts are sent to users when Apple detects activity consistent with a state-sponsored attack.

Apple's recent alert underlines the dangers of mercenary spyware, such as the NSO Group's notorious Pegasus, and how complex and rare these threats are. According to the company's warning email, the spyware was designed to secretly infiltrate iPhones associated with particular Apple IDs.

There has been a lot of speculation surrounding this issue, with Apple indicating that attackers may select their targets depending on their identity or profession. Mercenary spyware refers to sophisticated malware developed and deployed primarily by private entities, sometimes at the direction of national authorities.

In a message issued by the company, users were warned that advanced spyware may attempt to remotely access their iPhones, indicating that they may be at risk. The attacks, according to Apple, are both “exceptionally rare” and “vastly more sophisticated” than the usual cybercrime activities or consumer malware. 

In addition to stressing the unique characteristics of threats such as NSO Group's Pegasus spyware, the company pointed out that such attacks are individually tailored, cost millions of dollars to launch, and affect only a very small percentage of customers. These efforts align with a broader international push: a coalition of countries, including the United States, is currently working to create safeguards against the misuse of commercial spyware.

Furthermore, a recent report released by Google's Threat Analysis Group (TAG) and Mandiant shed light on the exploitation of zero-day vulnerabilities in 2023, revealing that a significant portion of these exploits could be attributed to commercial surveillance vendors. Web browser and mobile device vulnerabilities remain a major avenue for threat actors' evasion and persistence strategies, an indication of how heavily they rely on zero-day exploits.

Among the most concerning developments, opposition politicians in India raised concerns in October about possible government involvement in attacks on their mobile phones, citing Apple's earlier alert about state-sponsored attacks. CERT-In, India's national cybersecurity watchdog, has also issued a high-risk warning about vulnerabilities in Apple products affecting the entire Apple ecosystem.

These vulnerabilities may enable attackers to access sensitive information, execute unauthorized code, bypass security measures, and spoof systems to carry out identity theft and other attacks. The advisory covers a wide range of Apple software and devices, including iOS, iPadOS, macOS, tvOS, watchOS, and Safari.

Apple also recommends that users remain vigilant regarding suspicious links and attachments, as some attacks might be exploiting the power of social engineering to mislead users into clicking on malicious links. When users suspect that they are being targeted, even in the absence of a threat notification, precautions should be taken to avoid exposing themselves to security threats. 

These precautions include changing passwords and consulting digital-security experts. In the face of these evolving threats, Apple emphasizes that users should work with security professionals to mitigate the risks effectively; proactive measures and greater awareness of cyber threats are increasingly important amid growing digital privacy concerns.


Fearing that further details might help attackers plan stealthier attacks, Apple decided not to share any more information about the spyware. It has also added new advice to its support page for users who might be affected by mercenary spyware attacks, explaining that these threats are tailored to each individual and their particular device, which makes them difficult to detect and hard to eliminate.