OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.
The Russian disinformation network, known as Doppelgänger, is facing difficulties as it attempts to secure its operations in response to increased efforts to shut it down. According to a recent report by the Bavarian State Office for the Protection of the Constitution (BayLfV), the network has been scrambling to protect its systems and data after its activities were exposed.
Doppelgänger’s Activities and Challenges
Doppelgänger has been active in spreading false information across Europe since at least May 2022. The network has created numerous fake social media accounts, fraudulent websites posing as reputable news sources, and its own fake news platforms. These activities have primarily targeted Germany, France, the United States, Ukraine, and Israel, aiming to mislead the public and spread disinformation.
BayLfV’s report indicates that Doppelgänger’s operators were forced to take immediate action to back up their systems and secure their operations after it was revealed that European hosting companies were unknowingly providing services to the network. The German agency monitored the network closely and discovered details about the working patterns of those involved, noting that they operated during Russian office hours and took breaks on Russian holidays.
Connections to Russia
Further investigation by BayLfV uncovered clear links between Doppelgänger and Russia. The network used Russian IP addresses and the Cyrillic alphabet in its operations, reinforcing its connection to the Kremlin. The network's activities were timed with Moscow and St. Petersburg working hours, further suggesting coordination with Russian time zones.
This crackdown comes after a joint investigation by digital rights groups Qurium and EU DisinfoLab, which exposed Doppelgänger's infrastructure spread across at least ten European countries. Although German authorities were aware of the network’s activities, they had not taken proper action until recently.
Social Media Giant Meta's Response
Facebook’s parent company, Meta, has been actively working to combat Doppelgänger’s influence on its platforms. Meta reported that the network has been forced to change its tactics due to ongoing enforcement efforts. Since May, Meta has removed over 5,000 accounts and pages linked to Doppelgänger, disrupting its operations.
In an attempt to avoid detection, Doppelgänger has shifted its focus to spoofing websites of nonpolitical and entertainment news outlets, such as Cosmopolitan and The New Yorker. However, Meta noted that most of these efforts are being caught quickly, either before they go live or shortly afterward, indicating that the network is struggling to maintain its previous level of influence.
Impact on Doppelgänger’s Operations
The pressure from law enforcement and social media platforms is clearly affecting Doppelgänger’s ability to operate. Meta highlighted that the quality of the network’s disinformation campaigns has declined as it struggles to adapt to the persistent enforcement. The goal is to continue increasing the cost of these operations for Doppelgänger, making it more difficult for the network to continue spreading false information.
This ongoing crackdown on Doppelgänger demonstrates the challenges in combating disinformation and the importance of coordinated efforts to protect the integrity of information in today's digital environment.
For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, declining to address their role in the situation publicly.
Big Tech's Reluctance to Speak
Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.
Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.
Elon Musk's Twitter and the Spread of Misinformation
The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn intense scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.
Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.
Experts Weigh In on Big Tech's Silence
Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.
Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.
What Happens Next?
As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.
Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.
A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.
On Friday, Turkey's Information and Communication Technologies Authority (ICTA) unexpectedly blocked Instagram access across the country. The ICTA, responsible for overseeing internet regulations, did not provide any specific reason for the ban. However, according to reports from Yeni Safak, a newspaper supportive of the government, the ban was likely a response to Instagram removing posts by Turkish users that expressed condolences for Hamas leader Ismail Haniyeh's death.
Many Turkish users faced difficulties accessing Instagram following the ban. Fahrettin Altun, the communications director for the Turkish presidency, publicly condemned Instagram, accusing it of censoring messages of sympathy for Haniyeh, whom he called a martyr. This incident has sparked significant controversy within Turkey.
Haniyeh’s Death and Its Aftermath
Ismail Haniyeh, the political leader of Hamas and a close associate of Turkish President Recep Tayyip Erdogan, was killed in an attack in Tehran on Wednesday, an act allegedly carried out by Israel. His death prompted widespread reactions in Turkey, with many taking to social media to express their condolences and solidarity, leading to the conflict with Instagram.
A History of Social Media Restrictions in Turkey
This is not the first instance of social media restrictions in Turkey. The country, with a population of 85 million, includes over 50 million Instagram users, making such bans highly impactful. From April 2017 to January 2020, Turkey blocked access to Wikipedia due to articles that linked the Turkish government to extremism, significantly limiting the flow of information.
This recent action against Instagram is part of a broader pattern of conflicts between the Turkish government and social media companies. In April, Meta, the parent company of Facebook, had to suspend its Threads network in Turkey after authorities blocked its information sharing with Instagram, reflecting ongoing tensions between Turkey and major social media firms.
The blockage of Instagram illustrates the persistent struggle between the Turkish government and social media platforms over content regulation and freedom of expression. These restrictions pose crucial challenges to the dissemination of information and public discourse, affecting millions who rely on these platforms for news and communication.
Turkey's decision to block Instagram is a testament to the complex dynamics between the government and digital platforms. As the situation unfolds, it will be essential to observe the responses from both Turkish authorities and the affected social media companies to grasp the broader implications for digital communication and freedom of speech in Turkey.
Telegram, the popular messaging app, recently crossed 900 million active users and is aiming to pass the 1 billion milestone in 2024. According to Pavel Durov, the company's founder, it also plans to launch an app store and an in-app browser supporting Web3 pages by July.
In March, Telegram reached the 900 million mark. Addressing the milestone, Durov said the company aims to be profitable by 2025.
Telegram has been proactive in adopting Web3 technology on its platform. From the beginning, the company has been a strong supporter of blockchain and cryptocurrency initiatives, but it was unable to enter the space following the failure of its 2018 initial coin offering. “We began monetizing primarily to maintain our independence. Generally, we see value in [an IPO] as a means of democratizing access to Telegram's assets,” Durov said in an interview with the Financial Times earlier this year.
Telegram began auctioning usernames on the TON blockchain in December 2022. It has emphasized helping developers build mini-apps and games that use cryptocurrency for transactions. In 2024, the company started sharing ad revenue with channel owners, paying out Toncoin (a token on the TON blockchain). At the beginning of July 2024, Telegram began allowing channel owners to convert Stars to Toncoin to buy ads at discounted prices or to trade cryptocurrencies.
But Telegram has long suffered from scams and attacks by threat actors. According to a Kaspersky report, since November 2023 users have fallen victim to various phishing schemes that let scammers steal Toncoin. According to Durov, Telegram plans to improve its moderation processes this year as multiple global elections take place (some have already occurred) and to deploy AI-based mechanisms to address potential problems.
The Financial Times reported: “Messaging rival WhatsApp, owned by Meta, has 1.8bn monthly active users, while encrypted communications app Signal has 30mn as of February 2024, according to an analysis by Sensor Tower, though this data only covers mobile app use. Telegram’s bid for advertising dollars is at odds with its reputation as a renegade platform with a hands-off approach to moderation, which recently drew scrutiny for allowing some Hamas-related content to remain on the platform.”
As Forbes first reported, TikTok revealed that several high-profile accounts, including those of CNN and Paris Hilton, were compromised simply by being sent a direct message (DM). Attackers apparently exploited a zero-day vulnerability in the messaging component to execute malicious code when the message was opened.
The NSA has advised all smartphone users to turn their devices off and back on once a week as a safeguard against zero-click attacks, though the agency concedes that this tactic will only occasionally prevent such attacks from succeeding. Still, there are steps you can take to protect yourself, and security software such as a reputable VPN can help.
As the name implies, a zero-click attack or exploit requires no activity from the victim. Malicious software can be installed on the targeted device without the user clicking on any links or downloading any harmful files.
This makes zero-click attacks extremely difficult to detect: because the victim never interacts with anything, there are few of the warning signs that normally reveal hostile activity.
Cybercriminals carry out zero-click exploits through unpatched flaws in software code, known as zero-day vulnerabilities. According to experts at security firm Kaspersky, apps with messaging or voice-calling functions are frequent targets because "they are designed to receive and interpret data from untrusted sources," making them more vulnerable.
Once a device vulnerability has been successfully exploited, hackers can deploy malware, such as info-stealers, to scrape your private data. Worse, they can install spyware in the background that records all of your activity.
This is exactly how the Pegasus spyware reached so many victims, more than 1,000 people in 50 countries according to a 2021 joint investigation, without them even knowing it.
The same year, security experts at Citizen Lab revealed that the iPhones of nine Bahraini activists had been infiltrated with Pegasus spyware via two zero-click iMessage exploits. In 2019, attackers used a WhatsApp zero-day vulnerability to deliver spyware to devices via a missed call.
As the celebrity TikTok hack shows, social media platforms are becoming the next popular target. Meta, for example, recently patched a similar vulnerability that could have allowed attackers to take over any Facebook account.
In a concerning breach of privacy, an internet-scraping operation, Spy.pet, has been exposed for selling private data from millions of Discord users on a public clearnet website. The operation has been gathering data from Discord since November 2023, with reports indicating the sale of four billion public Discord messages from over 14,000 servers housing a staggering 627,914,396 users.
How Does This Breach Work?
The term "scraped messages" refers to the method of extracting information from a platform, such as Discord, through automated tools that exploit vulnerabilities in bots or unofficial applications. This breach potentially exposes private chats, server discussions, and direct messages, highlighting a major security flaw in Discord's interaction with third-party services.
Potential Risks Involved
Security experts warn that the leaked data could contain personal information, private media files, financial details, and even sensitive company information. Usernames, real names, and connected accounts may be compromised, posing a risk of identity theft or financial fraud. Moreover, if Discord is used for business communication, the exposure of company secrets could have serious implications.
Operations of Spy.pet
Spy.pet operates as a chat-harvesting platform, collecting user data such as aliases, pronouns, connected accounts, and public messages. To access profiles and archives of conversations, users must purchase credits at $0.01 each, with a 500-credit ($5) minimum. Notably, the platform only accepts cryptocurrency payments, excluding Coinbase due to a ban. Despite facing a DDoS attack in February 2024, Spy.pet claims minimal damage.
How To Protect Yourself?
Discord is actively investigating Spy.pet and is committed to safeguarding users' privacy. In the meantime, users are advised to review their Discord privacy settings, change passwords, enable two-factor authentication, and refrain from sharing sensitive information in chats. Any suspected account compromises should be reported to Discord immediately.
What Are The Implications?
Many Discord users may not realise the permanence of their messages, assuming them to be ephemeral in the fast-paced environment of public servers. However, Spy.pet's data compilation service raises concerns about the privacy and security of users' conversations. While private messages are currently presumed secure, the sale of billions of public messages underscores the importance of heightened awareness while engaging in online communication.
The discovery of Spy.pet's actions is a clear signal of how vulnerable online platforms can be and underscores the critical need for strong privacy safeguards. It's crucial for Discord users to stay alert and take active measures to safeguard their personal data in response to this breach. As inquiries progress, the wider impact of this privacy violation on internet security and data protection is a substantial concern that cannot be overlooked.
Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.
Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities.
Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft.
AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation.
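The A/B testing mentioned above ultimately rests on simple statistics. As a hedged illustration (the click and view counts below are made-up numbers, and real platforms use far more sophisticated models), here is how two headline variants might be compared with a two-proportion z-test in plain Python:

```python
import math

def two_proportion_z(clicks_a: int, views_a: int,
                     clicks_b: int, views_b: int) -> float:
    """Z-score for the difference in click-through rate between two variants."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Illustrative numbers: headline B gets 8.25% CTR vs. 6% for headline A.
z = two_proportion_z(clicks_a=120, views_a=2000, clicks_b=165, views_b=2000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

A positive z-score beyond roughly 1.96 indicates the second headline's higher click-through rate is unlikely to be chance, which is the kind of signal an automated headline tester acts on.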
The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility.
As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem.
In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.
At the beginning of this year, cybersecurity researchers stumbled upon a staggering dataset containing 26 billion leaked entries. This treasure trove of compromised information includes data from prominent platforms like LinkedIn, Twitter.com, Tencent, Dropbox, Adobe, Canva, and Telegram. But the impact didn’t stop there; government agencies in the U.S., Brazil, Germany, the Philippines, and Turkey were also affected.
The MOAB, short for the "Mother of All Breaches," isn't your typical data breach: it's a 12-terabyte behemoth that cybercriminals can wield as a powerful weapon. Here's why it's a game-changer:
Identity Theft Arsenal: The stolen personal data within this dataset provides threat actors with a comprehensive toolkit. From email addresses and passwords to sensitive financial information, it’s a goldmine for orchestrating identity theft and other malicious activities.
Global Reach: The MOAB’s reach extends across borders. Organizations worldwide are at risk, and the breach’s sheer scale means that no industry or sector is immune.
As business leaders, it’s crucial to understand the implications of the MOAB and take proactive measures to safeguard your organization:
1. Continual Threat Landscape
The MOAB isn't a one-time event; it's an ongoing threat. Businesses must adopt a continuous monitoring approach to detect any signs of compromise.
2. Infrastructure Vigilance
Patch and Update: Regularly update software and apply security patches. Vulnerabilities in outdated systems can be exploited.
Multi-Factor Authentication (MFA): Implement MFA wherever possible. It adds an extra layer of security by requiring additional verification beyond passwords.
Data Encryption: Encrypt sensitive data both at rest and in transit. Even if breached, encrypted data remains useless to attackers.
Incident Response Plan: Have a robust incident response plan in place. Know how to react swiftly if a breach occurs.
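The data-protection advice above extends to stored credentials: even if a database is exfiltrated in a MOAB-scale breach, passwords stored as salted, stretched digests are far harder to exploit than plaintext. A minimal sketch using only Python's standard library (the function names are illustrative, not any particular product's API):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a storage-safe digest from a password with a random per-user salt."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Only the salt and digest are stored; the high iteration count makes brute-forcing leaked digests expensive, and the constant-time comparison avoids leaking information through timing.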
3. Customer Trust and Reputation
Transparency: If your organization is affected, be transparent with customers. Promptly inform them about the breach, steps taken, and precautions they should follow.
Reputation Management: A breach can tarnish your brand’s reputation. Communicate openly, take responsibility, and demonstrate commitment to security.
4. Legal and Regulatory Compliance
Data Protection Laws: Understand the legal obligations related to data breaches in your jurisdiction. Compliance is critical to avoid penalties.
Notification Requirements: Depending on the severity, you may need to notify affected individuals, authorities, or regulatory bodies.
5. Employee Training
Security Awareness: Train employees to recognize phishing attempts, use strong passwords, and follow security protocols.
Incident Reporting: Encourage employees to report any suspicious activity promptly.
The MOAB serves as a wake-up call for businesses worldwide. Cybersecurity isn’t a one-and-done task—it’s an ongoing commitment. By staying vigilant, implementing best practices, and prioritizing data protection, organizations can mitigate the impact of breaches and safeguard their customers’ trust.
A recent report highlights the illicit activities of cybercriminals exploiting the "Gold" verification badge on X (formerly Twitter). Following Elon Musk's acquisition of X in 2022, a paid verification system was introduced, allowing regular users to purchase blue ticks. Additionally, organizations could obtain the coveted gold check mark through a monthly subscription.