
The UK Erupts in Riots as Big Tech Stays Silent


 

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, declining to address their role in the situation publicly.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn intense scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


As Deepfakes of Sachin Tendulkar Surface, India's IT Minister Promises Tighter Rules


On Monday, Indian Minister of State for Information Technology Rajeev Chandrasekhar confirmed that the government will notify robust rules under the Information Technology Act to ensure compliance by platforms in the country.

On X, the Union Minister expressed gratitude to cricketer Sachin Tendulkar for flagging the video. Chandrasekhar said that AI-generated deepfakes and misinformation are a threat to the safety and trust of Indian users, and noted that platforms must comply with the advisory issued by the Centre.

"Thank you @sachin_rt for this tweet #DeepFakes and misinformation powered by #AI are a threat to Safety&Trust of Indian users and represents harm & legal violation that platforms have to prevent and take down. Recent Advisory by @GoI_MeitY requires platforms to comply wth this 100%. We will be shortly notifying tighter rules under IT Act to ensure compliance by platforms," Chandrasekhar posted on X.

On X, Sachin Tendulkar cautioned his fans and the public that the video was fake. He also asked viewers to report any such applications, videos, and advertisements.

"These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and fake news. @GoI_MeitY, @Rajeev_GoI and @MahaCyber1," Tendulkar said on X.

Deepfakes are artificial media that have undergone digital manipulation to effectively swap out one person's likeness for another. The alteration of facial features using deep generative techniques is known as a "deepfake." While the practice of fabricating information is not new, deepfakes use sophisticated AI and machine learning algorithms to edit or create visual and auditory content that is far more convincing.

Last month, the government urged all online platforms to abide by the IT rules and mandated companies to notify users about forbidden content transparently and accurately.

The Centre has asked platforms to take urgent action against deepfakes and ensure that their terms of use and community standards comply with the laws and IT regulations in force. The government has made it abundantly evident that any violation will be taken very seriously and could result in legal actions against the entity.  

Reddit to Pay Users for Popular Posts

Reddit, the popular social media platform, has announced that it will begin paying users for their posts. The new system, which is still in its early stages, will see users rewarded with cash for posts that are awarded "gold" by other users.

Gold awards are a form of virtual currency that can be purchased by Reddit users for a fee. They can be given to other users to reward them for their contributions to the platform. Until now, gold awards have only served as a way to show appreciation for other users' posts. However, under the new system, users who receive gold awards will also receive a share of the revenue generated from those awards.

The amount of money that users receive will vary depending on the number of gold awards they receive and their karma score. Karma score is a measure of how much other users have upvoted a user's posts and comments. Users will need to have at least 10 gold awards to cash out, and they will receive either 90 cents or $1 for each gold award.
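The payout rules described above can be sketched in a few lines. This is an illustrative reconstruction, not Reddit's actual implementation: the 10-award cash-out threshold and the $0.90 to $1.00 per-award rates come from the text, while the function name and the idea of passing the rate as a parameter are assumptions for the example.

```python
# Hypothetical sketch of the payout rules described in the text: users need
# at least 10 gold awards to cash out, at roughly $0.90 to $1.00 per award.
# The exact rate a given user gets (reportedly tied to karma) is assumed
# here to be passed in as a parameter.

def payout(gold_awards: int, rate_per_award: float = 0.90) -> float:
    """Return the cash payout in dollars, or 0.0 below the cash-out threshold."""
    MIN_AWARDS_TO_CASH_OUT = 10
    if gold_awards < MIN_AWARDS_TO_CASH_OUT:
        return 0.0
    return round(gold_awards * rate_per_award, 2)

print(payout(5))         # below the 10-award threshold: 0.0
print(payout(12))        # 12 awards at $0.90: 10.8
print(payout(12, 1.00))  # 12 awards at $1.00: 12.0
```

The threshold check is the key design point: below 10 awards, nothing accrues, which is presumably meant to keep micro-payouts from dominating transaction costs.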

Reddit says that the new system is designed to "reward the best and brightest content creators" on the platform. The company hopes that this will encourage users to create more high-quality content and contribute more to the community.

However, there are also some concerns about the new system. Some users worry that it could lead to users creating clickbait or inflammatory content to get more gold awards and more money. Others worry that the system could be unfair to users who do not have a lot of karma.

One Reddit user expressed concern that the approach will lead to low-quality content: people are more likely to upload clickbait or provocative material if they know they can make money from it.

Another Reddit member said that users with low karma may be treated unfairly by the system. According to the user, "Users with more karma will be able to profit more from the system than users with less karma." This will make users with lower karma less likely to produce high-quality content, which is unjust.

Reddit has addressed some of the issues raised by the new system. According to the company, it will actively monitor the system to make sure users aren't producing low-quality content to inflate their gold award totals. In addition, Reddit states that it will endeavor to create a system that is equitable to all users, regardless of karma.

According to a Reddit spokesman, "We understand that there are some concerns about the new system. We are dedicated to collaborating with the community to make sure that the system is just and that it inspires users to produce high-quality content."

Reddit's new strategy of compensating users for popular posts marks a dramatic change for the platform. How the system functions in practice, and whether it improves the quality of the platform's content, remains to be seen. Still, the announcement shows that Reddit is committed to iterating and innovating.

AI Boom: Cybercriminals Winning Early

Artificial intelligence (AI) is ushering in a transformative era across various industries, including the cybersecurity sector. AI is driving innovation on both sides of the field, enabling the creation of increasingly sophisticated attack methods while also bolstering the efficiency of existing defense mechanisms.

In this age of AI advancement, the potential for a safer world coexists with the emergence of fresh prospects for cybercriminals. As the adoption of AI technologies becomes more pervasive, cyber adversaries are harnessing its power to craft novel attack vectors, automate their malicious activities, and maneuver under the radar to evade detection.

According to a recent article in The Messenger, the initial beneficiaries of the AI boom are unfortunately cybercriminals. They have quickly adapted to leverage generative AI in crafting sophisticated phishing emails and deepfake videos, making it harder than ever to discern real from fake. This highlights the urgency for organizations to fortify their cybersecurity infrastructure.

On a more positive note, the demand for custom chips has skyrocketed, as reported by TechCrunch. As generative AI algorithms become increasingly complex, off-the-shelf hardware struggles to keep up. This has paved the way for a new era of specialized chips designed to power these advanced systems. Industry leaders like NVIDIA and AMD are at the forefront of this technological arms race, racing to develop the most efficient and powerful AI chips.

McKinsey's comprehensive report on the state of AI in 2023 reinforces the notion that generative AI is experiencing its breakout year. The report notes, "Generative AIs have surpassed many traditional machine learning models, enabling tasks that were once thought impossible." This includes generating realistic human-like text, images, and even videos. The applications span from content creation to simulating real-world scenarios for training purposes.

However, amidst this wave of optimism, ethical concerns loom large. The potential for misuse, particularly in deepfakes and disinformation campaigns, is a pressing issue that society must grapple with. Dr. Sarah Rodriguez, a leading AI ethicist, warns, "We must establish robust frameworks and regulations to ensure responsible use of generative AI. The stakes are high, and we cannot afford to be complacent."

The generative AI surge is reshaping industries and opening unprecedented opportunities, from creative processes to data synthesis. But we must be cautious with this technology and address the ethical issues it raises. Realizing the full benefits of generative AI will require a careful and balanced approach as we navigate this disruptive period.


Decoding Cybercriminals' Motives for Crafting Fake Data Leaks

 

Companies worldwide are facing an increasingly daunting challenge posed by data leaks, particularly due to the rise in ransomware and sophisticated cyberattacks. This predicament is further complicated by the emergence of fabricated data leaks. Instead of genuine breaches, threat actors are now resorting to creating fake leaks, aiming to exploit the situation.

The consequences of such falsified leaks are extensive, potentially tarnishing the reputation of the affected organizations. Even if the leaked data is eventually proven false, the initial spread of misinformation can lead to negative publicity.

The complexity of fake leaks warrants a closer examination, shedding light on how businesses can effectively tackle associated risks.

What Drives Cybercriminals to Fabricate Data Leaks?

Certain cybercriminal groups, like LockBit, Conti, Cl0p, and others, have gained significant attention, akin to celebrities or social media influencers. These groups operate on platforms like the Dark Web and other shadowy websites, and some even have their own presence on the X platform (formerly Twitter). Here, malicious actors publish details about victimized companies, attempting to extort ransom and setting deadlines for sensitive data release. This may include private business communications, corporate account login credentials, employee and client information. Moreover, cybercriminals may offer this data for sale, enticing other threat actors interested in using it for subsequent attacks.

Lesser-known cybercriminals also seek the spotlight, driving them to create fake leaks. These fabricated leaks generate hype, inducing a concerned reaction from targeted businesses, and also serve as a means to deceive fellow cybercriminals on the black market. Novice criminals are especially vulnerable to falling for this ploy.

Manipulating Databases for Deception: The Anatomy of Fake Leaks

Fake data leaks often materialize as parsed databases, involving the extraction of information from open sources without sensitive data. This process, known as internet parsing or web scraping, entails pulling text, images, links, and other data from websites. Threat actors employ parsing to gather data for malicious intent, including the creation of fake leaks.
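The parsing technique described above can be illustrated with a minimal example using only the Python standard library. The HTML snippet and the class name are invented for illustration; real scrapers fetch live pages and handle far messier markup, but the core idea, extracting text and links from public HTML, is the same.

```python
# Minimal sketch of internet parsing (web scraping) as described in the text:
# pulling text and links out of an HTML page. The hardcoded snippet below is
# a stand-in for a fetched public profile page.
from html.parser import HTMLParser

class LinkAndTextExtractor(HTMLParser):
    """Collect href attributes and visible text while parsing HTML."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        # Record the destination of every <a href="..."> encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        # Keep non-whitespace text content.
        if data.strip():
            self.text.append(data.strip())

page = '<p>Public profile of <a href="/users/alice">Alice</a></p>'
extractor = LinkAndTextExtractor()
extractor.feed(page)
print(extractor.links)  # ['/users/alice']
print(extractor.text)   # ['Public profile of', 'Alice']
```

Nothing in this process touches non-public data, which is exactly why a "leak" assembled this way contains only information that was already visible to anyone.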

In 2021, a prominent business networking platform encountered a similar case. Alleged user data was offered for sale on the Dark Web, but subsequent investigations revealed it was an aggregation of publicly accessible user profiles and website data, rather than a data breach. This incident garnered media attention and interest within the Dark Web community.

When offers arise on the Dark Web, claiming to provide leaked databases from popular social networks like LinkedIn, Facebook, or X, they are likely to be fake leaks containing information already publicly available. These databases may circulate for extended periods, occasionally sparking new publications and causing alarm among targeted firms.

According to Kaspersky Digital Footprint Intelligence, the Dark Web saw an average of 17 monthly posts about social media leaks from 2019 to mid-2021. However, this figure surged to an average of 65 monthly posts after a significant case in the summer of 2021. Many of these posts, as per their findings, may be reposts of the same database.

Old leaks, even genuine ones, can serve as the foundation for fake leaks. Presenting outdated data leaks as new creates the illusion of widespread cybercriminal access to sensitive information and ongoing cyberattacks. This strategy helps cybercriminals establish credibility among potential buyers and other actors within underground markets.

Similar instances occur frequently within the shadowy community, where old or unverified leaks resurface. Data that's several years old is repeatedly uploaded onto Dark Web forums, sometimes offered for free or a fee, masquerading as new leaks. This not only poses reputation risks but also compromises customer security.

Mitigating Fake Leaks: Business Guidelines

Faced with a fake leak, panic is a common response due to the ensuing public attention. Swift identification and response are paramount. Initial steps should include refraining from engaging with attackers and conducting a thorough investigation into the reported leak. Verification of the source, cross-referencing with internal data, and assessing information credibility are essential. Collecting evidence to confirm the attack and compromise is crucial.

For large businesses, data leaks, including fake ones, are a matter of "when," not "if." Transparency and preparation are key in addressing such substantial challenges. Developing a communication plan in advance for interactions with clients, journalists, and government agencies is beneficial.

Additionally, constant monitoring of the Dark Web enables detection of new posts about both fake and real leaks, as well as spikes in malicious activity. Due to the automation required for Dark Web monitoring and the potential lack of internal resources, external experts often manage this task.
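At its core, the monitoring described above is a matching problem: scan incoming post titles for an organization's names and domains. The sketch below shows only that matching step, with an entirely hypothetical post feed and company name; production monitoring pipelines ingest many sources, deduplicate reposts of the same database, and are typically operated by external specialists.

```python
# Illustrative sketch of the watchlist-matching step in Dark Web monitoring.
# The posts and the "Acme Corp" watchlist are hypothetical examples.

def flag_mentions(posts, watchlist):
    """Return posts whose title mentions any watched term (case-insensitive)."""
    terms = [t.lower() for t in watchlist]
    return [p for p in posts if any(t in p["title"].lower() for t in terms)]

posts = [
    {"id": 1, "title": "Selling fresh Acme Corp customer database"},
    {"id": 2, "title": "Free combo list, mixed sources"},
    {"id": 3, "title": "acme-corp.com employee logins"},
]
alerts = flag_mentions(posts, ["Acme Corp", "acme-corp.com"])
print([p["id"] for p in alerts])  # [1, 3]
```

A spike in the number of flagged posts over time is itself a useful signal, mirroring the jump from 17 to 65 monthly posts noted in the Kaspersky figures above.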

Furthermore, comprehensive incident response plans, complete with designated teams, communication channels, and protocols, facilitate swift action if such cases arise.

In an era where data leaks continuously threaten businesses, proactive and swift measures are vital. By promptly identifying and addressing these incidents, conducting meticulous investigations, collaborating with cybersecurity experts, and working with law enforcement, companies can minimize risks, safeguard their reputation, and uphold customer trust.

Warcraft Fans Trick AI with Glorbo Hoax

Ambitious Warcraft fans have tricked an AI article bot into writing about the mythical character Glorbo in an amusing and ingenious turn of events. The incident, which unfolded on Reddit, demonstrates the creativity of the gaming community as well as the limitations of artificial intelligence when it comes to fact-checking and information verification.

The hoax gained popularity after a group of Reddit users decided to fabricate a thorough backstory for a fictional character in the World of Warcraft realm to test the capabilities of an AI-powered article generator. A complex background was given to the made-up gnome warlock Glorbo, along with a made-up storyline and special magic skills.

The Glorbo enthusiasts were eager to see if the AI article bot would fall for the scam and create an article based on the complex story they had created. To give the story a sense of realism, they meticulously edited the narrative to reflect the tone and terminology commonly used in gaming media.

To their delight, the experiment worked: the piece produced by the AI not only chronicled Glorbo's alleged in-game exploits but also included references to the Reddit post, portraying the character as though it were a real part of the Warcraft universe. The whimsical invention was presented as news because the AI could not tell the difference between factual and fictional content.

The information about this practical joke swiftly traveled throughout the gaming and social media platforms, amusing and intriguing people about the potential applications of AI-generated material in the field of journalism. While there is no doubt that AI technology has transformed the way material is produced and distributed, it also raises questions about the necessity for human oversight to ensure the accuracy of information.

As a result of the experiment, it becomes evident that AI article bots, while efficient in producing large volumes of content, lack the discernment and critical thinking capabilities that humans possess. Dr. Emily Simmons, an AI ethics researcher, commented on the incident, saying, "This is a fascinating example of how AI can be fooled when faced with deceptive inputs. It underscores the importance of incorporating human fact-checking and oversight in AI-generated content to maintain journalistic integrity."

The amusing incident serves as a reminder that artificial intelligence technology is still in its infancy and that, as it matures, tackling misinformation and deception must be a top priority. While AI can certainly help with content creation, it cannot replace human context, understanding, and judgment.

Glorbo's creators are thrilled with the result and hope that the humorous episode will encourage discussions about responsible AI use and the dangers of relying solely on automated systems for journalism and content creation.




ChatGPT's Reliability is Under Investigation by the FTC

The Federal Trade Commission (FTC) has recently launched an investigation into ChatGPT, the popular language model developed by OpenAI. This move comes as a stark reminder of the growing concerns surrounding the potential pitfalls of artificial intelligence (AI) and the need for stringent regulations to protect consumers. The investigation was initiated in response to potential violations of consumer protection laws, raising important questions about the transparency and accountability of AI technologies.

According to the Washington Post, the FTC's investigation focuses on OpenAI's ChatGPT after it was allegedly involved in instances of providing misleading information to users. The specific incidents leading to the investigation have not been disclosed yet, but the potential consequences of AI systems spreading false or harmful information have raised alarms in both the tech industry and regulatory circles.

As AI technologies become more prevalent in our daily lives, concerns regarding their trustworthiness and accuracy have grown. ChatGPT, with its wide usage in various applications such as customer support, content creation, and online interactions, has emerged as one of the most prominent examples of AI's impact on society. However, incidents of misinformation and biased responses from the AI model have cast doubts on its reliability, leading to the FTC's intervention.

Lina Khan, the Chairwoman of the FTC, highlighted the importance of the investigation, stating, "AI systems have the potential to significantly impact consumers and their decision-making. It is vital that we understand the extent to which these technologies can be trusted and how they may influence individuals' choices."

OpenAI, the organization behind ChatGPT, has acknowledged the FTC's investigation and expressed cooperation with the authorities in a statement reported by Barron's. "We take these allegations seriously and are committed to ensuring the utmost transparency and accountability of our AI systems. We will collaborate fully with the FTC to address any concerns and ensure consumer confidence in our technology," the statement read.

The FTC inquiry highlights the requirement for thorough and uniform standards for AI systems. The absence of clear regulations and control increases potential risks for consumers as AI becomes increasingly ingrained in our daily lives. It is crucial for developers and regulatory agencies to collaborate in order to construct strong frameworks that assure ethical AI development and usage if they are to sustain the public's trust and confidence in AI technologies.

The FTC's inquiry serves as a warning that artificial intelligence systems like ChatGPT can be unreliable, even though they have shown great promise in improving many aspects of daily life. The creation and use of these technologies remain ultimately the responsibility of humans, so it is critical to strike a balance between innovation and ethical considerations.

Growing Threat From Deep Fakes and Misinformation

 


The prevalence of synthetic media is rising as a result of tools that make it simple to produce and distribute convincing artificial images, videos, and audio. According to Sentinel, the propagation of deepfakes increased by 900% in 2020 over the previous year.

With the rapid advancement of technology, cyber-influence operations are becoming more complex. Methods employed in conventional cyberattacks are increasingly being applied to cyber influence operations, both overlapping with and extending them. In addition, we have seen growing nation-state coordination and amplification.

Tech firms in the private sector could unintentionally support these initiatives. Enablers include companies that register domain names, host websites, promote content on social media and search engines, direct traffic, and fund these activities through digital advertising.

Deepfakes are created using deep learning, a particular type of artificial intelligence. Deep learning algorithms can replace a person's likeness in a picture or video with someone else's face. Deepfake videos of Tom Cruise on TikTok captivated the public in 2021. The earliest celebrity deepfakes were created by face-swapping photographs of celebrities found online.

There are three stages of cyber influence operations, starting with prepositioning, in which false narratives are introduced to the public. The launch phase involves a coordinated campaign to spread the narrative through media and social channels, followed by the amplification phase, where media and proxies spread the false narrative to targeted audiences. The consequences of cyber influence operations include market manipulation, payment fraud, and impersonation. However, the most significant threat is trust and authenticity, given the increasing use of artificial media that can dismiss legitimate information as fake.

How Businesses Can Defend Against Synthetic Media

Deepfakes and synthetic media have become an increasing concern for organizations, as they can be used to manipulate information and damage reputations. To protect themselves, organizations should take a multi-layered approach.
  • Firstly, they should establish clear policies and guidelines for employees on how to handle sensitive information and how to verify the authenticity of media. This includes implementing strict password policies and data access controls to prevent unauthorized access.
  • Secondly, organizations should invest in advanced technology solutions such as deepfake detection software and artificial intelligence tools to detect and mitigate any threats. They should also ensure that all systems are up-to-date with the latest security patches and software updates.
  • Thirdly, organizations should provide regular training and awareness programs for employees to help them identify and respond to deepfake threats. This includes educating them on the latest deepfake trends and techniques, as well as providing guidelines on how to report suspicious activity.
Furthermore, organizations should have a crisis management plan in place in case of a deepfake attack. This should include clear communication channels and protocols for responding to media inquiries, as well as an incident response team with the necessary expertise to handle the situation. By adopting a multi-layered approach to deepfake protection, organizations can reduce the risks of synthetic media attacks and protect their reputation and sensitive information.
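One small, concrete piece of the technical layer above can be sketched with standard tooling: verifying that a received media file matches a known-good original by comparing cryptographic digests. This is an illustrative example with made-up byte strings, and it only catches altered copies of assets you already published; detecting novel deepfakes requires the dedicated detection software mentioned earlier.

```python
# Sketch of integrity verification for published media: compare a received
# file's SHA-256 digest against a register of digests for official originals.
# The byte strings below stand in for real media files.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical register of digests for officially published media assets.
original = b"official-press-video-bytes"
register = {sha256_digest(original)}

received_ok = b"official-press-video-bytes"
received_tampered = b"official-press-video-bytes-EDITED"

print(sha256_digest(received_ok) in register)        # True: matches an original
print(sha256_digest(received_tampered) in register)  # False: altered copy
```

Because any single-byte change produces a completely different digest, this check gives employees a cheap, unambiguous way to confirm whether a circulating clip is the version the organization actually released.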