
The UK Erupts in Riots as Big Tech Stays Silent


 

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played an outsized role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, refusing to address their role in the situation publicly.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn intense scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


As Deepfakes of Sachin Tendulkar Surface, India’s IT Minister Promises Tighter Rules


On Monday, Indian Minister of State for Information Technology Rajeev Chandrasekhar confirmed that the government will notify robust rules under the Information Technology Act to ensure compliance by platforms in the country.

The Union Minister, posting on X, thanked cricketer Sachin Tendulkar for flagging the video. Chandrasekhar said that AI-generated deepfakes and misinformation are a threat to the safety and trust of Indian users, and noted that platforms must comply with the advisory issued by the Centre.

"Thank you @sachin_rt for this tweet #DeepFakes and misinformation powered by #AI are a threat to Safety&Trust of Indian users and represents harm & legal violation that platforms have to prevent and take down. Recent Advisory by @GoI_MeitY requires platforms to comply wth this 100%. We will be shortly notifying tighter rules under IT Act to ensure compliance by platforms," Chandrasekhar posted on X

On X, Sachin Tendulkar cautioned his fans and the public that the aforementioned video was fake. He also asked viewers to report any such applications, videos, and advertisements.

"These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and fake news. @GoI_MeitY, @Rajeev_GoI and @MahaCyber1," Tendulkar said on X.

Deepfakes are artificial media that have undergone digital manipulation to convincingly swap one person's likeness for another's. The alteration of facial features using deep generative techniques is known as a "deepfake." While fabricating information is nothing new, deepfakes use sophisticated AI and machine learning algorithms to edit or create visual and auditory content that can deceive viewers far more easily.

Last month, the government urged all online platforms to abide by the IT rules and mandated that companies notify users about prohibited content transparently and accurately.

The Centre has asked platforms to take urgent action against deepfakes and ensure that their terms of use and community standards comply with the laws and IT regulations in force. The government has made it abundantly clear that any violation will be taken very seriously and could result in legal action against the offending entity.

Reddit to Pay Users for Popular Posts

Reddit, the popular social media platform, has announced that it will begin paying users for their posts. The new system, which is still in its early stages, will see users rewarded with cash for posts that are awarded "gold" by other users.

Gold awards are a form of virtual currency that can be purchased by Reddit users for a fee. They can be given to other users to reward them for their contributions to the platform. Until now, gold awards have only served as a way to show appreciation for other users' posts. However, under the new system, users who receive gold awards will also receive a share of the revenue generated from those awards.

The amount of money that users receive will vary depending on the number of gold awards they receive and their karma score. Karma score is a measure of how much other users have upvoted a user's posts and comments. Users will need to have at least 10 gold awards to cash out, and they will receive either 90 cents or $1 for each gold award.
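To make the arithmetic concrete, here is a minimal sketch of how such a payout could be estimated. The function name, the flat per-award rate, and the rounding are illustrative assumptions only; the article states just the 10-award minimum and the roughly 90-cent-to-$1 range, not Reddit's actual formula.

```python
# Illustrative sketch only -- not Reddit's actual payout logic.
MIN_AWARDS_TO_CASH_OUT = 10  # minimum mentioned in the article

def estimated_payout(gold_awards: int, rate_per_award: float = 0.90) -> float:
    """Estimate a cash payout from gold awards (hypothetical model).

    gold_awards    -- number of gold awards received in the period
    rate_per_award -- assumed flat USD rate (article cites $0.90 to $1.00)
    """
    if gold_awards < MIN_AWARDS_TO_CASH_OUT:
        return 0.0  # below the cash-out threshold, nothing is paid
    return round(gold_awards * rate_per_award, 2)

print(estimated_payout(25))  # 22.5  (25 awards at $0.90 each)
print(estimated_payout(7))   # 0.0   (under the 10-award minimum)
```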

Reddit says that the new system is designed to "reward the best and brightest content creators" on the platform. The company hopes that this will encourage users to create more high-quality content and contribute more to the community.

However, there are also some concerns about the new system. Some users worry that it could lead to users creating clickbait or inflammatory content to get more gold awards and more money. Others worry that the system could be unfair to users who do not have a lot of karma.

One Reddit user expressed concern that the approach will lead to lower-quality content, arguing that people are more likely to post clickbait or provocative material if they know they can make money from it.

Another Reddit member said that users with low karma may be treated unfairly by the system. "Users with more karma will be able to profit more from the system than users with less karma," the user argued, adding that this would unfairly discourage lower-karma users from producing high-quality content.

Reddit has addressed some of the issues raised about the new system. According to the company, it will actively monitor the system to make sure users aren't producing low-quality content just to increase their gold award totals. In addition, Reddit says it will endeavor to create a system that is equitable to all users, regardless of karma.

According to a Reddit spokesman, "We understand that there are some concerns about the new system. We are dedicated to collaborating with the community to make sure that the system is just and that it inspires users to produce high-quality content."

Reddit's new strategy of paying users for popular posts marks a significant change for the platform. How the system will work in practice, and whether it will improve the quality of the platform's content, remains to be seen. The announcement does, however, show that Reddit is committed to continuing to experiment and innovate.

AI Boom: Cybercriminals Winning Early

Artificial intelligence (AI) is ushering in a transformative era across various industries, including the cybersecurity sector. AI is driving innovation in the realm of cyber threats, enabling the creation of increasingly sophisticated attack methods and bolstering the efficiency of existing defense mechanisms.

In this age of AI advancement, the potential for a safer world coexists with the emergence of fresh prospects for cybercriminals. As the adoption of AI technologies becomes more pervasive, cyber adversaries are harnessing its power to craft novel attack vectors, automate their malicious activities, and maneuver under the radar to evade detection.

According to a recent article in The Messenger, the initial beneficiaries of the AI boom are unfortunately cybercriminals. They have quickly adapted to leverage generative AI in crafting sophisticated phishing emails and deepfake videos, making it harder than ever to discern real from fake. This highlights the urgency for organizations to fortify their cybersecurity infrastructure.

On a more positive note, the demand for custom chips has skyrocketed, as reported by TechCrunch. As generative AI algorithms become increasingly complex, off-the-shelf hardware struggles to keep up. This has paved the way for a new era of specialized chips designed to power these advanced systems. Industry leaders like NVIDIA and AMD are at the forefront of this technological arms race, racing to develop the most efficient and powerful AI chips.

McKinsey's comprehensive report on the state of AI in 2023 reinforces the notion that generative AI is experiencing its breakout year. The report notes, "Generative AIs have surpassed many traditional machine learning models, enabling tasks that were once thought impossible." This includes generating realistic human-like text, images, and even videos. The applications span from content creation to simulating real-world scenarios for training purposes.

However, amidst this wave of optimism, ethical concerns loom large. The potential for misuse, particularly in deepfakes and disinformation campaigns, is a pressing issue that society must grapple with. Dr. Sarah Rodriguez, a leading AI ethicist, warns, "We must establish robust frameworks and regulations to ensure responsible use of generative AI. The stakes are high, and we cannot afford to be complacent."

The generative AI surge is reshaping industries and opening up unprecedented opportunities, from creative workflows to data synthesis. But the technology must be handled with caution, and the ethical issues it raises must be addressed. Realizing its full benefits will require a careful, balanced approach as we navigate this disruptive period.


Decoding Cybercriminals' Motives for Crafting Fake Data Leaks

 

Companies worldwide are facing an increasingly daunting challenge posed by data leaks, particularly due to the rise in ransomware and sophisticated cyberattacks. This predicament is further complicated by the emergence of fabricated data leaks. Instead of genuine breaches, threat actors are now resorting to creating fake leaks, aiming to exploit the situation.

The consequences of such falsified leaks are extensive, potentially tarnishing the reputation of the affected organizations. Even if the leaked data is eventually proven false, the initial spread of misinformation can lead to negative publicity.

The complexity of fake leaks warrants a closer examination, shedding light on how businesses can effectively tackle associated risks.

What Drives Cybercriminals to Fabricate Data Leaks?

Certain cybercriminal groups, like LockBit, Conti, Cl0p, and others, have gained significant attention, akin to celebrities or social media influencers. These groups operate on platforms like the Dark Web and other shadowy websites, and some even have their own presence on the X platform (formerly Twitter). Here, malicious actors publish details about victimized companies, attempting to extort ransom and setting deadlines for sensitive data release. This may include private business communications, corporate account login credentials, employee and client information. Moreover, cybercriminals may offer this data for sale, enticing other threat actors interested in using it for subsequent attacks.

Lesser-known cybercriminals also seek the spotlight, driving them to create fake leaks. These fabricated leaks generate hype, inducing a concerned reaction from targeted businesses, and also serve as a means to deceive fellow cybercriminals on the black market. Novice criminals are especially vulnerable to falling for this ploy.

Manipulating Databases for Deception: The Anatomy of Fake Leaks

Fake data leaks often materialize as parsed databases, involving the extraction of information from open sources without sensitive data. This process, known as internet parsing or web scraping, entails pulling text, images, links, and other data from websites. Threat actors employ parsing to gather data for malicious intent, including the creation of fake leaks.
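To illustrate how little access such "leaks" actually require, here is a minimal web-scraping sketch using the requests and BeautifulSoup libraries. The URL and CSS selectors are placeholders invented for this example; the point is simply that a parsed database is assembled from pages anyone can already view, which is why it is not evidence of a breach.

```python
# Minimal web-scraping sketch: gathers only publicly visible page data.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def scrape_public_profiles(url: str) -> list[dict]:
    """Fetch a public page and extract name/link pairs.

    The URL and the selectors below are placeholders; a real page would
    need its own selectors, and its terms of service and robots.txt
    should be respected.
    """
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    profiles = []
    for card in soup.select("div.profile-card"):  # hypothetical selector
        name = card.select_one("h2")
        link = card.select_one("a")
        profiles.append({
            "name": name.get_text(strip=True) if name else None,
            "url": link["href"] if link and link.has_attr("href") else None,
        })
    return profiles

if __name__ == "__main__":
    # Placeholder URL -- replace with a page you are permitted to scrape.
    print(scrape_public_profiles("https://example.com/people"))
```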

In 2021, a prominent business networking platform encountered a similar case. Alleged user data was offered for sale on the Dark Web, but subsequent investigations revealed it was an aggregation of publicly accessible user profiles and website data, rather than a data breach. This incident garnered media attention and interest within the Dark Web community.

When offers arise on the Dark Web, claiming to provide leaked databases from popular social networks like LinkedIn, Facebook, or X, they are likely to be fake leaks containing information already publicly available. These databases may circulate for extended periods, occasionally sparking new publications and causing alarm among targeted firms.

According to Kaspersky Digital Footprint Intelligence, the Dark Web saw an average of 17 monthly posts about social media leaks from 2019 to mid-2021. However, this figure surged to an average of 65 monthly posts after a significant case in the summer of 2021. Many of these posts, as per their findings, may be reposts of the same database.

Old leaks, even genuine ones, can serve as the foundation for fake leaks. Presenting outdated data leaks as new creates the illusion of widespread cybercriminal access to sensitive information and ongoing cyberattacks. This strategy helps cybercriminals establish credibility among potential buyers and other actors within underground markets.

Similar instances occur frequently within the shadowy community, where old or unverified leaks resurface. Data that's several years old is repeatedly uploaded onto Dark Web forums, sometimes offered for free or a fee, masquerading as new leaks. This not only poses reputation risks but also compromises customer security.

Mitigating Fake Leaks: Business Guidelines

When faced with a fake leak, companies often panic because of the ensuing public attention. Swift identification and response are paramount. Initial steps should include refraining from engaging with attackers and conducting a thorough investigation into the reported leak. Verification of the source, cross-referencing with internal data, and assessing information credibility are essential. Collecting evidence to confirm the attack and compromise is crucial.

For large businesses, data breaches, including fake leaks, are a matter of "when," not "if." Transparency and preparation are key in addressing such substantial challenges. Developing a communication plan beforehand for interactions with clients, journalists, and government agencies is beneficial.

Additionally, constant monitoring of the Dark Web enables detection of new posts about both fake and real leaks, as well as spikes in malicious activity. Due to the automation required for Dark Web monitoring and the potential lack of internal resources, external experts often manage this task.
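The monitoring itself usually boils down to scanning a feed of newly collected forum posts for mentions of the company alongside leak-related keywords. The sketch below shows that idea in a few lines of Python; the post feed, keywords, and company name are all made up for illustration, and a real service would add deduplication, scoring, and analyst review.

```python
# Simplified sketch of automated monitoring for leak-related chatter.
# The post list stands in for a feed already collected from underground
# forums; the keywords and company aliases are illustrative examples.
WATCHED_TERMS = ["database dump", "leaked", "full db", "combo list"]
COMPANY_ALIASES = ["examplecorp", "example-corp.com"]

def flag_post(post: dict) -> bool:
    """Return True if a post mentions the company and a leak keyword."""
    text = post["text"].lower()
    mentions_company = any(alias in text for alias in COMPANY_ALIASES)
    mentions_leak = any(term in text for term in WATCHED_TERMS)
    return mentions_company and mentions_leak

posts = [
    {"id": 1, "text": "Selling leaked full db of ExampleCorp users, 2M rows"},
    {"id": 2, "text": "Anyone have combo lists for gaming sites?"},
]

for post in posts:
    if flag_post(post):
        print(f"Possible leak mention in post {post['id']}: {post['text']}")
```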

Furthermore, comprehensive incident response plans, complete with designated teams, communication channels, and protocols, facilitate swift action if such cases arise.

In an era where data leaks continuously threaten businesses, proactive and swift measures are vital. By promptly identifying and addressing these incidents, conducting meticulous investigations, collaborating with cybersecurity experts, and working with law enforcement, companies can minimize risks, safeguard their reputation, and uphold customer trust.

Warcraft Fans Trick AI with Glorbo Hoax

Ambitious Warcraft fans have tricked an AI article bot into writing about the mythical character Glorbo in an amusing and ingenious turn of events. The incident, which originated on Reddit, demonstrates the creativity of the gaming community as well as the limitations of artificial intelligence when it comes to fact-checking and information verification.

The hoax gained popularity after a group of Reddit users decided to fabricate a thorough backstory for a fictional character in the World of Warcraft realm to test the capabilities of an AI-powered article generator. A complex background was given to the made-up gnome warlock Glorbo, along with a made-up storyline and special magic skills.

The Glorbo enthusiasts were eager to see if the AI article bot would fall for the scam and create an article based on the complex story they had created. To give the story a sense of realism, they meticulously edited the narrative to reflect the tone and terminology commonly used in gaming media.

To their delight, the experiment worked: the piece produced by the AI not only chronicled Glorbo's alleged in-game exploits but also included references to the Reddit post, portraying the character as though it were a real part of the Warcraft universe. The whimsical invention was presented as news because the AI could not tell the difference between factual and fictional content.

The information about this practical joke swiftly traveled throughout the gaming and social media platforms, amusing and intriguing people about the potential applications of AI-generated material in the field of journalism. While there is no doubt that AI technology has transformed the way material is produced and distributed, it also raises questions about the necessity for human oversight to ensure the accuracy of information.

As a result of the experiment, it becomes evident that AI article bots, while efficient in producing large volumes of content, lack the discernment and critical thinking capabilities that humans possess. Dr. Emily Simmons, an AI ethics researcher, commented on the incident, saying, "This is a fascinating example of how AI can be fooled when faced with deceptive inputs. It underscores the importance of incorporating human fact-checking and oversight in AI-generated content to maintain journalistic integrity."

The amusing incident serves as a reminder that artificial intelligence technology is still in its infancy and that, as it develops, tackling problems with misinformation and deception must be a top focus. While AI may surely help with content creation, it cannot take the place of human context, understanding, and judgment.

Glorbo's creators are thrilled with the result and hope that this humorous episode will encourage discussions on responsible AI use and the dangers of relying solely on automated systems for journalism and content creation.




ChatGPT's Reliability is Under Investigation by the FTC

The Federal Trade Commission (FTC) has recently launched an investigation into ChatGPT, the popular language model developed by OpenAI. This move comes as a stark reminder of the growing concerns surrounding the potential pitfalls of artificial intelligence (AI) and the need for stringent regulations to protect consumers. The investigation was initiated in response to potential violations of consumer protection laws, raising important questions about the transparency and accountability of AI technologies.

According to the Washington Post, the FTC's investigation focuses on OpenAI's ChatGPT after it was allegedly involved in instances of providing misleading information to users. The specific incidents leading to the investigation have not been disclosed yet, but the potential consequences of AI systems spreading false or harmful information have raised alarms in both the tech industry and regulatory circles.

As AI technologies become more prevalent in our daily lives, concerns regarding their trustworthiness and accuracy have grown. ChatGPT, with its wide usage in various applications such as customer support, content creation, and online interactions, has emerged as one of the most prominent examples of AI's impact on society. However, incidents of misinformation and biased responses from the AI model have cast doubts on its reliability, leading to the FTC's intervention.

Lina Khan, the Chairwoman of the FTC, highlighted the importance of the investigation, stating, "AI systems have the potential to significantly impact consumers and their decision-making. It is vital that we understand the extent to which these technologies can be trusted and how they may influence individuals' choices."

OpenAI, the organization behind ChatGPT, has acknowledged the FTC's investigation and expressed cooperation with the authorities in a statement reported by Barron's. "We take these allegations seriously and are committed to ensuring the utmost transparency and accountability of our AI systems. We will collaborate fully with the FTC to address any concerns and ensure consumer confidence in our technology," the statement read.

The FTC inquiry highlights the requirement for thorough and uniform standards for AI systems. The absence of clear regulations and control increases potential risks for consumers as AI becomes increasingly ingrained in our daily lives. It is crucial for developers and regulatory agencies to collaborate in order to construct strong frameworks that assure ethical AI development and usage if they are to sustain the public's trust and confidence in AI technologies.

The FTC's inquiry serves as a warning that artificial intelligence systems like ChatGPT can be unreliable, even though they have shown great promise in improving many aspects of daily life. The creation and use of these technologies remain ultimately a human responsibility, so it is critical to strike a balance between innovation and ethical considerations.

Growing Threat From Deep Fakes and Misinformation

 


The prevalence of synthetic media is rising as a result of tools that make it simple to produce and distribute convincing artificial images, videos, and music. According to Sentinel, the propagation of deepfakes increased by 900% in 2020 compared with the previous year.

With the rapid advancement of technology, cyber-influence operations are becoming more complex. The methods employed in conventional cyberattacks are increasingly being applied to cyber influence operations, both overlapping with and extending them. In addition, we have seen growing nation-state coordination and amplification.

Tech firms in the private sector could unintentionally support these initiatives. Companies that register domain names, host websites, advertise content on social media and search engines, direct traffic, and support the cost of these activities through digital advertising are examples of enablers.

Deepfakes are created using deep learning, a particular type of artificial intelligence. Deep learning algorithms can replace a person's likeness in a picture or video with another person's face. Deepfake videos of Tom Cruise on TikTok captured public attention in 2021. The earliest deepfake videos of celebrities were created by face-swapping publicly available photographs of them.

There are three stages of cyber influence operations, starting with prepositioning, in which false narratives are introduced to the public. The launch phase involves a coordinated campaign to spread the narrative through media and social channels, followed by the amplification phase, where media and proxies spread the false narrative to targeted audiences. The consequences of cyber influence operations include market manipulation, payment fraud, and impersonation. The most significant threat, however, is to trust and authenticity, since the growing use of synthetic media makes it easier to dismiss legitimate information as fake.

How Businesses Can Defend Against Synthetic Media

Deepfakes and synthetic media have become an increasing concern for organizations, as they can be used to manipulate information and damage reputations. To protect themselves, organizations should take a multi-layered approach.
  • Firstly, they should establish clear policies and guidelines for employees on how to handle sensitive information and how to verify the authenticity of media. This includes implementing strict password policies and data access controls to prevent unauthorized access.
  • Secondly, organizations should invest in advanced technology solutions such as deepfake detection software and artificial intelligence tools to detect and mitigate any threats. They should also ensure that all systems are up-to-date with the latest security patches and software updates.
  • Thirdly, organizations should provide regular training and awareness programs for employees to help them identify and respond to deepfake threats. This includes educating them on the latest deepfake trends and techniques, as well as providing guidelines on how to report suspicious activity.
Furthermore, organizations should have a crisis management plan in place in case of a deepfake attack. This should include clear communication channels and protocols for responding to media inquiries, as well as an incident response team with the necessary expertise to handle the situation. By adopting a multi-layered approach to deepfake protection, organizations can reduce the risks of synthetic media attacks and protect their reputation and sensitive information.


Smash and Grab: Meta Takes Down Disinformation Campaigns Run by China and Russia

 

Meta, Facebook’s parent company, has confirmed that it has taken down two significant but unrelated ‘disinformation operations’ originating from China and Russia.

The campaigns began at the beginning of May 2022, targeting media users in Germany, France, Italy, Ukraine, and the UK. They attempted to influence public opinion in the West by pushing fake narratives pertaining to US elections and the war in Ukraine.

The campaign spoofed around 60 websites, impersonating legitimate news outlets such as The Guardian in the UK and Bild and Der Spiegel in Germany. The sites not only imitated the format and design of the original news sites but in some cases also copied photos and bylines from their reporters.

“There, they would post original articles that criticized Ukraine and Ukrainian refugees, supported Russia, and argued that Western sanctions on Russia would backfire […] They would then promote these articles and also original memes and YouTube videos across many internet services, including Facebook, Instagram, Telegram, Twitter, petitions websites Change.org and Avaaz, and even LiveJournal” Meta stated in a blog post. 

In the wake of this, Facebook and Instagram have reportedly removed nearly 2,000 accounts, more than 700 pages, and one group. Meta also identified around $105,000 in advertising spend linked to the operation. While Meta has been actively quashing the fake websites, more spoofed sites continue to appear.

However, “It presented an unusual combination of sophistication and brute force,” wrote Meta’s Ben Nimmo and David Agranovich in a blog post announcing the takedowns. “The spoofed websites and the use of many languages demanded both technical and linguistic investment. The amplification on social media, on the other hand, relied primarily on crude ads and fake accounts.”

“Together, these two approaches worked as an attempted ‘smash-and-grab’ against the information environment, rather than a serious effort to occupy it long term.” 

Both operations have now been taken down, as the campaigns violated Meta’s “coordinated inauthentic behaviour” rule, defined as “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation”.

Addressing the situation of emerging fraud campaigns, Ben Nimmo further said, “We know that even small operations these days work across lots of different social media platforms. So the more we can share information about it, the more we can tell people how this is happening, the more we can all raise our defences.”

Hacktivists Behind Theft of 30 Million Accounts Detained in Ukraine

The Security Service of Ukraine's (SSU) cyber division has dismantled a group of hackers responsible for the data theft of roughly 30 million people.

According to the SSU, its cyber branch has dismantled a group of hacktivists who stole 30 million accounts and sold the data on the dark web. The department said the hacker organization sold these accounts for about UAH 14 million ($375,000).

As stated by the SSU, the hackers sold data packs that pro-Kremlin propagandists bought in bulk and then utilized the accounts to distribute false information on social media, generate panic, and destabilize Ukraine and other nations. 

YuMoney, Qiwi, and WebMoney, which are not permitted in Ukraine, were used by the group to receive funds. During raids on the attackers' homes in Lviv, Ukraine, the police discovered and seized numerous hard drives containing stolen personal data, alongside desktop computers, SIM cards, mobile phones, and flash drives.

By infecting systems with malware, the fraudsters were able to gather sensitive data and login credentials. They targeted systems in the European Union and Ukraine. The group's organizer has been placed under investigation under Part 1 of Article 361-2 of the Ukrainian Criminal Code, which covers the unauthorized sale of information with restricted access.

The number of people detained is still unknown, but they are all charged criminally with selling or disseminating restricted-access material stored in computers and networks without authorization. There are lengthy prison terms associated with these offenses.

The gang's primary clients were pro-Kremlin propagandists who utilized the stolen accounts in their destabilizing misinformation efforts in Ukraine and other nations.

The SSU took down five bot farms that spread misinformation around the nation in March and employed 100,000 fictitious social media profiles. A huge bot farm with one million bots was found and destroyed by Ukrainian authorities in August.

The SSU discovered two further botnets in September that were using 7,000 accounts to propagate false information on social media.

Accounts mass-produced with malware are frequently easier to recognize, but by using accounts belonging to real people, with their existing post histories and natural activity, the operators greatly reduce the likelihood that the operation will be discovered.






According to Europol, Deepfakes are Used Frequently in Organized Crime

 

The Europol Innovation Lab recently released its inaugural report, titled "Facing reality? Law enforcement and the challenge of deepfakes", as part of its Observatory function. The paper presents a full overview of the illegal use of deepfake technology, as well as the obstacles faced by law enforcement in identifying and preventing the malicious use of deepfakes, based on significant desk research and in-depth interaction with law enforcement specialists. 

Deepfakes are audio and audio-visual content that "convincingly show individuals expressing or doing activities they never did, or build personalities which never existed in the first place" using artificial intelligence. Deepfakes are being utilized for malevolent purposes in three important areas, according to the study: disinformation, non-consensual obscenity, and document fraud. As the technology advances, such attacks are predicted to become more realistic and dangerous.

  1. Disinformation: Europol provided several examples of how deepfakes could be used to distribute false information, with potentially disastrous results. In the geopolitical domain, for example, producing a phony emergency warning that warns of an oncoming attack. In February, just before the crisis between Russia and Ukraine erupted, the US accused the Kremlin of planning a disinformation scheme to serve as a pretext for an invasion of Ukraine. The technique may also be used to attack corporations, for example, by constructing a video or audio deepfake that makes it appear as if a company's leader engaged in contentious or unlawful conduct. Criminals imitating the voice of the top executive of an energy firm robbed the company of $243,000.
  2. Non-consensual obscenity: According to the analysis, Sensity found that non-consensual obscenity was present in 96 percent of fake videos. This usually entails superimposing a victim's face onto the body of a performer in explicit material, giving the impression that the victim is performing the act.
  3. Document fraud: While current fraud protection techniques are making it more difficult to fake passports, the survey stated that "synthetic media and digitally modified facial photos present a new way for document fraud." These technologies, for example, can mix or morph the faces of the person who owns the passport and the person who wants to obtain one illegally, boosting the likelihood the photo will pass screening, including automatic ones. 

Deepfakes might also harm the court system, according to the paper, by artificially manipulating or producing media to show or deny someone's guilt. In a recent child custody dispute, a mother edited an audio recording of her husband in an attempt to persuade the court that he was abusive.

Europol stated that all law enforcement organizations must acquire new skills and tools to deal properly with these types of threats. These include manual detection strategies, such as looking for inconsistencies, and automatic detection techniques, such as AI-based deepfake detection software of the kind being developed by companies like Facebook and McAfee.
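As a rough illustration of what the automatic approach involves, deepfake detectors are commonly built as binary image classifiers trained on labelled real and fake face crops. The sketch below fine-tunes a pretrained ResNet-18 with PyTorch; the directory layout, hyperparameters, and tiny training loop are assumptions for illustration and are far simpler than the detection systems the report refers to.

```python
# Minimal sketch of an AI-based deepfake detector: a binary image classifier.
# Assumes labelled face crops under data/train/real and data/train/fake;
# production detectors are considerably more sophisticated.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps the subdirectory names ("fake", "real") to class labels.
train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real vs fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```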

It is quite conceivable that malicious threat actors will employ deepfake technology to assist various criminal activities and undertake misinformation campaigns to influence or corrupt public opinion in the months and years ahead. Advances in machine learning and artificial intelligence will continue to improve the software used to make deepfakes.

Misinformation is a Hazard to Cyber Security

 

Most cybersecurity leaders recognize the usefulness of data, but data is merely information. What if the information you've been given is actually false? Or it is deception? What methods does your cybersecurity program use to determine what is real and what isn't?

Ian Hill, Global Director of Cyber Security at Royal BAM Group, defined misinformation as "inaccurate or purposely misleading information." This might be anything from honest mistakes to deceptive advertising to satire carried too far. So, while misinformation isn't necessarily meant to be destructive, it can still cause harm.

The ideas, tactics, and actions used in cybersecurity attacks and misinformation attacks are very similar. Misinformation takes advantage of our cognitive biases and logical fallacies, whereas cyberattacks target computer systems. Misinformation attacks make use of distorted, miscontextualized, or misappropriated information, as well as deepfakes and cheap fakes. To wreak even more harm, nefarious actors combine both types of attack.

Misinformation has the potential to be more damaging than viruses, worms, and other malware. Misinformation operations designed to deceive and damage people can harm individuals, corporations, governments, and society as a whole.

The attention economy and advertisement-centric business models make it possible to launch sophisticated misinformation campaigns that flood information channels and drown out the truth at unprecedented speed and scale. Understanding the agent, message, and interpreter of a specific case of information disorder is critical for organizations to stop it. Find out who's behind it — the "agent" — and what message is being sent. Understanding the attack's target audience — the interpreter — is just as critical.

Cyberattacks have progressed well beyond basic phishing scams into broader campaigns of misconception and deception. Misinformation and disinformation are cybersecurity risks for four reasons, according to Disinfo.eu. They're known as the 4Ts:

  •  Terrain, or the infrastructure that disseminates falsehoods
  •  Tactics, or how the misinformation is disseminated
  •  Targets, or the intended victims of the misinformation that leads to cyberattacks
  •  Temptations, or the financial motivations for disseminating false information in cyberattacks
 
Employees who are educated on how threat actors, ranging from an amateur hacker to a nation-state criminal, spread false information will be less likely to fall for false narratives and harmful untruths. It is now up to cybersecurity to distinguish between the true and the fraudulent.

Facebook Struggles Against Hate Speech and Misinformation, Fails to Take Action


Last month, Facebook CEO Mark Zuckerberg and other executives met with civil rights activists to discuss how the platform handles the rise in hate speech. The activists were unimpressed by Facebook's failure to address hate speech and misinformation, and civil rights groups have since organized an advertising boycott of the social media giant, expressing stark criticism. According to these groups, they have had enough of Mark Zuckerberg's failure to deal with white supremacy, propaganda, and voter suppression on Facebook.


This move to boycott Facebook came in response to Donald Trump's recent posts on the platform, in which he suggested that anti-racism protesters should be met with physical violence and spread misinformation about mail-in voting. Facebook declined to act, saying the posts did not violate its community policies. Even after such incidents, the company insists that its policies are sound and that it simply needs to toughen up its enforcement actions.

"Facebook stands firmly against hate. Being a platform where everyone can make their voice heard is core to our mission, but that doesn't mean it's acceptable for people to spread hate. It's not. We have clear policies against hatred – and we constantly strive to get better and faster at enforcing them. We have made real progress over the years, but this work is never finished, and we know what a big responsibility Facebook has to get better at finding and removing hateful content." "Later this morning, Mark and I, alongside our team, are meeting with the organizers of the Stop Hate for Profit campaign followed by a meeting with other civil rights leaders who have worked closely with us on our efforts to address civil rights," said COO Sheryl Sandberg in her FB post.

In another incident, Facebook refused to take action against T. Raja Singh, an Indian politician from the BJP. According to the Wall Street Journal, the company did not apply its hate speech policies to Singh's Islamophobic remarks. Facebook employees admitted that the politician's statements were enough to warrant terminating his account, but the company refused because, according to a Facebook executive in India, doing so could hurt the company's business in the country.

Twitter Removes Nearly 4,800 Accounts Linked to the Iranian Government

Twitter has removed nearly 4,800 accounts it said were being used by the Iranian government to spread misinformation, the company announced on Thursday.

Iran has made wide use of Twitter to support its political and diplomatic goals.

The step aims to prevent election interference and misinformation.

The social media giant released a transparency report that detailed recent efforts to tamp down on the spread of misinformation by insidious actors on its platform. In addition to the Iranian accounts, Twitter suspended four accounts it suspected of being linked to Russia's Internet Research Agency (IRA), 130 fake accounts associated with the Catalan independence movement in Spain and 33 accounts operated by a commercial entity in Venezuela.

It revealed the deletions in an update to its transparency report.

The 4,800 accounts were not a unified block, said Yoel Roth, Twitter's head of site integrity, in a blog post detailing the actions.

The Iranian accounts were divided into three categories depending on their activities. More than 1,600 accounts were tweeting global news content that supported the Iranian policies and actions. A total of 248 accounts were engaged specifically in discussion about Israel. Finally, a total of 2,865 accounts were banned due to taking on a false persona which was used to target political and social issues in Iran.

Since October 2018, Twitter has been publishing transparency reports on its investigations into state-backed information operations, releasing datasets on more than 30 million tweets.

Twitter has been regularly culling accounts it suspects of election interference from Iran, Russia and other nations since the fallout from the 2016 US presidential election. Back in February, the social media platform announced it had banned 2,600 Iran-linked accounts and 418 accounts tied to Russia's IRA it suspected of election meddling.

“We believe that people and organizations with the advantages of institutional power and which consciously abuse our service are not advancing healthy discourse but are actively working to undermine it,” Twitter said.

WhatsApp Declines to Comply with the Government’s Demand



With general elections scheduled to be held in India a year from now, the Indian Government is taking a strict view of the use of social media platforms like Facebook, Twitter, and WhatsApp to spread false information.

In light of this, it has asked WhatsApp for a way to trace the origin of messages on its platform.

The Facebook-owned firm, however, declined to comply with the government's request, saying that the move would undermine the protection and privacy of WhatsApp users.

Sources in the IT Ministry said the government has made clear that WhatsApp should continue to explore technical solutions so that, in cases of mass circulation of offensive and hateful messages that whip up clashes and crime, the origin of those messages can be traced easily.

The ministry is also seeking a firmer assurance of compliance with Indian law from the company, along with the appointment of a grievance officer backed by a proper framework.

Emphasis has also been placed on the company setting up a local corporate entity, subject to Indian law, within the outlined time period.


Earlier this week, WhatsApp head Chris Daniels met IT Minister Ravi Shankar Prasad to discuss issues such as this one. After the meeting, Mr. Prasad said the government has asked WhatsApp to set up a local corporate entity, provide a technological solution to trace the origin of fake messages circulated through its platform, and appoint a grievance officer.

 “People rely on WhatsApp for all kinds of sensitive conversations, including with their doctors, banks and families. Building traceability would undermine end-to-end encryption and the private nature of WhatsApp, creating potential for serious misuse,” the Facebook-owned firm said on Thursday.

“WhatsApp will not weaken the privacy protections we provide,” a company spokesperson stressed, adding, “Our focus remains working closely with others in India to educate people about misinformation and help keep people safe.”

A month ago, top WhatsApp executives, including COO Matthew Idema, met the IT Secretary and other Indian government officials to outline the various steps the company is taking on this issue.