Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, unease is spreading about how digital falsehoods might influence voter behavior. False narratives on social platforms may skew perception, according to officials and scholars alike. As artificial intelligence advances, deceptive content grows more convincing, slipping past scrutiny. Trust in core societal structures risks erosion under such pressure. Warnings come not just from academics but also from community leaders watching real-time shifts in public sentiment.  

Fake messages have recently circulated online, pretending to be from the City of York Council. Though they looked real, officials later stated these ads were entirely false. One showed a request for people willing to host asylum seekers; another asked volunteers to take down St George flags. A third offered work fixing road damage across neighborhoods. What made them convincing was their design - complete with official logos, formatting, and contact information typical of genuine notices. 

Without close inspection, someone scrolling quickly might believe them. Despite their authentic appearance, none of the programs mentioned were active or approved by local government. The resemblance to actual council material caused confusion until authorities stepped in to clarify. When BBC Verify examined the images, blurred logos stood out immediately, and wrong fonts appeared alongside misspelled words - flaws that often point toward artificial creation.

Details such as fingers appeared twisted or incomplete - a frequent flaw in computer-generated visuals. One poster included an email address tied to a real council employee, though that person had no knowledge of the material. Websites referenced in some flyers simply did not exist. Even so, plenty of individuals passed the content along without questioning its truth. A single fabricated post managed to spread through networks totaling over 500,000 followers. The false appearances held strong appeal despite clear warning signs.

What spreads fast online isn’t always true - Clare Douglas, head of City of York Council, pointed out how today’s tech amplifies old problems in new ways. False stories once moved slowly; now they race across devices at a pace that overwhelms fact-checking efforts. Trust fades when people see conflicting claims everywhere, especially around health or voting matters. Institutions lose ground not because facts disappear, but because attention scatters too widely. When doubt sticks longer than corrections, participation dips quietly over time.  

Ahead of public meetings, tensions surfaced in various regions. Misinformation targeting asylum seekers and councils emerged online in Barnsley, according to Sir Steve Houghton, the council's leader. False stories spread further thanks to influencers who keep sharing them - profit often outweighs correction. Although government outlets issued clarifications, distorted messages continue to flood digital spaces. Their sheer number, combined with how long they linger, threatens trust between groups and raises risks for everyday security. Not everyone checks facts these days, according to Ilya Yablokov of the University of Sheffield's Disinformation Research Cluster, and because AI makes faking believable content easier than ever, producing it now takes little effort.

With just a small setup, someone can flood online spaces fast. What helps spread falsehoods is how busy people are - they skip checking details before passing things along. Instead, gut feelings or existing opinions shape what gets shared. Fabricated stories spreading locally might cost almost nothing to create, yet their impact on democracy can be deep. 

As misleading accounts reach more voters, specialists emphasize skills like questioning sources, checking facts, and understanding media messages - skills that help preserve confidence in public processes and support thoughtful engagement during elections.

The New Content Provenance Report Will Address GenAI Misinformation


The GenAI problem 

Today's information environment includes a wide range of communication. Social media platforms have enabled reposting and commenting, and they are useful for both content consumers and creators, but they come with their own challenges.

The rapid adoption of generative AI has led to a significant increase in misleading content online. AI chatbots have a tendency to generate false information that has no factual backing.

What is AI slop?

The internet is filled with AI slop - junk-like content made with minimal human input. There is currently no mechanism to limit the mass production of such harmful or misleading content, which can impact human cognition and critical thinking. This calls for a robust mechanism that addresses the new challenges the current system is failing to tackle.

The content provenance report 

To restore the integrity of digital information, Canada's Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have launched a new report on public content provenance. Provenance means "place of origin." To build stronger trust with external audiences, businesses and organisations must improve the way they manage the source of their information.

The NCSC's chief technology officer said that the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed."

What is next for Content Integrity?

The industry is implementing a few measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA), which benefits from the backing of generative AI and tech giants like Meta, Google, OpenAI, and Microsoft.

Currently, there is a pressing need for interoperable standards across media types such as images, video, and text documents. Although content provenance technologies exist, the field is still at a nascent stage.

What is needed?

The core technology includes genuine timestamps and cryptographically verifiable metadata to prove that content has not been tampered with. But obstacles remain in the development of these secure technologies, such as how and when they are applied.
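
To make the idea concrete, here is a minimal sketch in Python of how cryptographically signed, timestamped metadata can make tampering detectable. The record layout and field names are illustrative assumptions, not the report's design or an actual C2PA manifest; it uses the `cryptography` package.

```python
# A minimal sketch, assuming a hypothetical record layout: bind a content
# hash and timestamp to an Ed25519 signature so any later edit is detectable.
# Requires the `cryptography` package (pip install cryptography).
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_record(content: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Create a signed provenance record for `content`."""
    payload = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the bytes
        "creator": creator,
        "timestamp": int(time.time()),                  # when the record was made
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(blob).hex()}

def verify_record(content: bytes, record: dict, public_key) -> bool:
    """True only if the content still matches the hash and the signature holds."""
    if hashlib.sha256(content).hexdigest() != record["payload"]["sha256"]:
        return False  # the content bytes were altered after signing
    blob = json.dumps(record["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), blob)
        return True
    except InvalidSignature:
        return False  # the metadata itself was tampered with

key = Ed25519PrivateKey.generate()
record = make_record(b"original image bytes", "newsroom-camera-01", key)
print(verify_record(b"original image bytes", record, key.public_key()))  # True
print(verify_record(b"edited image bytes", record, key.public_key()))    # False
```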

The present technology places the burden on the end user to interpret the provenance data.

A provenance system must allow a user to see who or what made the content, when it was made, and what edits or changes were applied. Threat actors have started using GenAI media to make scams believable, and it has become difficult to differentiate between what is fake and what is real - which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others navigate this gray area with more clarity.
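
The edit-history requirement can be sketched the same way. In this illustrative example (the entry fields are assumptions, not a standardized format), each edit record hashes the one before it, so deleting or reordering edits breaks the chain:

```python
# A minimal sketch of a hash-chained edit history, using only the standard
# library. Field names are hypothetical, not part of any provenance standard.
import hashlib
import json
import time

def append_edit(chain: list, action: str, actor: str) -> list:
    """Add one edit record; each record commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"action": action, "actor": actor,
             "timestamp": int(time.time()), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return chain + [entry]

def chain_is_intact(chain: list) -> bool:
    """Recompute every hash; any removed, altered, or reordered edit fails."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

history = append_edit([], "created", "camera-app")
history = append_edit(history, "cropped", "photo-editor")
print(chain_is_intact(history))  # True
```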


India Strengthens Cybersecurity Measures Amid Rising Threats Post-Pahalgam Attack

 

In response to a surge in cyberattacks targeting Indian digital infrastructure following the Pahalgam terror incident, the Indian government has directed financial institutions and critical infrastructure sectors to enhance their cybersecurity protocols. These instructions were issued by the Computer Emergency Response Team (CERT-In), according to a source familiar with the development, Moneycontrol reported.

The precautionary push isn’t limited to government networks — private sector entities are also actively reinforcing their systems against potential cyber threats. “We have been extra alert right from the Pahalgam attack, in terms of ensuring cyber security speedily not just by government agencies but also by the private sector,” the source stated.

CERT-In, India’s central agency for cyber defense, has released advisories to banking institutions and other essential sectors, urging them to tighten their digital safeguards. In addition, the government has engaged with organizations like NASSCOM to facilitate a collaborative cyber alert framework.

Recent attacks primarily involved DDoS, or distributed denial-of-service incidents, which overwhelm servers with excessive traffic, rendering websites inaccessible and potentially causing financial damage. Attempts to deface websites — typically for political messaging — were also reported.
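
As a concrete illustration of one basic defense against such floods, here is a minimal per-client sliding-window rate limiter in Python; the window length and request budget are illustrative assumptions, not values recommended by CERT-In or any vendor.

```python
# A minimal sketch of a per-client sliding-window rate limiter, one basic
# building block behind DDoS mitigation. WINDOW_SECONDS and MAX_REQUESTS
# are illustrative values, not recommended production settings.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # length of the sliding window
MAX_REQUESTS = 100    # request budget per client within the window

_history = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip: str) -> bool:
    """Return False once a client exceeds its budget for the current window."""
    now = time.monotonic()
    timestamps = _history[client_ip]
    # Evict timestamps that have slid out of the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS:
        return False  # over budget: drop or challenge this request
    timestamps.append(now)
    return True
```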

This intensified focus on digital defense follows India’s military action against terrorist hideouts in Pakistan, occurring nearly two weeks after the Pahalgam incident, which resulted in the deaths of Indian tourists in Kashmir.

Moneycontrol previously highlighted that cyber surveillance across India's vital digital infrastructure is being ramped up following the Pahalgam attack and the subsequent Operation Sindoor. Critical sectors and strategic installations are under strict scrutiny to ensure adherence to robust cybersecurity practices.

Amid these developments, misinformation remains a parallel concern. Daily takedown requests under Section 69A of the IT Act have surpassed 1,000, as the government works with social media platforms to curb the spread of fake news, the source noted.

Generative AI Fuels Identity Theft, Aadhaar Card Fraud, and Misinformation in India

 

A disturbing trend is emerging in India’s digital landscape as generative AI tools are increasingly misused to forge identities and spread misinformation. One user, Piku, revealed that an AI platform generated a convincing Aadhaar card using only a name, birth date, and address—raising serious questions about data security. While AI models typically do not use real personal data, the near-perfect replication of government documents hints at training on real-world samples, possibly sourced from public leaks or open repositories. 

This AI-enabled fraud isn’t occurring in isolation. Criminals are combining fake document templates with authentic data collected from discarded paperwork, e-waste, and old printers. The resulting forged identities are realistic enough to pass basic checks, enabling SIM card fraud, bank scams, and more. What started as tools for entertainment and productivity now pose serious risks. Misinformation tactics are evolving too. 

A recent incident involving playback singer Shreya Ghoshal illustrated how scammers exploit public figures to push phishing links. These fake stories led users to malicious domains targeting them with investment scams under false brand names like Lovarionix Liquidity. Cyber intelligence experts traced these campaigns to websites built specifically for impersonation and data theft. The misuse of generative AI also extends into healthcare fraud. 

In a shocking case, a man impersonated renowned cardiologist Dr. N John Camm and performed unauthorized surgeries at a hospital in Madhya Pradesh. At least two patient deaths were confirmed between December 2024 and February 2025. Investigators believe the impersonator may have used manipulated or AI-generated credentials to gain credibility.

Cybersecurity professionals are urging more vigilance. CertiK founder Ronghui Gu emphasizes that users must understand the risks of sharing biometric data, like facial images, with AI platforms. Without transparency, users cannot be sure how their data is used or whether it's shared. He advises precautions such as using pseudonyms, secondary emails, and reading privacy policies carefully - especially on platforms not clearly compliant with regulations like GDPR or CCPA.

A recent HiddenLayer report revealed that 77% of companies using AI have already suffered security breaches. This underscores the need for robust data protection as AI becomes more embedded in everyday processes. India now finds itself at the center of an escalating cybercrime wave powered by generative AI. What once seemed like harmless innovation now fuels identity theft, document forgery, and digital misinformation. The time for proactive regulation, corporate accountability, and public awareness is now—before this new age of AI-driven fraud becomes unmanageable.

The UK Erupts in Riots as Big Tech Stays Silent


 

For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, refusing to publicly address their role in the situation.

Big Tech's Reluctance to Speak

Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.

Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.

Elon Musk's Twitter and the Spread of Misinformation

The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn massive attention. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.

Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.

Experts Weigh In on Big Tech's Silence

Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.

Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.

What Comes Next?

As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.

Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.

A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.


As Deepfake of Sachin Tendulkar Surface, India’s IT Minister Promises Tighter Rules


On Monday, Indian Minister of State for Information Technology Rajeev Chandrasekhar confirmed that the government will notify robust rules under the Information Technology Act to ensure compliance by platforms in the country.

The Union Minister, on X, thanked cricketer Sachin Tendulkar for flagging the video and said that AI-generated deepfakes and misinformation are a threat to the safety and trust of Indian users. He noted that platforms must comply with the advisory issued by the Centre.

"Thank you @sachin_rt for this tweet #DeepFakes and misinformation powered by #AI are a threat to Safety&Trust of Indian users and represents harm & legal violation that platforms have to prevent and take down. Recent Advisory by @GoI_MeitY requires platforms to comply wth this 100%. We will be shortly notifying tighter rules under IT Act to ensure compliance by platforms," Chandrasekhar posted on X

On X, Sachin Tendulkar cautioned his fans and the public that the aforementioned video was fake. He also asked viewers to report any such applications, videos, and advertisements.

"These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and fake news. @GoI_MeitY, @Rajeev_GoI and @MahaCyber1," Tendulkar said on X.

Deepfakes are artificial media that have been digitally manipulated to convincingly swap one person's likeness for another's. The alteration of facial features using deep generative techniques is known as a "deepfake." While the practice of fabricating information is not new, deepfakes use sophisticated AI and machine learning algorithms to edit or create visual and auditory content that can deceive viewers far more easily.

Last month, the government urged all online platforms to abide by the IT rules and mandated companies to notify users about forbidden content transparently and accurately.

The Centre has asked platforms to take urgent action against deepfakes and ensure that their terms of use and community standards comply with the laws and IT regulations in force. The government has made it abundantly clear that any violation will be taken very seriously and could result in legal action against the offending entity.

Reddit to Pay Users for Popular Posts

Reddit, the popular social media platform, has announced that it will begin paying users for their posts. The new system, which is still in its early stages, will see users rewarded with cash for posts that are awarded "gold" by other users.

Gold awards are a form of virtual currency that can be purchased by Reddit users for a fee. They can be given to other users to reward them for their contributions to the platform. Until now, gold awards have only served as a way to show appreciation for other users' posts. However, under the new system, users who receive gold awards will also receive a share of the revenue generated from those awards.

The amount of money that users receive will vary depending on the number of gold awards they receive and their karma score. Karma score is a measure of how much other users have upvoted a user's posts and comments. Users will need to have at least 10 gold awards to cash out, and they will receive either 90 cents or $1 for each gold award.
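
As a rough illustration of how such a payout might be computed, here is a short Python sketch. The 10-award minimum and the two per-award rates follow the figures reported above; the rule for deciding who gets which rate is not public, so the karma cutoff below is purely hypothetical.

```python
# Illustrative sketch of the reported payout rules: at least 10 gold awards
# to cash out, and either $0.90 or $1.00 per award. Which users get the
# higher rate is not specified, so this karma threshold is a hypothetical
# stand-in, not a documented Reddit rule.
HYPOTHETICAL_KARMA_CUTOFF = 100_000

def payout_usd(gold_awards: int, karma: int) -> float:
    """Return the cash-out amount, or 0.0 below the 10-award minimum."""
    if gold_awards < 10:
        return 0.0
    rate = 1.00 if karma >= HYPOTHETICAL_KARMA_CUTOFF else 0.90
    return round(gold_awards * rate, 2)

print(payout_usd(9, 500_000))   # 0.0  (below the cash-out minimum)
print(payout_usd(25, 50_000))   # 22.5 (lower rate)
print(payout_usd(25, 500_000))  # 25.0 (higher rate)
```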

Reddit says that the new system is designed to "reward the best and brightest content creators" on the platform. The company hopes that this will encourage users to create more high-quality content and contribute more to the community.

However, there are also some concerns about the new system. Some users worry that it could lead to users creating clickbait or inflammatory content to get more gold awards and more money. Others worry that the system could be unfair to users who do not have a lot of karma.

One Reddit user expressed concern that the approach will lead people to produce poor-quality content: if they know they can make money from it, they are more likely to upload clickbait or provocative material.

Another Reddit member said that users with low karma may be treated unfairly by the system. According to the user, "Users with more karma will be able to profit more from the system than users with less karma." This, they argued, will make users with lower karma less likely to produce high-quality content, which is unjust.

Reddit has addressed some of the concerns raised about the new system. The company says it will actively monitor the system to make sure users aren't producing low-quality content just to increase their gold award totals. In addition, Reddit states that it will work to make the system equitable for all users, regardless of karma.

According to a Reddit spokesman, "We understand that there are some concerns about the new system. We are dedicated to collaborating with the community to make sure that the system is just and that it inspires users to produce high-quality content."

Reddit's new strategy of compensating users for popular posts marks a dramatic change for the platform. How the system will actually function, and whether it will improve the quality of the platform's content, remains to be seen. Even so, the announcement shows that Reddit is committed to advancing and innovating.

AI Boom: Cybercriminals Winning Early

Artificial intelligence (AI) is ushering in a transformative era across various industries, including the cybersecurity sector. AI is driving innovation in the realm of cyber threats, enabling the creation of increasingly sophisticated attack methods and bolstering the efficiency of existing defense mechanisms.

In this age of AI advancement, the potential for a safer world coexists with the emergence of fresh prospects for cybercriminals. As the adoption of AI technologies becomes more pervasive, cyber adversaries are harnessing its power to craft novel attack vectors, automate their malicious activities, and maneuver under the radar to evade detection.

According to a recent article in The Messenger, the initial beneficiaries of the AI boom are unfortunately cybercriminals. They have quickly adapted to leverage generative AI in crafting sophisticated phishing emails and deepfake videos, making it harder than ever to discern real from fake. This highlights the urgency for organizations to fortify their cybersecurity infrastructure.
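
As a small, concrete example of the kind of defense this calls for, here is a Python sketch of a lookalike-domain check, a common heuristic for flagging phishing senders. The trusted-domain list, similarity threshold, and function names are illustrative assumptions, not a production filter.

```python
# A minimal sketch of a lookalike-domain check, one small phishing heuristic.
# The allowlist and threshold are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "examplebank.com"}  # hypothetical allowlist

def lookalike_score(domain: str) -> float:
    """Highest string similarity between `domain` and any trusted domain."""
    return max(SequenceMatcher(None, domain, t).ratio() for t in TRUSTED_DOMAINS)

def is_suspicious(sender: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain is close to, but not exactly, a trusted one."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match with a trusted domain
    return lookalike_score(domain) >= threshold

print(is_suspicious("alerts@example.com"))    # False (exact trusted match)
print(is_suspicious("alerts@examp1e.com"))    # True  (one-character lookalike)
print(is_suspicious("friend@unrelated.org"))  # False (not similar to trusted)
```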

On a more positive note, the demand for custom chips has skyrocketed, as reported by TechCrunch. As generative AI algorithms become increasingly complex, off-the-shelf hardware struggles to keep up. This has paved the way for a new era of specialized chips designed to power these advanced systems. Industry leaders like NVIDIA and AMD are at the forefront of this technological arms race, racing to develop the most efficient and powerful AI chips.

McKinsey's comprehensive report on the state of AI in 2023 reinforces the notion that generative AI is experiencing its breakout year. The report notes, "Generative AIs have surpassed many traditional machine learning models, enabling tasks that were once thought impossible." This includes generating realistic human-like text, images, and even videos. The applications span from content creation to simulating real-world scenarios for training purposes.

However, amidst this wave of optimism, ethical concerns loom large. The potential for misuse, particularly in deepfakes and disinformation campaigns, is a pressing issue that society must grapple with. Dr. Sarah Rodriguez, a leading AI ethicist, warns, "We must establish robust frameworks and regulations to ensure responsible use of generative AI. The stakes are high, and we cannot afford to be complacent."

The generative AI surge is changing industries and opening unprecedented opportunities. The potential is vast, improving everything from creative processes to data synthesis. But we must be cautious with this technology and deal with the ethical issues it raises. Gaining the full benefits of generative AI will require a careful and balanced approach as we navigate this disruptive period.