
Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though it aims to redact or remove personal details from AI training data sets. Users can opt out via the "Data Privacy" section under "Settings and Privacy." However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

Grooming Cases Reach Unprecedented Heights Amidst Regulatory Delays

 


Campaigners are urging the Government not to delay the Online Safety Bill any further, after thousands of online grooming crimes were recorded during the wait for updated online safety laws.

The long-awaited bill has gone through repeated changes and delays on its way to becoming law in the autumn. Ministers have also come under fire in recent days from tech companies over what they see as an attempt by the government to undermine encryption technology.

The NSPCC has called for support for the bill after announcing that, over the last six years, UK police forces have recorded 34,000 online grooming offences against children and young people. The charity began calling for more robust online safety regulations to protect users in 2017.

NSPCC statistics, based on data obtained from 42 UK police forces, show that 6,350 offences of sexual communication with a child were recorded last year, an increase of 82 per cent since the offence was introduced in 2017/18.

Moreover, in cases of social media grooming over the last six years where the victim's gender could be determined, 83 per cent of victims were girls, the charity noted. According to the police data, more than 150 apps, games and websites were used to target children. The NSPCC believes the Bill is indispensable if children are to be protected from abuse and neglect.

If the law passes, firms and big tech bosses will face stricter responsibilities for protecting young users. The NSPCC, however, wants assurances that new technologies, including artificial intelligence, will also be covered by the legislation.

A study of the data shows that 73 per cent of the reported crimes involved Snapchat or an associated website, and 5,500 of the incidents involved children between the ages of 5 and 12. With only a few weeks of the summer recess remaining, parliament will resume sessions to wrap up the debate on the bill, which is expected to be passed soon after.

Meanwhile, a severe impasse in the UK is threatening the future of end-to-end encryption. A growing number of tech companies offer encrypted messaging to satisfy users' demands for more privacy: a message can be viewed only by the sender and the recipient, and in most cases cannot be accessed even by the tech companies themselves.
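
As a rough illustration of how end-to-end encryption achieves this, here is a minimal sketch in Python using the PyNaCl library (an assumption chosen for illustration, not any messaging app's actual implementation): each party holds a private key, the sender encrypts against the recipient's public key, and only the recipient's private key can decrypt the result, so a server relaying the ciphertext cannot read it.

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; the private halves never leave their devices.
    sender_key = PrivateKey.generate()
    recipient_key = PrivateKey.generate()

    # The sender encrypts with their private key and the recipient's public key.
    sender_box = Box(sender_key, recipient_key.public_key)
    ciphertext = sender_box.encrypt(b"meet at noon")

    # Only the recipient, holding the matching private key, can decrypt it;
    # a platform relaying the ciphertext sees only random-looking bytes.
    recipient_box = Box(recipient_key, sender_key.public_key)
    print(recipient_box.decrypt(ciphertext))  # b'meet at noon'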

While most of us would agree that privacy is something we cherish, there is a grave element of risk that cannot be ignored in pursuing it. On many platforms these privacy features are available to everyone, and the platforms argue they offer extra protection for people such as victims of domestic abuse, journalists, and political activists. They also claim that adding a backdoor to their services would undermine the security of their systems for everybody.

Although the tech industry and legislators agree that something needs to change, the trade-off between privacy and security has prevented any meaningful progress from being made.

The latest draft of the Online Safety Bill demands a backdoor through which the authorities would be able to access social media services for the purpose of lawful surveillance.

Tech companies worry, however, that loosening such protections could give hackers and data thieves an opening to exploit loopholes and wreak havoc on our sensitive information. Social media platforms are generally seen as preferring to develop their own safety precautions rather than take a proactive approach to preventing the spread of child sexual abuse material (CSAM). They also use updates to tighten their grip on other forms of harmful and age-restricted content, so that children do not encounter it.

Even with each company's individual efforts, the statistics indicate that the epidemic of online child grooming continues to worsen, exacerbated by social media's unintentional role as a smokescreen for online child maltreatment.

NSPCC chief executive Sir Peter Wanless commented that the study demonstrates the scale of child abuse taking place on social media and the human cost of fundamentally unsafe products. With so many offences committed against children online, it is imperative to remember how important the Online Safety Bill is and why children need the ground-breaking protection it will provide.

With Silicon Valley giants and government regulators continuing to butt heads, there has been speculation that the UK's communications regulator, Ofcom, might step in and bring about changes affecting the entire industry. If the right balance can be struck, it will be interesting to see whether both parties can benefit. What is beyond doubt is that getting the job done will unequivocally require all hands on deck.

Phishing Scams Exploit American Express, Snapchat Open-Redirect Flaws

Attackers have exploited open-redirect vulnerabilities affecting the American Express and Snapchat domains to send phishing emails aimed at users of Google Workspace and Microsoft 365.

An "open redirect" is a software vulnerability that makes it easier for attackers to point users toward harmful resources they control.

Vulnerabilities

An open redirect occurs when a website fails to validate user input, allowing attackers to craft URLs on reputable domains that route visitors to malicious sites. Because the initial domain name in the altered link is a well-known one, like American Express or Snapchat, victims are inclined to trust it.

To the untrained eye the link may seem safe, because the first domain name in the modified link really is the original site's. According to email security firm INKY, the trusted domain, such as American Express or Snapchat, serves only as a temporary landing page before the user is redirected to a malicious website.
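
To make the mechanics concrete, here is a minimal sketch (the domains and parameter name are hypothetical, not the actual American Express or Snapchat URLs) showing how the trusted domain a victim sees can differ from the real destination hidden in a query parameter:

    from urllib.parse import urlparse, parse_qs

    # Hypothetical link abusing an open redirect: the visible domain is trusted,
    # but the "url" parameter silently forwards the victim somewhere else.
    link = "https://trusted-brand.example.com/redirect?url=https://credential-harvest.example.net/login"

    parsed = urlparse(link)
    params = parse_qs(parsed.query)

    print("Visible domain:    ", parsed.netloc)                     # trusted-brand.example.com
    print("Hidden destination:", params.get("url", ["(none)"])[0])  # credential-harvest.example.net/...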

In the Snapchat group of attacks, phishing emails impersonating DocuSign, FedEx, and Microsoft led victims to sites that harvest user credentials. INKY researchers say that, over the course of two and a half months, 6,812 phishing emails sent from compromised Google Workspace and Microsoft 365 accounts used the Snapchat open redirect.

Snapchat was informed of the vulnerability through the Open Bug Bounty platform on August 4, 2021, but at the time of the report nothing had been done to fix it.

Compounding the problem, the American Express open-redirect vulnerability appeared in more than 2,000 phishing emails in only two days in July. According to the report, that vulnerability has since been patched, and any user who opens the link now is taken to an error page on the company's legitimate website.

Prevention measures

Roger Kay of INKY outlined simple measures for preventing open-redirect attacks:
  • Domain owners can take a few easy steps to reduce open-redirect attacks. Ideally, avoid using redirection in the site architecture at all. Where redirection is required for business reasons, build an allowlist of permitted safe links to limit open-redirect misuse (a sketch of this approach follows this list).
  • Additionally, domain owners have the option to display caution about external links before forwarding viewers to external websites.
  • Users should be on the lookout for URLs that include things like "url=," "redirect=," "external-link," or "proxy" as they explore websites online. These strings can suggest that a reputable domain might reroute traffic to another website.
  • Additionally, recipients of emails with links should look for repeated instances of "http" in the URL, another possible sign of redirection.
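
As a rough sketch of the allowlist and URL-inspection advice above (hostnames and helper names are hypothetical, not INKY's or any platform's actual code), the two ideas might look like this in Python:

    from urllib.parse import urlparse

    # Hypothetical allowlist of redirect destinations a site operator trusts.
    ALLOWED_REDIRECT_HOSTS = {"partners.example.com", "help.example.com"}

    def resolve_redirect(requested_url: str, fallback: str = "/") -> str:
        """Server-side mitigation: honour a redirect only if its host is on the
        allowlist; otherwise fall back to an internal page."""
        host = urlparse(requested_url).netloc.lower()
        return requested_url if host in ALLOWED_REDIRECT_HOSTS else fallback

    # Markers the advice above lists as possible signs of a redirect in a link.
    SUSPICIOUS_MARKERS = ("url=", "redirect=", "external-link", "proxy")

    def looks_like_open_redirect(link: str) -> bool:
        """User-side heuristic: flag links containing redirect-style parameters
        or more than one 'http' (a second, embedded destination URL)."""
        lowered = link.lower()
        return any(m in lowered for m in SUSPICIOUS_MARKERS) or lowered.count("http") > 1

    print(resolve_redirect("https://credential-harvest.example.net/login"))           # "/"
    print(looks_like_open_redirect("https://trusted.example.com/r?url=https://x.y"))  # True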

Google Maps…Creepy or Useful?



Whether on Android or iPhone, there is no denying that Google is there for all of us, keeping a log of our location data in a "Timeline" that shows exactly where we've been. In some cases that is amazingly valuable and helpful; in others it's downright creepy.

The level of detail is unsettling: precisely when the user left for home and arrived, the exact route taken along the way, pictures taken at specific locations, and more.

It will show whether they were driving, walking or on a train, along with any pit stops made during the journey. One example includes a user's stop for lunch and a meeting they took with Snapchat on the Upper West Side earlier in the day.



Zoomed in, one can see the exact course taken to arrive and where the car was parked.


There is no real reason Google needs to know this much about any user, unless they genuinely value features like Google's recommendations based on places they've been.

So there are a few ways the user can reclaim their privacy. First, here's how to delete everything Google Maps currently knows about them:

  • Open Google Maps on your iPhone or Android phone.
  • Tap your profile picture on the top-right. 
  • Choose “Your data in Maps.” 
  • Choose “See & Delete activity.” 
  • Hit the menu button on the top-right of the page and select “Settings.” 
  • Choose “Delete all location history.” 


And here's how to set things up so Google automatically deletes this location data every three months:

  • Open Google Maps on iPhone or Android. 
  • Tap the menu bar on the top-left of the app. 
  • Choose “Your Timeline.” 
  • Tap the three dots on the top-right of the screen. 
  • Choose “Settings and privacy.” 
  • Select “Automatically delete location history.” 
  • Change the setting from “Keep until I delete manually” to “Keep for 18 months” or “Keep for 3 months.” 


Or, if the user doesn't mind Google tracking them day to day but just wants to stop it for a little while, they can simply turn on Incognito mode in Maps by doing this:


  • Open Maps on your iPhone or Android phone. 
  • Tap your profile picture on the top-right. 
  • Choose “Turn on Incognito mode.”



Congested Google Servers Render Snapchat and YouTube Inaccessible!



The eastern United States was hit by sudden congestion on Google's servers, which left popular apps like YouTube and Snapchat inaccessible.


Google addressed the matter almost immediately, stating that it was dealing with "high levels of network congestion".

This was cited as the reason the applications stopped working. The congestion also affected many other services across Google Cloud, YouTube and G Suite.

Slow performance and sporadic errors were other repercussions of the network congestion. Google engineers were reported to be halfway through the restoration process.


Twitter blew up with questions and worries from social media users as the applications stopped working as smoothly as usual.

YouTube and Snapchat, for their part, took to their Twitter accounts to acknowledge the issue.

Cloud computing is one of Google's most profitable services, but it faces serious rivalry from other technology companies such as Microsoft and Amazon.


Google Wins Dismissal of a Lawsuit over the Biometric Privacy Act


The world's largest search engine faced a lawsuit from its users alleging that Google had violated their privacy by using facial recognition software to analyze their photos without consent.

U.S. District Judge Edmond E. Chang in Chicago dismissed the case, citing an absence of "concrete injuries" to the plaintiffs.

The original suit was filed in March 2016, when a user sued Google for allegedly uploading their photos to Google Photos and scanning them with facial recognition software to create a template of their face without permission, in violation of a unique Illinois law.

Although Snapchat and Facebook have also faced lawsuits under the same law, Google is the first of these well-known companies to win a dismissal of a lawsuit over the biometric privacy act.

Google's victory comes amid public backlash against the U.S. technology giants over misuse of user information and growing scrutiny of their privacy policies.