
Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though it aims to redact or remove personal details from AI training data sets. Users can opt out by navigating to the "Data Privacy" section under "Settings and Privacy." However, opting out won't affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you're informed and protected from unintended data usage by AI technologies.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information into its artificial intelligence systems without their consent. The complaint was filed by Noyb, a privacy campaign group based in Vienna.

In early September, Ireland's Data Protection Commission (DPC) took legal action against X over the data collection practices it uses to train its artificial intelligence systems. A series of privacy complaints have been filed against X, the company formerly known as Twitter, after it emerged that the platform had been using data from European users to train Grok, its artificial intelligence chatbot, without their consent.

The practice came to light late last month, when a social media user discovered that X had quietly begun processing the posts of European users for AI training. TechCrunch reported that the Irish DPC, which is responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that all users can choose whether Grok, the platform's AI chatbot, can access their public posts.

Users who wish to opt out must uncheck a box in their privacy settings. Despite this, Judge Leonie Reynolds observed that X appeared to have begun processing its EU users' data to train its AI systems on May 7, yet only offered the option to opt out from July 16. She added that not all users had access to the feature when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Its head, Max Schrems, is the privacy activist who successfully challenged Meta's transfer of EU data to the US as a violation of the EU's stringent GDPR rules. That case left Meta with a €1.2 billion fine and significant logistical challenges, and in June, complaints from NOYB forced Meta to pause the use of EU users' data to train its AI systems.

NOYB has another issue it wants to address: it argues that X did not obtain the consent of European Union users before using their data to train Grok. A NOYB spokesperson told The Daily Upside that the complaints could leave the company facing a fine of up to 4% of its annual revenue. Such a penalty would hit harder given that X has far less money to play with than Meta does:

X is no longer a publicly traded company, which makes it difficult to gauge the state of its cash reserves. What is known is that when Musk bought the company in 2022, it took on roughly $25 billion in debt at a very high leverage ratio. In the years since, the banks that helped finance the transaction have had an increasingly difficult time unloading their share of that debt, and Fidelity recently marked down its stake, which hints at how the firm might be valued.

As of last March, Fidelity had marked its stake down 67% from the acquisition price. And even before Musk acquired it, Twitter had struggled for years to remain consistently profitable, a small fish in a big tech pond.

A key goal of NOYB is a full-scale investigation into how X trained Grok, its generative artificial intelligence model, without any consultation with its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems told The Information; they already do this regularly for plenty of other purposes, so it would be entirely possible to do the same for AI training.

The legal action comes only days before Grok 2 is scheduled to launch in beta. Major tech companies have faced mounting ethical challenges in recent years over training models on large amounts of user data. In June 2024, it was widely reported that complaints had been filed against Meta in 11 European countries over new privacy policies signalling the company's intent to use the data generated by each account to train its AI models.

The GDPR is intended to protect European citizens against precisely such unexpected uses of their data, which could affect their right to privacy and freedom from intrusion. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer that held user consent is mandatory in comparable cases involving data used to target ads.

The complaint outlines further concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access one's collected personal data. OpenAI's ChatGPT has been widely criticized over many of the same GDPR concerns.

X's URL Blunder Sparks Security Concerns

 



X, the social media platform formerly known as Twitter, recently grappled with a significant security flaw within its iOS app. The issue involved an automatic alteration of Twitter.com links to X.com links within Xeets, causing widespread concern among users. While the intention behind this change was to maintain brand consistency, the execution resulted in potential security vulnerabilities.

The flaw originated from a feature that indiscriminately replaced any instance of "Twitter" in a URL with "X," regardless of its context. This meant that legitimate URLs containing the word "Twitter" were also affected, leading to situations where users unknowingly promoted malicious websites. For example, a seemingly harmless link like netflitwitter[.]com would be displayed as Netflix.com but actually redirect users to a potentially harmful site.
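The failure mode described is characteristic of a blind substring replacement. As a minimal sketch (the function names and logic here are illustrative assumptions, not X's actual code), compare a naive rewrite with a safer variant that only rewrites when the host really is twitter.com:

```python
from urllib.parse import urlparse, urlunparse

def display_url(url: str) -> str:
    """Naive rewrite resembling the reported flaw: swap the substring
    'twitter.com' for 'x.com' with no check that it is the real domain."""
    return url.replace("twitter.com", "x.com")

def display_url_safe(url: str) -> str:
    """Safer variant: rewrite only when the host is twitter.com itself
    (or a subdomain of it), leaving look-alike domains untouched."""
    parts = urlparse(url)
    host = parts.hostname or ""
    if host == "twitter.com" or host.endswith(".twitter.com"):
        new_netloc = parts.netloc.replace("twitter.com", "x.com")
        return urlunparse(parts._replace(netloc=new_netloc))
    return url

# The naive version disguises a crafted look-alike domain as a trusted brand:
print(display_url("https://netflitwitter.com/login"))       # https://netflix.com/login
print(display_url_safe("https://netflitwitter.com/login"))  # left unchanged
print(display_url_safe("https://twitter.com/user/status/1"))
```

The key design point is that the substitution must be anchored to the parsed hostname, not applied to the raw string, so that domains merely ending in "twitter.com" are never rewritten.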

The implications of this flaw were significant, as it could have facilitated phishing campaigns or distributed malware under the guise of reputable brands such as Netflix or Roblox. Despite the severity of the issue, X chose not to address it publicly, likely in an attempt to mitigate negative attention.

The glitch persisted for at least nine hours, possibly longer, before it was rectified. Subsequent tests confirmed that URLs now display correctly, indicating the issue has been resolved. Notably, the auto-change did not apply when the domain was written in all caps.

This incident underscores the importance of thorough testing and quality assurance in software development, particularly for platforms with large user bases. It serves as a reminder for users to exercise caution when clicking on links, even if they appear to be from trusted sources.

To understand how platforms like X operate and maintain user trust, it helps to consider the broader context of content personalization. X uses profile data to tailor how content is presented, potentially reordering material to better match individual interests, drawing on users' activity across various platforms. While personalization enhances the user experience, incidents like this security flaw highlight the importance of balancing it against user privacy and security.


Scam: Chennai Woman Exposes Cyber Crime Involving Aadhaar Card, Courier, Drugs


Woman discloses scam, alerts netizens

A marketing professional from Chennai has helped others avoid a fresh cybercrime tactic by bringing it to public attention. Lavanya Mohan shared her experience on X (formerly Twitter), describing a call claiming that someone was using her Aadhaar card to smuggle drugs across international borders.

Mohan said she had recently read in the news about two Gurugram residents who were conned out of almost Rs 2 crore by cybercriminals posing as FedEx executives and cybercrime branch officials, calling people and claiming their Aadhaar cards were being used to smuggle drugs into Thailand.

Mohan reveals the "Aadhaar Card Misused For Drug Smuggling" scam

Mohan described her conversation with the fraudsters in a thread posted from her X account, @lavsmohan. The caller, impersonating a customer service agent from a delivery company (FedEx, in Mohan's case), had concocted a story about a package supposedly shipped from Thailand containing drugs, booked using her Aadhaar ID.

The fraudster supplied still more phony details, including shipment information, a forged FIR number, and even a fake employee ID, to heighten the impression of urgency and legitimacy. The caller then warned her about "rising scams" and offered to put her in touch with a customs official to settle the matter.

In her post, Mohan recounted the scammer's script: "Ma'am, if you don't go ahead with the complaint, your Aadhaar will continue to be misused, so let me connect you right away with the cyber crime branch." Her summary of the tactic: "Threatening consequences + urgency = scam."

The Gurugram incident served as a reminder

Mohan said she had learned of the Gurugram news two weeks earlier, when two men lost Rs 1.3 crore and Rs 56 lakh, respectively, to scammers.

But Mohan held her ground and refused to succumb to the conman's manipulation. She declined to speak with the caller any further, withheld all personal information, and hung up after telling them she would wait for the police to contact her directly. She had spotted the warning signs: an unsolicited call, threats of legal consequences, and pressure to act quickly.

Reflecting on the incident, Mohan wrote: "The amount of information he had to provide me is concerning. Their approach is to put you in contact with the police, who then assert that your ID has connections to the criminal underworld." She added, "People are losing their hard-earned money and they can't be blamed because these scams are growing more sophisticated."

FedEx clears the air

After the scams misusing its name came to light on Wednesday, FedEx clarified in a statement that it calls customers about shipped products only when the customer has specifically requested it.

The company's statement went on to caution that anyone receiving strange calls or messages requesting personal information should notify local law enforcement right away and report the incident to the cybercrime authorities.

A similar instance of a "sophisticated" cyber scam was brought to light by Bollywood actress Anjali Patil, who has starred in films including Newton and Mirzya. She was defrauded of Rs 5.79 lakh in a similar, widely publicized "drug parcel scam" in December 2023.


Corporate Accountability: Tech Titans Address the Menace of Misleading AI in Elections

 


On Friday, 20 leading technology companies, including Google, Meta, Microsoft, OpenAI, TikTok, X, Amazon, and Adobe, pledged to take proactive steps to prevent deceptive uses of artificial intelligence from interfering with elections around the world.

In a joint press release, the 20 participating companies committed to "developing tools to detect and address online distributions of artificial intelligence content that is intended to deceive voters."

The companies also committed to educating voters about the use of artificial intelligence and providing transparency around elections worldwide. The head of the Munich Security Conference, where the accord was announced, lauded the agreement as a critical step toward improving election integrity, increasing societal resilience, and fostering trustworthy technology practices.

In 2024, more than 4 billion people are expected to be eligible to cast ballots in over 40 countries. A growing number of experts warn that easy-to-use generative AI tools could be exploited by bad actors in those campaigns to sway voters and influence elections.

Generative artificial intelligence (AI) tools let users produce images, videos, and audio from simple text prompts. Some of these services lack the security measures needed to prevent users from creating content that depicts politicians or celebrities saying things they never said or doing things they never did.

The industry "agreement," intended to reduce voter deception about candidates, election officials, and the voting process, targets AI-generated images, video, and audio. Notably, however, it does not call for an outright ban on such content.

While the agreement is intended to show unity among platforms with billions of users, it mostly outlines efforts already being implemented, such as identifying and labeling AI-generated content.

Concern is growing about how artificial intelligence software could mislead voters and maliciously misrepresent candidates, especially in an election year that will see millions of people head to the polls in countries all around the world.

AI-generated audio has already been used to impersonate President Biden in New Hampshire's January primary, in an attempt to discourage Democrats from voting, and, last September in Slovakia, to purportedly depict a leading candidate claiming to have rigged the election.

The agreement, endorsed by a consortium of 20 corporations, encompasses entities involved in the creation and dissemination of AI-generated content, such as OpenAI, Anthropic, and Adobe, among others. Notably, Eleven Labs, whose voice replication technology is suspected to have been utilized in fabricating the false Biden audio, is among the signatories. 

Social media platforms including Meta, TikTok, and X, formerly known as Twitter, have also joined the accord. Nick Clegg, Meta's President of Global Affairs, emphasized the imperative for collective action within the industry, citing the pervasive threat posed by AI. 

The accord delineates a comprehensive set of principles aimed at combating deceptive election-related content, advocating for transparent disclosure of origins and heightened public awareness. Specifically addressing AI-generated audio, video, and imagery, the accord targets content falsifying the appearance, voice, or conduct of political figures, as well as disseminating misinformation about electoral processes. 

Acknowledged as a pivotal stride in fortifying digital communities against detrimental AI content, the accord underscores a collaborative effort complementing individual corporate initiatives. As per the "Tech Accord to Combat Deceptive Use of AI in 2024 Elections," signatories commit to developing and deploying technologies to mitigate risks associated with deceptive AI election content, including the potential utilization of open-source solutions where applicable.

 Notably, Adobe, Amazon, Arm, Google, IBM, and Microsoft, alongside others, have lent their support to the accord, as confirmed in the latest statement.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several bank servers were reported down on Tuesday, affecting Unified Payments Interface (UPI) transactions throughout the country. Users took to X (formerly Twitter) and other social media platforms to complain that they could not complete transactions. The National Payments Corporation of India confirmed in a post that an outage had caused UPI transactions to fail at some banks.

Downdetector, a website monitoring service, received reports that UPI was not working for Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and others. Reports on social media suggested widespread server outages at banks nationwide, with users flooding the platforms with details of the disruptions. The outage appeared to affect UPI payments made through several banks, with users reporting server problems for HDFC Bank, Bank of Baroda, SBI, and Kotak Mahindra Bank, among others.

Several users also reported technical difficulties with the "Fund Transfer" process at their respective banks. Meanwhile, UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally, by about 1 per cent, from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent to 12.20 billion, from 12.02 billion in December. In November, there were 11.4 billion transactions worth Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
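The month-on-month growth implied by the reported figures can be checked directly from the numbers in the text:

```python
# UPI figures for two consecutive months, as reported:
# value in Rs trillion, volume in billions of transactions
prev_value, latest_value = 18.23, 18.41
prev_count, latest_count = 12.02, 12.20

value_growth = (latest_value - prev_value) / prev_value * 100
count_growth = (latest_count - prev_count) / prev_count * 100

# Both changes round to the reported ~1% (value) and 1.5% (volume)
print(f"value: +{value_growth:.1f}%, volume: +{count_growth:.1f}%")
```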

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals for amounts above a configurable threshold.

The Indian Express reported, citing government sources, that the proposed plan would impose a four-hour window on the first digital payment between two users, particularly for transactions exceeding Rs 2,000.

Elon Musk's X Steps Up: Pledges Legal Funds for Workers Dealing with Unfair Bosses

 


In a recent interview, Elon Musk said that X, his social media platform formerly known as Twitter, would cover users' legal bills and sue on their behalf if they were treated unfairly by their employers for posting or liking something on the platform.

Musk shared no further details about what he considers "unfair treatment" by employers or how users seeking legal counsel would be vetted.

In a follow-up, he stated that the company would fund legal fees regardless of the amount charged. The company, however, has not clarified who qualifies for legal support or how users will be screened for eligibility.

Over the years, social media users, including celebrities and other public figures, have faced trouble with their employers over posts, likes, or reposts made on such platforms.

Musk also announced earlier in the day that his proposed fight with Meta CEO Mark Zuckerberg would be streamed live on the microblogging platform. The two tech titans agreed to a cage match last month after each accepted the other's challenge.

Musk said the Zuck v Musk fight would be live-streamed on X, with all proceeds going to a charity for veterans. In late October, the tech billionaire shared a graph of the latest user count, saying X had reached a new record for monthly users.

X had reached 540 million monthly users at the end of October, he added. Separately, the Daily Wire reported in January that Kara Lynne, a streamer at a gaming company, was fired from her job for following the controversial X account "Libs of TikTok."

The figures come amid organizational changes at the company and an attempt to revive falling advertising revenue. In July, Musk retired the blue bird logo that had been familiar for 17 years, unveiling a new logo and a new name, X, and committing to building an "all-in-one app."

A few weeks earlier, Musk had said the platform's cash flow was negative because advertising revenue had dropped nearly 50 percent and the company carried a large debt load, even though advertising revenue had been expected to recover in June.

Since taking control of the company, Musk has allowed many previously banned users to return, including former President Donald Trump. He has also weakened content moderation policies and fired most of the team responsible for policing hate speech and other potentially harmful content on the site.

Musk's professed commitment to free speech has not always extended to his critics: several journalists who wrote about his companies were temporarily suspended, and an account that tracked his private jet's flight path using publicly available data was banned.

Several reports indicate Musk also publicly fired an employee who criticized him on the platform and laid off colleagues who criticized him in private. Since launching his initial bid to acquire Twitter early last year, he has campaigned on social media against what he calls the "woke mind virus," opposing social causes such as transgender rights.

In June, Musk, who is also Tesla's CEO, announced that "cis" and "cisgender" would be treated as slurs on the app. More broadly, employee terminations over posting or publicly endorsing offensive content on social media have been on the rise, and not only in connection with controversial social issues.

Michelle Serna, a California tech worker, was fired in May after posting a TikTok video while a company meeting could be heard in the background. Meanwhile, the tycoon who bought Twitter for $44 billion last October has seen its advertising business collapse, in part because the company failed to adequately moderate hate speech and allowed previously banned accounts back onto the platform.

According to Musk, his changes are motivated by a desire for free expression, and he has often lashed out at what he views as threats to free speech posed by shifting cultural sensibilities in tech. The Center for Countering Digital Hate (CCDH), a non-profit focused on countering online hate speech, has found that hate speech has flourished on the platform; X disputes this finding and is suing the organization over it.

Trump's Twitter account was reinstated by Musk in December, but the former US president has yet to resume using the platform. He was banned in early 2021 over his role in the January 6 Capitol attack, in which his supporters tried unsuccessfully to overturn the results of the 2020 election. A US media outlet reports that X also recently reinstated Kanye West's account, which had been suspended eight months earlier over an antisemitic post.