
Irish Data Protection Commission Halts AI Data Practices at X

 

The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under its powers granted by the Data Protection Act of 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and commit to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, this outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok's training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed over the period from May 7 to August 1 during which the data was processed.

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information to its artificial intelligence systems without their consent, according to Noyb, a Vienna-based privacy campaign group that filed the complaint.

Ireland's Data Protection Commission (DPC) has also brought court proceedings against X over the data collection practices used to train its artificial intelligence systems. A series of privacy complaints has been filed against X, the company formerly known as Twitter, after it emerged that the platform was using data from European users to train its Grok AI chatbot without their consent.

The issue came to light when a social media user discovered that X had quietly begun processing European users' posts for AI training purposes late last month. TechCrunch reported that the Irish Data Protection Commission (DPC), the lead regulator responsible for ensuring X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that all users can choose whether Grok, the platform's AI chatbot, may access their public posts.

Users who wish to opt out must uncheck a box in their privacy settings. Even so, Judge Leonie Reynolds observed that it appeared clear X had begun processing its EU users' data to train its AI systems on May 7, while only offering the opt-out option from July 16. She added that not all users had access to the feature when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Max Schrems, the head of NOYB, is the privacy campaigner who successfully challenged Meta's transfers of EU user data to the US as violating the EU's stringent GDPR rules; that case ultimately cost Meta a €1.2 billion fine as well as considerable logistical headaches. In June, following complaints from NOYB, Meta was forced to pause the use of EU users' data to train its AI systems.

NOYB also argues that X did not obtain the consent of European Union users before using their data to train Grok. A NOYB spokesperson reportedly told The Daily Upside that the company could face a fine of up to 4% of its annual revenue over these complaints. Such a penalty would sting all the more because X has far less money to play with than Meta does:

X is no longer a publicly traded company, which makes it difficult to gauge the state of its cash reserves. What is known is that when Musk bought the company in 2022, the deal loaded it with roughly $25 billion in debt at a very high leverage ratio. In the years since, the banks that helped finance the transaction have had an increasingly difficult time offloading that debt, and Fidelity recently marked down its stake, giving a hint as to how the firm might be valued.

As of last March, Fidelity's stake was valued at 67% less than at the time of the acquisition. Even before Musk bought Twitter, the company had struggled for years to remain consistently profitable, a small fish in a big tech pond.

A key goal of NOYB is a full-scale investigation into how X was able to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems told The Information; they already do this regularly for plenty of other purposes, so it would be entirely possible to ask for consent for AI training in the same way.
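To make Schrems' point concrete, here is a minimal, purely hypothetical sketch of what such a yes/no consent gate might look like; the data model and function names are illustrative assumptions, not X's actual implementation.

```python
# Hypothetical sketch of an explicit yes/no consent gate for AI training data.
# Names and data flow are illustrative assumptions; this is not X's code.
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    user_id: str
    ai_training_consent: Optional[bool] = None  # None = never asked


def ask_consent(user: User) -> bool:
    """Show a one-time yes/no prompt and record the answer."""
    if user.ai_training_consent is None:
        answer = input(f"{user.user_id}: may we use your public posts to train AI models? [y/n] ")
        user.ai_training_consent = answer.strip().lower() == "y"
    return user.ai_training_consent


def collect_training_posts(user: User, posts: list[str]) -> list[str]:
    """Include a user's posts in the training set only if they explicitly opted in."""
    return list(posts) if ask_consent(user) else []
```

In practice the prompt would be an in-app dialog rather than a console prompt, but the principle is the same: without a recorded "yes", the user's posts never enter the training set.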

This legal action comes only days before Grok 2 is set to launch in beta. In the last few years, major tech companies have faced growing ethical challenges over training AI on large amounts of user data. In June 2024 it was widely reported that Meta faced complaints in 11 European countries over new privacy policies that signalled the company's intent to use the data generated by each account to train its machine learning models.

The GDPR is intended to protect European citizens against unexpected uses of their data, particularly uses that could affect their right to privacy and their freedom from intrusion. Noyb contends that X's reliance on "legitimate interest" as a legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is required in the comparable case of data used to target ads.

The complaints raise further concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access the personal data that has been collected. OpenAI's ChatGPT has drawn widespread criticism over many of the same GDPR issues.

Winklevoss Crypto Firm Gemini to Return $1.1B to Customers in Failed "Earn" Scheme

‘Earn’ product fiasco

Gemini to return money

As part of a settlement with regulators on Wednesday, the cryptocurrency company Gemini, owned by the Winklevoss twins, agreed to repay at least $1.1 billion to consumers of its failed "Earn" loan scheme and pay a $37 million fine for "significant" compliance violations.

The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.

What is the Earn Program?

The Earn program, which promised users up to 8% interest on their cryptocurrency deposits, was halted in November 2022 when Genesis was unable to meet withdrawals following the collapse of infamous scammer Sam Bankman-Fried's FTX empire.

Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.

Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.

NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown." 

Customers win lawsuit

Today's settlement is a win for Earn customers, who are entitled to the assets they entrusted to Gemini.

The regulator also pointed to an unregulated affiliate, dubbed Gemini Liquidity, that during the crisis collected "hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini's financial condition."

Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."

Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.

"If the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the [Genesis] bankruptcy," the NYSDFS stated that it "has the right to bring further action against Gemini."

Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" during the following 12 months in a long statement that was posted on X.

The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.

The New York Department of Financial Services (DFS) was credited by Gemini with helping to reach a settlement that gives Earn users a coin-for-coin recovery.

More about the lawsuit

Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."

James tripled the purported scope of the lawsuit earlier this month. The complaint was submitted a few weeks after The Post revealed that, on August 9, 2022, well in advance of Genesis's bankruptcy, Gemini had surreptitiously taken $282 million in cryptocurrency from the company.

The twins later said the withdrawal was made for the benefit of customers.

The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."

In a separate lawsuit, the SEC is suing Gemini and Genesis, alleging that the Earn program was an unregistered security.

The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.

Gemini had built its brand on the idea that it was a reliable player in the wild, mostly unregulated cryptocurrency market.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several bank servers were reported to be down on Tuesday, affecting Unified Payments Interface (UPI) transactions across the country. Users took to X (formerly Twitter) and other social media platforms to complain that they could not complete UPI transactions. The National Payments Corporation of India (NPCI) confirmed in a tweet that an outage had led to the failure of UPI transactions at some banks.

Downdetector, a website monitoring service, received reports that UPI was not working for customers of Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and other banks, and users flooded social media with details of the disruptions.

The outage appeared to affect UPI payments made through several banks, with some users reporting server problems at HDFC Bank, Bank of Baroda, Mumbai Bank, SBI, and Kotak Mahindra Bank, among others.

Several users also reported difficulty with the "Fund Transfer" function at their respective banks because of the technical problems. UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally (about 1 per cent) from Rs 18.23 trillion in December. Transaction volume rose about 1.5 per cent over the same period, reaching 12.20 billion, up from 12.02 billion in December.

In November, there were 11.4 billion transactions with a total value of Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
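For readers who want to verify the month-on-month growth rates, the figures quoted above work out as follows. This is a quick sanity check using only the numbers in this post, not NPCI's own methodology.

```python
# Recomputing the month-over-month UPI growth rates from the figures cited above.
dec_value, jan_value = 18.23, 18.41      # Rs trillion
dec_volume, jan_volume = 12.02, 12.20    # billion transactions

value_growth = (jan_value / dec_value - 1) * 100
volume_growth = (jan_volume / dec_volume - 1) * 100

print(f"Value growth:  {value_growth:.1f}%")   # ~1.0%, matching "up marginally by 1 per cent"
print(f"Volume growth: {volume_growth:.1f}%")  # ~1.5%, matching the reported 1.5 per cent
```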

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals when the amount exceeds a configurable threshold.

The Indian Express reported, citing government sources, that the proposed plan would impose a four-hour window on the first digital payment between two users for transactions exceeding Rs 2,000.
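Purely for illustration, here is a minimal sketch of the rule as described above; the Rs 2,000 threshold and four-hour window come from the report, while the data model and function are hypothetical assumptions rather than any real UPI implementation.

```python
# Hypothetical sketch of the reported proposal: hold the *first* payment between
# two users for four hours when the amount exceeds Rs 2,000.
from datetime import timedelta
from typing import Optional

HOLD_WINDOW = timedelta(hours=4)   # proposed cooling-off period
THRESHOLD_INR = 2_000              # proposed amount threshold


def hold_period(sender: str, receiver: str, amount_inr: float,
                past_pairs: set[tuple[str, str]]) -> Optional[timedelta]:
    """Return the hold duration for this payment, or None if it clears immediately."""
    first_time = (sender, receiver) not in past_pairs
    if first_time and amount_inr > THRESHOLD_INR:
        return HOLD_WINDOW
    return None


# Example: a first-ever Rs 5,000 transfer is held for four hours; a repeat is not.
pairs: set[tuple[str, str]] = set()
print(hold_period("alice@upi", "bob@upi", 5000, pairs))  # 4:00:00
pairs.add(("alice@upi", "bob@upi"))
print(hold_period("alice@upi", "bob@upi", 5000, pairs))  # None
```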

Global Outage Strikes Social Media Giant X

The recent global outage of Social Media Platform X caused a stir in the online community during a time when digital media predominates. Users everywhere became frustrated and curious about the cause of this extraordinary disruption when they realized they couldn't use the platform on December 21, 2023.

Reports of the outage, first tracked by Downdetector, arrived from all over the world, with millions of users affected. The impact was amplified because Social Media Platform X, a significant player in the social media ecosystem, has become an essential part of people's everyday lives.

One significant aspect of the outage was the diverse range of issues users faced. According to reports, users experienced difficulties in tweeting, accessing their timelines, and even logging into their accounts. The widespread nature of these problems hinted at a major technical glitch rather than localized issues.

TechCrunch reported that the outage lasted for several hours, leaving users in limbo and sparking speculation about the root cause. The incident raised questions about the platform's reliability and prompted discussions about the broader implications of such outages in an interconnected digital world.

The platform's official response was prompt, acknowledging the inconvenience and assuring users that its technical teams were actively working to fix the issue. Few details were given about the precise cause, however, leaving both users and experts in the dark.

Experts weighed in on the outage, emphasizing the need for robust infrastructure and redundancy measures to prevent such widespread disruptions in the future. The incident served as a reminder of the vulnerabilities inherent in our dependence on centralized digital platforms.

In the aftermath of the outage, Social Media Platform X released a formal apology, expressing regret for the inconvenience caused to users. The incident prompted discussions about the need for transparency from tech giants when addressing such disruptions and the importance of contingency plans to mitigate the impact on users.

Amidst the growing digitalization of our world, incidents such as the worldwide disruption of Social Media Platform X highlight the vulnerability of our interdependent networks. It's a wake-up call for users and tech businesses alike to put resilience and transparency first when faced with unanticipated obstacles in the digital space.

Tech Giants Grapple with Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that after Elon Musk changed X's safety policies, Russian propaganda was able to reach a much wider audience.

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have come under intense scrutiny after an EU report last month revealed that they had failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine.

The study, conducted by civil society groups and published last week by the European Commission, found that after Twitter's safety standards were dismantled, Kremlin-backed accounts gained further influence in the early part of 2023.

Since the first two months of 2022, pro-Russian accounts have garnered over 165 million subscribers across major platforms and generated over 16 billion views. There are still few details on whether the EU will ban Russian state media content outright. According to the EU study, X's failure to deal with disinformation would have violated the bloc's new rules had they been in effect at the time.

Other social media companies have proven more cooperative than Musk in limiting propaganda on their platforms, though they too are finding it hard to do so. In fact, according to the study, Telegram and Meta, the company that owns Instagram and Facebook, have made little headway in limiting Russian disinformation campaigns.

Europe has taken a much more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures to reduce risks related to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines.

Tougher rules for the world's biggest online platforms were introduced earlier this month under the EU's Digital Services Act (DSA). Large social media companies identified as having at least 45 million monthly active users now operate under the DSA's stricter regime, which demands a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been in force a month earlier, social media companies could have faced fines for breaching these legal duties. The most damning aspect of Elon Musk's acquisition of X last October has been the rapid growth of hate and falsehoods on the social network.

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. Musk has argued that "all news" is propaganda to some degree.

As a consequence, the Kremlin has stepped up its sophisticated information warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will need to act fast to comply with the Digital Services Act, which came into force on August 25th, before the 2024 European parliamentary elections arrive.

Under the DSA, large social media companies and search engines in the EU with at least 45 million monthly active users are now required to adopt stricter content moderation policies, including proactively clamping down on hate speech and disinformation, or face heavy fines.

The Race to Train AI: Who's Leading the Pack in Social Media?

 


The rise of artificial intelligence over the last few years has been enabled by increasingly powerful computer systems and large, complex data sets. AI has proven practical and profitable across numerous applications through machine learning, which gives a system a way to find patterns within large sets of data.

Today, AI plays a significant role in a wide range of computer systems, including iPhones that recognize and translate speech, driverless cars that carry out complicated manoeuvres under their own power, and robots in factories and homes that automate tasks.

AI has also become increasingly important in research, where it is used to process the vast amounts of data at the heart of fields like astronomy and genomics, to produce weather forecasts and climate models, and to interpret medical images for signs of disease.

In a recent update to its privacy policy, X, the social media platform formerly known as Twitter, stated that it may train an AI model on users' posts. According to Bloomberg earlier this week, the updated policy also informs users that the company is now collecting additional kinds of information about them, including biometric data, job history, and educational background.

X appears to intend to do more with this data than simply collect it. Another update to the company's policy specifies that it plans to use the data it collects, along with other publicly available information, to train its machine learning and artificial intelligence models.

Elon Musk, the company's owner and former CEO, has indicated that only public data, and not private data such as direct messages, will be used to train the models. The change should come as no surprise.

According to Musk, his latest startup, xAI, will use data collected from the microblogging site to help researchers and engineers build new products. X charges companies $42,000 to access its data via its API.

It was reported in April that Microsoft had pulled X from its advertising platforms after the fee increases, and that Musk had threatened in response to sue the company for allegedly using Twitter data illegally. Separately, late Thursday, Elon Musk joined calls for AI research labs to halt work on systems that can compete with human-level intelligence.

Musk and several other tech leaders have called for a pause in the development of systems that approach human-level intelligence. An open letter from the Future of Life Institute, signed by Musk, Steve Wozniak, and 2020 presidential candidate Andrew Yang, strongly urges AI labs to cease training models more powerful than GPT-4, the newest version of the large language model developed by the U.S. startup OpenAI.

The Future of Life Institute, based in Cambridge, Massachusetts, is a non-profit organization dedicated to the responsible and ethical development of artificial intelligence. Its founders include Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, a co-founder of Skype.

Musk and Google's AI lab DeepMind have previously pledged, as part of an earlier campaign by the organization, not to develop lethal autonomous weapons systems. In its appeal, the institute called on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

GPT-4, which was released earlier this month, is believed to be far more sophisticated than its predecessor, GPT-3. Researchers have been amazed at how ChatGPT, the viral artificial intelligence chatbot, can mimic human-like responses when users ask it questions. By January this year, only two months after its launch, ChatGPT had accrued 100 million monthly active users, making it the fastest-growing consumer application in history.

Such models are trained on vast amounts of data drawn from the internet and can be applied to tasks ranging from writing poetry in the style of William Shakespeare to drafting legal opinions based on the facts of a case. However, ethicists have raised concerns that AI might also be abused for crime and misinformation, opening the door to exploitation.

OpenAI did not immediately respond to CNBC's request for comment. Microsoft, the technology giant headquartered in Redmond, Washington, has invested $10 billion in the company.

Microsoft is also integrating OpenAI's GPT technology into its Bing search engine to make search more conversational and useful. Google followed with the announcement of its own line of conversational artificial intelligence (AI) products aimed at consumers.

Musk has said that artificial intelligence may represent one of the biggest threats to civilization. He co-founded OpenAI with Sam Altman in 2015 but left its board in 2018 and holds no stake in the company he helped create; he has repeatedly voiced the view that the organization has diverged from its original purpose.

Regulators are also racing to get a grip on AI tools as the technology advances rapidly. On Wednesday, the United Kingdom published a white paper on artificial intelligence, deferring the job of overseeing the use of such tools in different sectors to existing regulators applying the laws already within their jurisdictions.

Elon Musk's X Steps Up: Pledges Legal Funds for Workers Dealing with Unfair Bosses

 


Elon Musk has said that his social media platform X, formerly known as Twitter, will cover users' legal bills and sue on their behalf if they are treated unfairly by their employers for posting or liking something on the platform.

Musk has shared no further details about what he considers "unfair treatment" by employers or how he will vet users seeking legal counsel.

In a follow-up, he stated that the company would fund the legal fees regardless of the cost. The company has not, however, clarified who qualifies for legal support or how users will be screened for eligibility.

Over the years, ordinary users as well as celebrities and other public figures have run into trouble with their employers over posts, likes, or reposts made on social media platforms.

Earlier in the day, Musk announced that a fight between him and Meta CEO Mark Zuckerberg would be streamed live on the microblogging platform. The two tech titans agreed to a cage fight last month after each accepted the other's challenge.

Musk has said that the "Zuck v Musk" fight will be live-streamed on X and that all proceeds will go to a charity for veterans. In late October, the tech billionaire shared a graph showing the latest count and said that X had reached a new record for monthly users.

X had reached 540 million users at the end of October, he added. It was reported in January by the Daily Wire that Kara Lynne, a streamer at a gaming company, was fired from her job for following the controversial X account "Libs of TikTok".

The figures come as the company restructures and seeks to reverse falling advertising revenue. In July, Musk retired the blue bird logo that had been familiar for 17 years, giving the platform a new name, X, and a new logo, and committing to building an "all-in-one app."

A few weeks ago, Musk said the platform still has negative cash flow because advertising revenue has dropped nearly 50 percent and the company carries a large debt load, despite earlier signs in June that advertising revenue was recovering.

Many previously banned users, including former President Donald Trump, have been allowed back since Musk took control of the company. He has also weakened content moderation policies, loosened the rules around moderation, and fired most of the team responsible for policing hate speech and other potentially harmful content on the site.

Musk's professed commitment to free speech has not been without consequences for those who exercise it: several journalists who wrote about his companies were temporarily suspended, and an account that tracked his private jet's flight path using publicly available data was banned.

Several reports indicate that Musk publicly fired an employee who criticized him on the platform and laid off colleagues who criticized him in private. Since launching his initial bid to acquire Twitter early last year, he has also campaigned on social media against what he calls the "woke mind virus", taking aim at social causes such as transgender rights.

In June, Musk, who is also Tesla's CEO, tweeted that "cis" and "cisgender" would now be considered slurs on the app. More broadly, terminations of employees who post or publicly endorse offensive content on social media have been rising, and not only over controversies related to social issues.

Michelle Serna, a California tech worker, was fired in May after posting a video on TikTok while a company meeting was taking place in the background. Meanwhile, the tycoon who purchased Twitter for $44 billion last October has watched the company's advertising business collapse, in part because hate speech was not moderated as it should have been and previously banned accounts have returned to the platform.

According to Musk, his changes are motivated by a desire for free expression, and he has often lashed out at what he views as shifting cultural sensibilities that threaten that freedom. The Center for Countering Digital Hate (CCDH), a non-profit organization focused on countering the spread of hate speech on the Internet, has found that hate speech has flourished on the platform under his ownership. X disputes the CCDH's findings and is suing the organization over them.

Musk reinstated Trump's Twitter account in December, but it appears the former US president has yet to resume using the platform. Trump was banned from Twitter in early 2021 over his role in the January 6 attack on the Capitol, in which several of his supporters tried unsuccessfully to overturn the results of the 2020 election. A US media outlet reports that X recently reinstated Kanye West's account, roughly eight months after he was suspended for posting an antisemitic comment.