
Irish Data Protection Commission Halts AI Data Practices at X

 

The Irish Data Protection Commission (DPC) recently took a decisive step against the tech giant X, resulting in the immediate suspension of its use of personal data from European Union (EU) and European Economic Area (EEA) users to train its AI model, “Grok.” This marks a significant victory for data privacy, as it is the first time the DPC has taken such substantial action under its powers granted by the Data Protection Act of 2018. 

The DPC initially raised concerns that X’s data practices posed a considerable risk to individuals’ fundamental rights and freedoms. The use of publicly available posts to train the AI model was viewed as an unauthorized collection of sensitive personal data without explicit consent. This intervention highlights the tension between technological innovation and the necessity of safeguarding individual privacy. 

Following the DPC’s intervention, X agreed to cease its current data processing activities and commit to adhering to stricter privacy guidelines. Although the company did not acknowledge any wrongdoing, this outcome sends a strong message to other tech firms about the importance of prioritizing data privacy when developing AI technologies. The immediate halt of Grok’s training on data from 60 million European users came in response to mounting regulatory pressure across Europe, with at least nine GDPR complaints filed over the period from May 7 to August 1, when the processing took place.

After the suspension, Dr. Des Hogan, Chairperson of the Irish DPC, emphasized that the regulator would continue working with its EU/EEA peers to ensure compliance with GDPR standards, affirming the DPC’s commitment to safeguarding citizens’ rights. The DPC’s decision has broader implications beyond its immediate impact on X. As AI technology rapidly evolves, questions about data ethics and transparency are increasingly urgent. This decision serves as a prompt for a necessary dialogue on the responsible use of personal data in AI development.  

To further address these issues, the DPC has requested an opinion from the European Data Protection Board (EDPB) regarding the legal basis for processing personal data in AI models, the extent of data collection permitted, and the safeguards needed to protect individual rights. This guidance is anticipated to set clearer standards for the responsible use of data in AI technologies. The DPC’s actions represent a significant step in regulating AI development, aiming to ensure that these powerful technologies are deployed ethically and responsibly. By setting a precedent for data privacy in AI, the DPC is helping shape a future where innovation and individual rights coexist harmoniously.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

 


X, Elon Musk's social media company, has been accused of unlawfully feeding its users' personal information to its artificial intelligence systems without their consent, according to Noyb, a privacy campaign group based in Vienna that filed the complaint.

In early September, Ireland's Data Protection Commission (DPC) took legal action against X over its use of user data to train its artificial intelligence systems. A series of privacy complaints against X, the company formerly known as Twitter, followed revelations that the platform had been using data obtained from European users to train the chatbot behind its Grok AI product without their consent.

Late last month, a social media user discovered that X had quietly begun processing European users' posts for AI training purposes. TechCrunch reported that the Irish Data Protection Commission (DPC), responsible for ensuring that X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that all users can choose whether Grok, the platform's AI chatbot, can access their public posts.

Users who wish to opt out of this data processing must uncheck a box in their privacy settings. Despite this, Judge Leonie Reynolds observed that X had evidently begun processing its EU users' data to train its AI systems on May 7, yet only offered the option to opt out from July 16. She added that not all users had access to the feature when it was first introduced.

NOYB, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Its head, Max Schrems, successfully challenged Meta's transfer of EU data to the US as a violation of the EU's stringent GDPR rules in a case dating to 2017. That case led to a €1.2 billion fine for Meta, along with logistical headaches; in June, further complaints from NOYB forced Meta to pause the use of EU users’ data to train its AI systems.

NOYB has another issue it wants to address: it argues that X did not obtain European Union users' consent before using their data to train Grok. NOYB's spokesperson reportedly told The Daily Upside that the company could face a fine of up to 4% of its annual revenue as a result of these complaints. Such a penalty would sting all the more because X has far less money to play with than Meta does:

X is no longer a publicly traded company, so it is difficult to gauge the state of its cash reserves. What is known is that when Musk bought the company in 2022, it took on roughly $13 billion in debt at a very high leverage ratio. In the years since the deal was made, the banks that helped finance the transaction have had an increasingly difficult time offloading that debt, and Fidelity recently marked down its stake, which gives a hint as to how the firm might be valued.

As of last March, Fidelity's stake was valued 67% below what it paid when Musk acquired the company. Even before the acquisition, Twitter had struggled for years to remain consistently profitable, a small fish in a big tech pond.
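For a rough sense of the numbers above, here is a small illustrative calculation. The revenue figure is a hypothetical placeholder, not X's actual turnover, and applying Fidelity's markdown to the whole company is only an approximation:

```python
def max_gdpr_fine(annual_revenue: float, cap: float = 0.04) -> float:
    """Upper bound of a turnover-based GDPR fine (up to 4% of annual revenue)."""
    return annual_revenue * cap

# Hypothetical company with $3 billion in annual revenue
print(max_gdpr_fine(3.0e9))  # 120000000.0, i.e. $120 million

# Implied valuation if the reported $44B purchase price is marked down 67%
purchase_price = 44e9
implied_valuation = purchase_price * (1 - 0.67)
print(f"${implied_valuation / 1e9:.1f}B")  # roughly $14.5B
```

Even on these rough numbers, a maximum fine would be a material hit to a company whose implied value has fallen well below its purchase price.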

A key goal for NOYB is a full-scale investigation into how X was able to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users only need to show them a yes/no prompt before reusing their data, Schrems told The Information; they do this regularly for plenty of other purposes, so it would be entirely possible for AI training as well.

The legal action comes only days before Grok 2 is set to launch its beta version. Major tech companies have faced ethical challenges over training on large amounts of user data in recent years. In June 2024, it was widely reported that Meta faced complaints in 11 European countries over its new privacy policies, which signalled the company's intent to use the data generated by each account to train machine learning models.

The GDPR is intended to protect European citizens against unexpected uses of their data, particularly those that could affect their right to privacy. Noyb contends that X's reliance on "legitimate interest" as the legal basis for its data collection and use may not be valid, citing a ruling by Europe's top court last summer that user consent is mandatory in comparable cases involving data used to target ads.

The complaint further notes that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access the personal data that has been collected. OpenAI's ChatGPT has drawn wide criticism over many of the same GDPR concerns.

Winklevoss Crypto Firm Gemini to Return $1.1B to Customers in Failed "Earn" Scheme

‘Earn’ product fiasco

Gemini to return money

As part of a settlement with regulators on Wednesday, the cryptocurrency company Gemini, owned by the Winklevoss twins, agreed to repay at least $1.1 billion to customers of its failed "Earn" lending program and pay a $37 million fine for "significant" compliance violations.

The New York State Department of Financial Services claims that Gemini, which the twins started following their well-known argument with Mark Zuckerberg over who developed Facebook, neglected to "fully vet or sufficiently monitor" Genesis, Gemini Earn's now-bankrupt lending partner.

What is the Earn Program?

The Earn program, which promised users up to 8% interest on their cryptocurrency deposits, was halted in November 2022 when Genesis was unable to meet withdrawals following the collapse of Sam Bankman-Fried's FTX empire.

Since then, almost 30,000 residents of New York and over 200,000 other Earn users have lost access to their money.

Gemini "engaged in unsafe and unsound practices that ultimately threatened the financial health of the company," according to the state regulator.

NYSDFS Superintendent Adrienne Harris claimed in a statement that "Gemini failed to conduct due diligence on an unregulated third party, later accused of massive fraud, harming Earn customers who were suddenly unable to access their assets after Genesis Global Capital experienced a financial meltdown." 

Customers win lawsuit

Today's settlement is a win for Earn customers, who are entitled to the assets they entrusted to Gemini.

The regulator also said that during the crisis an unregulated affiliate, dubbed Gemini Liquidity, collected “hundreds of millions of dollars in fees from Gemini customers that otherwise could have gone to Gemini, substantially weakening Gemini’s financial condition.”

Although it did not provide any details, the regulator added that it "further identified various management and compliance deficiencies."

Gemini also consented to pay $40 million to Genesis' bankruptcy proceedings as part of the settlement, for the benefit of Earn customers.

The NYSDFS stated that it "has the right to bring further action against Gemini" if the company does not fulfill its obligation to return at least $1.1 billion to Earn customers after the resolution of the Genesis bankruptcy.

Gemini announced that the settlement would "result in all Earn users receiving 100% of their digital assets back in kind" during the following 12 months in a long statement that was posted on X.

The business further stated that final documentation is required for the settlement and that it may take up to two months for the bankruptcy court to approve it.

Gemini credited the New York Department of Financial Services (DFS) with helping to reach a settlement that gives Earn users a coin-for-coin recovery.

More about the lawsuit

Attorney General Letitia James of New York filed a lawsuit against Genesis and Gemini in October, accusing them of defrauding Earn consumers out of their money and labeling them as "bad actors."

James expanded the purported scope of the lawsuit threefold earlier this month. The complaint was filed a few weeks after The Post revealed that on August 9, 2022, well before Genesis's bankruptcy, Gemini had quietly withdrawn $282 million in cryptocurrency from the firm.

The twins subsequently stated that the withdrawal was made for the benefit of customers.

The brothers' actions, however, infuriated Earn customers, with one disgruntled investor telling The Post that "there's no good way that Gemini can spin this."

In a separate lawsuit, the SEC is suing Gemini and Genesis, alleging the Earn program was an unregistered security.

The collapse of Earn was a significant blow to the Winklevoss twins' hopes of becoming a dominant force in the industry.

Gemini had built its brand on the idea that it was a reliable player in the wild, largely unregulated cryptocurrency market.

Nationwide Banking Crisis: Servers Down, UPI Transactions in Jeopardy

 


Several bank servers were reported down on Tuesday, affecting Unified Payments Interface (UPI) transactions throughout the country. Users took to X (formerly Twitter) and other social media platforms to complain that they could not complete UPI transactions. The National Payments Corporation of India confirmed in a tweet that it had suffered an outage, which caused UPI transactions to fail at some banks.

Downdetector, a website monitoring service, received reports that UPI was not working for Kotak Mahindra Bank, HDFC Bank, State Bank of India (SBI), and other banks, and users flooded social media with details of the disruptions.

The outage appears to have affected UPI transactions made through several banks, with users also reporting server problems when making UPI payments via HDFC Bank, Bank of Baroda, SBI, and Kotak Mahindra Bank, among others.

Several users reported difficulty with the "Fund Transfer" process within their respective banks due to the technical problems. UPI transactions hit a new high in January, with a value of Rs 18.41 trillion, up marginally (about 1 per cent) from Rs 18.23 trillion in December. Transaction volume rose 1.5 per cent to 12.20 billion, from 12.02 billion in December.

In November, the number of transactions was 11.4 billion, with a value of Rs 17.4 trillion. NPCI data shows that January's transaction volume was 52 per cent higher, and its value 42 per cent higher, than in the same month of the previous financial year.
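The month-on-month growth figures can be sanity-checked with a simple percentage-change calculation (illustrative only, using the NPCI figures quoted above):

```python
def pct_change(new: float, old: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Value: Rs 18.41 trillion (January) vs Rs 18.23 trillion (December)
print(round(pct_change(18.41, 18.23), 1))  # 1.0 -> roughly 1 per cent
# Volume: 12.20 billion vs 12.02 billion transactions
print(round(pct_change(12.20, 12.02), 1))  # 1.5 per cent
```

Both results line up with the marginal growth rates reported for the month.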

Earlier, in November 2023, it was reported that the government was considering imposing a minimum time delay on the first transaction between two individuals for transfers exceeding a configurable amount.

According to government sources cited by The Indian Express, the proposed plan would impose a four-hour window on the first digital payment between two users, particularly for transactions exceeding Rs 2,000.

Global Outage Strikes Social Media Giant X

The recent global outage of Social Media Platform X caused a stir in an online community that depends heavily on digital media. Users everywhere grew frustrated, and curious about the cause of the extraordinary disruption, when they found they could not use the platform on December 21, 2023.

Reports of the outage, first picked up by Downdetector, poured in from all over the world, affecting millions of users. The impact was amplified because Social Media Platform X, a major player in the social media ecosystem, has become an essential part of people's everyday lives.

One significant aspect of the outage was the diverse range of issues users faced. According to reports, users experienced difficulties in tweeting, accessing their timelines, and even logging into their accounts. The widespread nature of these problems hinted at a major technical glitch rather than localized issues.

TechCrunch reported that the outage lasted for several hours, leaving users in limbo and sparking speculation about the root cause. The incident raised questions about the platform's reliability and prompted discussions about the broader implications of such outages in an interconnected digital world.

The platform's official response was prompt in acknowledging the inconvenience, assuring users that its technical teams were actively working to resolve the issue. Still, with few details about the precise cause, both users and experts were left in the dark.

Experts weighed in on the outage, emphasizing the need for robust infrastructure and redundancy measures to prevent such widespread disruptions in the future. The incident served as a reminder of the vulnerabilities inherent in our dependence on centralized digital platforms.

In the aftermath of the outage, Social Media Platform X released a formal apology, expressing regret for the inconvenience caused to users. The incident prompted discussions about the need for transparency from tech giants when addressing such disruptions and the importance of contingency plans to mitigate the impact on users.

Amidst the growing digitalization of our world, incidents such as the worldwide disruption of Social Media Platform X highlight the vulnerability of our interdependent networks. It's a wake-up call for users and tech businesses alike to put resilience and transparency first when faced with unanticipated obstacles in the digital space.

Tech Giants Grapple with Russian Propaganda: EU's Call to Action

 


A recent study published by the European Commission found that after Elon Musk changed X's safety policies, Russian propaganda was able to reach a much wider audience.

Social media platforms including Meta, YouTube, X (formerly Twitter), and TikTok have come under intense scrutiny since an EU report last month revealed that they failed to curb a massive Kremlin disinformation campaign surrounding Russia's invasion of Ukraine.

The study, conducted by civil society groups and published last week by the European Commission, found that Kremlin-backed accounts gained further influence in early 2023, largely because of the dismantling of Twitter's safety standards.

Since early 2022, pro-Russian accounts have garnered over 165 million subscribers across major platforms and generated over 16 billion views. There are still few details on whether the EU will ban Russian state media content. According to the EU study, X's failure to deal with disinformation would have violated the new rules had they been in effect last year.

Musk has proven less cooperative than other social media companies in limiting propaganda on his platform, though they too are finding it hard. According to the study, Telegram and Meta, the company that owns Instagram and Facebook, have likewise made little headway in limiting Russian disinformation campaigns.

Europe has taken a much more aggressive approach to fighting disinformation than the US. Under the Digital Services Act, which took effect last month, major tech companies are expected to take proactive measures against risks related to children's safety, harassment, illegal content, and threats to democratic processes, or face significant fines.

The EU's Digital Services Act (DSA) introduced tougher rules for the world's biggest online platforms earlier this month. Several large social media companies, designated under the DSA once they reach at least 45 million monthly active users, must now take a more aggressive approach to policing content, including hate speech and disinformation.

Had the DSA been operational a month earlier, social media companies could have been fined for breaching these legal duties. The most damning aspect of Elon Musk's acquisition of X last October has been the rapid growth of hate and lies on the social network.

After the new owner lifted mitigation measures on Kremlin-backed accounts and removed labels from related Russian state-affiliated accounts, engagement with those accounts grew by 36 percent between January and May 2023. The new owner has argued that "all news" is propaganda to some degree.

As a consequence, the Kremlin has stepped up its sophisticated information-warfare campaign across Europe, threatening free and fair elections across the continent as well as fundamental human rights. Platforms will have to act fast to comply with the Digital Services Act, in effect since August 25th, before the European parliamentary elections arrive in 2024.

Under the DSA, large social media companies and search engines in the EU with at least 45 million monthly active users are now required to adopt stricter content moderation policies, proactively clamping down on hate speech and disinformation, or face heavy fines.

The Race to Train AI: Who's Leading the Pack in Social Media?

 


The rise of artificial intelligence over the last few years has been enabled by growing computing power and large, complex data sets. AI has proven practical and profitable in numerous applications, such as machine learning, which gives a system a way to locate patterns within large sets of data.

AI now plays a significant role in a wide range of computer systems: iPhones that recognize and translate voice, driverless cars that carry out complicated manoeuvres under their own power, and robots in factories and homes that automate tasks.

AI has also become increasingly important in research, where it is used to process the vast amounts of data at the heart of fields like astronomy and genomics, to produce weather forecasts and climate models, and to interpret medical images for signs of disease.

In a recent update to its privacy policy, X, the social media platform that used to be known as Twitter, stated it may train an AI model based on posts from users. According to Bloomberg early this week, X's recently updated privacy policy informs its users that the company is now collecting various kinds of information about its users, including biometric data, as well as their job history and educational background. 

X appears to be planning to do more with this data than simply collect it. Another update to the company's policy specifies that it plans to use the data it collects, along with other publicly available information, to train its machine learning and artificial intelligence models.

According to Elon Musk, the company's owner and former CEO, only public data will be used to train the models, not private data such as text messages sent in direct messages. The change should come as no surprise.

According to Musk, his latest startup, xAI, was founded to help researchers and engineers build new products, and it will draw on data collected from the microblogging site. X charges companies $42,000 for access to its data via its API.

After those fee increases, Microsoft reportedly pulled X from its advertising platforms in April, and X threatened to sue the company for allegedly using Twitter data illegally. Separately, in a tweet published late Thursday, Elon Musk called on AI research labs to halt work on systems that can compete with human intelligence.

Musk, along with several other tech leaders, has called for a halt to the development of systems approaching human-level intelligence. In an open letter from the Future of Life Institute signed by Musk, Steve Wozniak, and 2020 presidential candidate Andrew Yang, AI labs were strongly urged to cease training models more powerful than GPT-4, the newest version of the large language model software developed by the U.S. startup OpenAI.

The Future of Life Institute, based in Cambridge, Massachusetts, is a non-profit organization dedicated to advancing the responsible and ethical development of artificial intelligence. Its founders include Max Tegmark, a cosmologist at MIT, and Jaan Tallinn, the co-founder of Skype.

In an earlier campaign by the organization, Musk and Google's AI lab DeepMind agreed not to develop lethal autonomous weapons systems. In its appeal to all AI labs, the institute called for an immediate “pause for at least 6 months the training of AI systems more powerful than GPT-4.”

GPT-4, released earlier this month, is believed to be far more sophisticated than its predecessor, GPT-3. Researchers have been struck by the ability of ChatGPT, the viral artificial intelligence chatbot, to mimic human-like responses to users' questions. Within two months of launch, ChatGPT had accrued 100 million monthly active users, making it the fastest-growing consumer application in history.

Trained on vast amounts of data from the internet, such models can do everything from writing poetry in the style of William Shakespeare to drafting legal opinions based on the facts of a case. However, ethics scholars have raised concerns that AI might also be abused for crime and misinformation, opening the door to exploitation.

OpenAI did not immediately respond to CNBC's request for comment. Microsoft, the technology giant headquartered in Redmond, Washington, has invested $10 billion in OpenAI.

Microsoft is also integrating OpenAI's natural language technology, GPT, into its Bing search engine to make search more conversational and useful. Google followed with an announcement of its own line of consumer-facing conversational artificial intelligence (AI) products.

Musk has said that AI may represent one of the biggest threats to civilization. He co-founded OpenAI with Sam Altman in 2015 but left its board in 2018 and holds no stake in the company; he has repeatedly voiced the view that the organization has lately diverged from its original purpose.

Regulators, too, are racing to get a grip on AI tools as the technology advances rapidly. On Wednesday, the United Kingdom published a white paper on artificial intelligence, deferring oversight of such tools in different sectors to existing regulators applying the laws already within their jurisdictions.

Elon Musk's X Steps Up: Pledges Legal Funds for Workers Dealing with Unfair Bosses

 


In a recent statement, Elon Musk said that X, his social media platform formerly known as Twitter, would cover users' legal bills and sue on their behalf if employers treated them unfairly for posting or liking something on the platform.

Musk shared no further details about what he considers "unfair treatment" by employers or how users seeking legal counsel will be vetted. In a follow-up, he stated that the company would fund the legal fees regardless of cost; however, the company has not said who qualifies for legal support or how users will be screened for eligibility.

Over the years, celebrities and other public figures, as well as ordinary users, have run into trouble with their employers over posts, likes, or reposts made on social platforms.

Earlier in the day, Musk announced that a fight between him and Meta CEO Mark Zuckerberg would be streamed live on the microblogging platform. The two tech titans agreed to a cage fight last month after each accepted a challenge from the other.

Musk has said that the Zuck v Musk fight will be live-streamed on X, with all proceeds going to a charity for veterans. In late October, the tech billionaire shared a graph showing the latest user count, stating that X had reached a new record for monthly users.

X had reached 540 million users by the end of October, he added. In January, the Daily Wire reported that Kara Lynne, a streamer at a gaming company, was fired from her job for following the controversial X account "Libs of TikTok".

The figures come as the company restructures and attempts to reverse falling advertising revenue. In July, Musk retired the blue bird logo that had been familiar for 17 years, renaming the social media platform X and committing to building an "all-in-one app".

A few weeks ago, Musk said the platform's cash flow remains negative because advertising revenue has dropped nearly 50 percent and the company carries a heavy debt load. Although he had said in June that advertising revenue was improving, the recovery did not play out as hoped.

Many previously banned users, including former President Donald Trump, have been allowed back since Musk took control of the company. He has also loosened content moderation rules and fired most of the team responsible for policing hate speech and other potentially harmful content on the site.

Musk's professed commitment to free speech has not been without consequences for those who exercise it: several journalists who wrote about his company were temporarily suspended, and an account that tracked his private jet's flight path using publicly available data was banned.

Several reports indicate Musk also publicly fired an employee who criticized him on his platform and laid off colleagues who criticized him in private. Since launching his initial bid to acquire Twitter early last year, he has campaigned against what he calls a "woke mind virus", sharing posts opposing social causes such as transgender rights. 

Tesla CEO Elon Musk also tweeted that "cis" and "cisgender" would be treated as slurs on the app, a change he announced back in June. Employee terminations over posts or public endorsements of offensive content have been rising, and not only over controversial social issues but for a wide range of other reasons. 

Michelle Serna, a California tech worker, was fired in May after posting a TikTok video while a company meeting was taking place in the background. Meanwhile, the tycoon who purchased Twitter for $44 billion last October has seen the company's advertising business collapse, in part because the platform failed to moderate hate speech adequately and because previously banned accounts have returned. 

According to Musk, his desire for free expression motivates his changes, and he has often lashed out at what he views as a threat to free expression from shifting cultural sensibilities. The Center for Countering Digital Hate (CCDH), a non-profit focused on countering the spread of online hate speech, found that hate speech has flourished on the platform. X disputes this finding and is suing the organization. 

Trump's Twitter account was reinstated by Musk in December, but the former US president has yet to resume using it. Trump was banned from Twitter in early 2021 over his role in the January 6 attack on the Capitol, in which several of his supporters tried unsuccessfully to overturn the results of the 2020 election. A US media outlet reports that X recently reinstated Kanye West's account, eight months after he was suspended for posting an antisemitic comment.

Will Threads be a 'Threat' to Twitter?


About Threads

Meta, Instagram’s parent company, has launched Threads, a text-based conversation app that rivals Twitter.

Threads, released on Wednesday evening, a day ahead of its scheduled release, lets users sign up directly from their Instagram accounts. The platform allows users to publish short posts or updates of up to 500 characters, which can include links, photos, or videos up to 5 minutes long.

Instagram’s more than 2 billion monthly active users will be able to import their accounts into Threads once it is made available to everyone.

Threads now has 70 million signups, according to a Friday morning post by Meta CEO Mark Zuckerberg, and that number is certain to rise over the next few days. (By comparison, Instagram has 1.3 billion users that log on every day, Twitter had 259 million daily active users at the end of 2022, and Mastodon has 13 million accounts in total.)

A Threat to Twitter

Adam Mosseri, the CEO of Instagram, claimed that under Musk, Twitter's "volatility" and "unpredictability" gave Instagram the chance to compete. According to Mosseri in an interview, Threads is made for "public conversations," which is an obvious reference to how Twitter executives have described the service's function throughout the years.

On Threads’ competitors, Mosseri says: “Obviously, Twitter pioneered the space[…]And there are a lot of good offerings out there for public conversations. But just given everything that was going on, we thought there was an opportunity to build something that was open and something that was good for the community that was already using Instagram.”

For some time now, Meta has been preparing to introduce Threads, which it calls a "sanely run" substitute for Twitter. According to internal company documents reportedly seen by journalists, the response to Musk's recent limit on how many tweets people may view per day served as the impetus for this week's app release, and Meta expects "tens of millions" of users to adopt Threads within the first few months.

Mosseri described Threads as a “risky endeavor,” especially considering that it's a brand-new app that users must download. After receiving early access to Threads, users were able to rapidly fill out account information and follow lists by having Meta automatically pull details from their Instagram accounts.

In many important respects, Threads is strikingly similar to Twitter. Posts (or, as Mosseri refers to them, "threads") from accounts you follow are displayed in the app's main feed alongside accounts recommended by Instagram's algorithm. Reposting lets you add your own commentary, and replies appear prominently in the main feed. There is no feed containing only the people you follow, though one might be added later.

Twitter's long history and distinctive network present another challenge Threads must contend with. Meta's behavior makes clear that, despite Musk's theatrics over the previous few months, unseating Twitter will not be easy. In Mosseri's opinion, it would be a mistake to "undervalue Twitter and Elon": the Twitter community is powerful and vibrant, with a long history and very strong network effects.

User Data Goldmine: Google's Ambitious Mission to Scrape Everything for AI Advancement

 


Over the weekend, Google announced a change to its privacy policy explicitly stating that the company reserves the right to scrape everything you post publicly online to build its artificial intelligence tools. In other words, if Google can read your words, you can expect them to end up nestled somewhere in the bowels of a chatbot. 

Google's privacy policy was quietly updated over the weekend, and you likely didn't notice. The change in wording is slight, but it is significant.

According to a recent report by Gizmodo, Google has revised its privacy policy. While most of the policy is not particularly noteworthy, one section stands out: the one covering research and development. 


For those who love history, Google maintains a public record of changes to its terms of service over the years. Under the new language, your online musings might be used in the company's AI tools in ways the previous policy did not spell out. 

Where Google previously said the data would be used "for language models," it now says "AI models," and where the older policy mentioned only Google Translate, the new one also names Bard and Cloud AI. 
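A wording shift this small is easy to miss. As an illustration of how such a change can be spotted, the snippet below diffs two paraphrased before/after sentences (these are illustrative paraphrases, not Google's exact policy text) with Python's standard `difflib` module:

```python
import difflib

# Paraphrased before/after snippets for illustration -- not Google's exact policy text.
old = ["We use publicly available information to help train Google's "
       "language models and build features like Google Translate."]
new = ["We use publicly available information to help train Google's "
       "AI models and build products and features like Google Translate, "
       "Bard, and Cloud AI capabilities."]

# unified_diff marks removed lines with '-' and added lines with '+'.
diff_lines = list(difflib.unified_diff(old, new, lineterm=""))
print("\n".join(diff_lines))
```

Running this prints a unified diff in which the "language models" sentence appears with a leading `-` and the "AI models" sentence with a leading `+`, making the substitution immediately visible.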

Generally, a privacy policy does not include a clause like this one. Such policies describe how a company uses the information you post on its own services, such as its website or social media platforms. Here, it appears Google reserves the right to harvest and harness any data posted to any part of the public web, as if the entire internet were the firm's playground for artificial intelligence experiments. Google did not immediately respond to requests for comment. 

The practice raises novel privacy concerns. Most people understand public posts as being public, but it is worth remembering that what it means to write something online has changed over the years. 

The question is no longer just who can access the information, but how it can be used. Your long-forgotten blog posts, or even restaurant reviews from 15 years ago, have very likely been ingested by Bard and ChatGPT, which may regurgitate some distorted version of your words in ways that are difficult to predict and comprehend. 

Tools such as Bard, ChatGPT, and Bing Chat also scrape data from the internet in real time, and that data often includes other people's intellectual property. The AI tools doing so have already been accused of theft, and more lawsuits are likely. 

Where data-hungry chatbots acquire their information is one of the lesser-examined complications of the post-ChatGPT world. Google and OpenAI scrape the internet to fuel their models. 

Whether the practice is legal remains unclear. The courts will no doubt have to grapple with copyright questions that seemed like science fiction a few years ago. In the meantime, the phenomenon has already had some surprising effects on consumers.    

Twitter and Reddit are both aggrieved by the AI issue, and both have made controversial changes to lock down their platforms. Each changed its API to stop third parties from downloading large quantities of posts for free, something previously open to anyone. The moves are plainly intended to stop other companies from harvesting their data, but the consequences have been far broader. 

The API changes broke the third-party tools people used to access Twitter and Reddit. At one point, Twitter even appeared set to require public entities such as weather forecasters, transit agencies, and emergency services to pay a monthly fee for API access, but it backed down after a hailstorm of criticism. 

Elon Musk has made web scraping his favorite bogeyman in recent years, blaming several recent Twitter failures on the need to guard against data theft, even when the issues do not seem related. Over the weekend, Twitter limited the number of tweets a user could view per day, making the service almost unusable for many. 

Musk said the rate-limiting was a necessary response to "data scraping" and "system manipulation," but most IT experts believe it was more likely a crisis response born of mismanagement or incompetence than a deliberate fix. Twitter did not respond to Gizmodo's repeated requests for comment.

Meta's Ambitious Move: Launching a Dedicated App to Challenge Twitter's Dominance

 


Meta, Mark Zuckerberg's company, is reportedly developing a Twitter rival and wants public figures, including the Dalai Lama and Oprah Winfrey, to join it at launch. 

The standalone application is codenamed Project92, and a report by tech news site The Verge suggests the official title could be Threads.

During an internal meeting on Thursday, Meta's chief product officer, Chris Cox, told employees that the app, from the company behind Facebook and Instagram, was Meta's response to Twitter. 

In addition to letting users follow the accounts they already follow on Instagram, the app may also offer them the opportunity to bring over followers from decentralized platforms such as Mastodon. 

A Meta spokesperson says the platform is in development and will be released soon. According to Cox, it is currently being coded. The tech giant has set no specific release date, though several sources speculate the launch could happen as early as June. 

In recent weeks, screenshots of the company's upcoming app have surfaced online, providing a glimpse of how it might look shortly. The screenshots were shown internally to senior employees.

Sources within the company confirmed to the BBC that these screenshots are genuine. The new platform's layout will likely be familiar to people who use Twitter.

The screenshots show that Meta will let users log in with their Facebook or Instagram credentials, saving them the hassle of creating a new account. Users can share their thoughts in a Twitter-style prompt, and other users can like, comment on, and re-share (essentially retweet) their posts. It also appears users will be able to create a thread: a series of posts strung together in order. 

Moreover, according to The Verge, the app would integrate ActivityPub, the technology underpinning Mastodon, a decentralized network of thousands of servers that serves as a Twitter rival. The protocol allows social networks to interoperate with one another. In theory, users of the upcoming Meta app could move their accounts and followers over to other ActivityPub-supported apps, such as Mastodon. 
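ActivityPub interoperability works by having servers exchange JSON activity objects. As a rough sketch of the idea (the actor and content values below are hypothetical placeholders; the field names follow the W3C ActivityPub and ActivityStreams specifications), a server publishing a user's new post sends a "Create" activity wrapping a "Note" object:

```python
import json

# Hypothetical actor URL and note content, for illustration only.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",                                  # the action being announced
    "actor": "https://example.social/users/alice",     # who performed it
    "object": {
        "type": "Note",                                # the post itself
        "content": "Hello from an ActivityPub-compatible app!",
        "to": ["https://www.w3.org/ns/activitystreams#Public"],
    },
}

# Servers deliver this JSON to followers' inboxes on other servers.
print(json.dumps(activity, indent=2))
```

Because any ActivityPub server can parse this shape, a post made on one app can appear in followers' feeds on an entirely different one, which is what would let a Meta app and Mastodon interoperate.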

The app is expected to be based on Instagram: users will be able to log in with their Instagram username and password, and their followers, bio, and verification status will transfer over to the new app, according to earlier reports. 

The app aims to give creators a "stable place to build and grow their audience" in addition to providing a safe, easy-to-use, and reliable place to create. 

The short text-based network P92 could pose a far bigger challenge to Elon Musk's Twitter than either BlueSky or Mastodon has. Both of those rivals have attracted users disillusioned with Twitter, but their experience is a testament to how hard it is to build a social network, and its community, from scratch.

Meta's Instagram community, however, is enormous, boasting more than a billion users worldwide. This far surpasses Twitter's estimated 300 million users, although Twitter's numbers are no longer verifiable. 

Moreover, the report points out that the Twitter-inspired app will populate a user's information from Instagram's account system. A Meta spokesperson reportedly said on the sidelines of the meeting that the company has been working with prominent personalities such as Oprah Winfrey and the Dalai Lama, encouraging them to join the "Project 92" app and attract others to try it. 

Twitter has had a difficult time under Musk's leadership, although he insists its user numbers have not declined since he purchased the platform back in October. Several weeks after the purchase, Musk claimed a record peak of more than 250 million daily active users. Because Twitter relies almost entirely on advertising revenue, it is experiencing financial difficulties. 

The current advertiser boycott stems from several concerns, including the degradation of the platform's moderation standards and the botched relaunch of Twitter's subscription service, which let verified impersonator accounts proliferate on the platform. 

Meta has made a bold and ambitious move into the social media landscape with its announcement of a dedicated app to challenge Twitter's dominance. By reshaping how people engage in real-time conversations, Meta has the potential to disrupt the status quo. 

The battle for microblogging supremacy intensifies as users eagerly await the release of this new app. It promises to be an exciting and transformational time in online communication as the world becomes more integrated.

Elon Musk Withdraws Twitter from EU’s Disinformation Code of Practice


The European Union has confirmed that Twitter has withdrawn from the bloc’s voluntary code of practice against disinformation.

The news was announced on Twitter by the EU’s internal market commissioner, Thierry Breton, who warned that Twitter cannot escape the legal consequences to come.

“Twitter leaves EU voluntary Code of Practice against disinformation. But obligations remain. You can run but you can’t hide[…]Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25. Our teams will be ready for enforcement,” Breton wrote.

He was referring to the legal duties the platform must meet as a "very large online platform" (VLOP) under the EU's Digital Services Act (DSA).

European Union Disinformation Code

A number of tech firms, large and small, have signed up to the EU’s disinformation code, including Facebook’s parent company Meta, TikTok, Google, Microsoft, and Twitch.

The code, introduced in June of last year, seeks to reduce profiteering from fake news and disinformation, increase transparency, and curb the spread of bots and fraudulent accounts. Companies that sign the code are free to choose which commitments to make, such as working with fact-checkers or monitoring political advertising.

Since Elon Musk took over Twitter, the company’s moderation has been scaled back sharply, which critics say has led to an increase in the spread of disinformation. 

The social media company once had a dedicated team that worked to combat coordinated disinformation campaigns, but experts and former Twitter employees say the majority of those specialists have left their positions or been fired.

Last month, the BBC exposed hundreds of Russian and Chinese state propaganda accounts lurking on Twitter. Musk, however, claims there is now “less misinformation rather than more” under his ownership.

Beyond the voluntary code, the EU has brought in the Digital Services Act, a law that will compel firms to do more to tackle illegal content online.

From August 25, platforms with more than 45 million active users per month in the EU—including Twitter—must abide by the DSA's legislative requirements.

Under the law, Twitter will be required to implement measures to combat the spread of misinformation, give users a way to flag illegal content, and respond "expeditiously" to notifications.

On Friday, the AFP news agency quoted an EU Commission official as saying, “If (Elon Musk) doesn’t take the code seriously, then it’s better that he quits.”  

Twitter Launches End-to-End Encrypted Messaging Services


Twitter has become the newest social media platform to provide an encrypted messaging service.

End-to-end Encryption 

Direct messages on the platform can now be end-to-end encrypted: private and readable only by the sender and receiver. However, chief executive Elon Musk has warned Twitter users to “try it, but don’t trust it yet,” as this is only an early version of the service.

Only users of Twitter Blue or those connected to verified Twitter accounts are currently able to use the service, which is not yet available to the general public. Additionally, users can only send text and links in conversations for now; media attachments cannot yet be sent.

In a post on its support site, Twitter admitted it was "not quite there yet" with encryption: "While messages themselves are encrypted, metadata (recipient, creation time, etc) are not, and neither is any linked content[…]If someone - for example, a malicious insider, or Twitter itself as a result of a compulsory legal process - were to compromise an encrypted conversation, neither the sender or receiver would know," the post read. 
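To see why unencrypted metadata matters, consider a toy sketch of the situation Twitter describes. The XOR "cipher" below is a deliberately simplistic stand-in for real cryptography (genuine E2E systems use vetted ciphers such as AES or the Signal protocol), but it illustrates the point: even when the message body is unreadable without the shared key, the envelope fields — sender, recipient, creation time — remain visible to the server.

```python
import secrets
import time

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy one-time-pad XOR; symmetric, so the same call encrypts and decrypts.
    return bytes(k ^ b for k, b in zip(key, data))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))  # held only by sender and receiver

envelope = {
    "sender": "@alice",              # metadata: readable by the server
    "recipient": "@bob",             # metadata: readable by the server
    "created_at": int(time.time()),  # metadata: readable by the server
    "body": xor_cipher(key, message) # ciphertext: opaque without the key
}

# The server learns who talked to whom and when, but not what was said.
assert envelope["body"] != message
# The receiver, holding the key, recovers the plaintext.
assert xor_cipher(key, envelope["body"]) == message
```

This is exactly the gap Twitter's support post concedes: the conversation's existence and participants are exposed even when its contents are not.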

Online Safety Bill Criticized 

Musk signaled plans to turn Twitter into a "super-app" with many features when he purchased it in 2022. The West has no real equivalent of China's super-app WeChat, which handles everything from social media and restaurant ordering to payments and texting.

Since then, he has made a number of significant modifications to the social network, such as the addition of a subscription service and the elimination of the previous version of Twitter's blue tick badges, which were designed to combat the spread of disinformation.

Many Twitter users have long demanded more secure private messaging on the platform. But Mr. Musk's timing may prove awkward in the UK, where the government's Online Safety Bill would impose additional rules on social media companies, reportedly in an effort to safeguard children from abuse.

Messaging services WhatsApp and Signal have both criticized this part of the Online Safety Bill, which is presently making its way through Parliament.

They expressed concerns that the legislation might weaken end-to-end encryption, which is seen as a crucial tool by privacy activists and campaigners.

Following this, heads of the two messaging platforms signed a letter demanding a rethink over the bill. According to them, the bill, in its current form, opens the door to "routine, general and indiscriminate surveillance" of personal messages. In regards to this, a Home Office spokesperson stated, "The Online Safety Bill applies to all platforms, regardless of their design and functionality. Therefore, end-to-end encrypted services are in scope and will be required to meet their duties of care to users."

"We have made clear that companies should only implement end-to-end encryption if they can simultaneously uphold public safety. We continue to work with the tech industry to collaborate on mutually agreeable solutions that protect public safety without compromising security," he added.

This Twitter Bug is Making Users Secret Circle Tweets Public

 

Twitter launched Circle in August 2022, letting you restrict tweets to a chosen group of users without making your whole account private. Although the feature was designed to show your tweets only to a group smaller than your full follower list, a recent bug has reportedly exposed these private tweets to many people outside your Circle, including users who do not follow you.

Many users have observed that tweets intended for Twitter Circles are reaching all followers rather than just those in the Circle. Amanda Silberling of TechCrunch, who saw another person's ostensibly private tweet, notes that personal posts display under Twitter's newly launched "For You" area.

Because the feature is intended to let users tweet privately, many people use it to share sensitive thoughts and sentiments, as well as restricted media such as nude photographs. The flaw therefore poses a significant privacy risk to any account posting such tweets.

Twitter Circle has been buggy for months. Some users report that their Circle tweets have reached followers outside the Circle, while others say the tweets were visible even to people who don't follow them at all. Affected users discovered the flaw when strangers replied to tweets intended for their inner circle.

While it's difficult to pinpoint a specific cause for the glitch, it could be related to recent changes to Twitter's recommendation algorithm, which divided the feed into "For You" and "Following" timelines. As the names suggest, For You also displays tweets from users you don't follow.

The location of Elon Musk's private jet was made public on Twitter in October by the @ElonJet account. Musk compared the tracking to "doxing" and responded by suspending the account, along with the accounts of journalists who reported on it. 

When it comes to users' own privacy, however, Musk appears far less concerned, even for a feature that ostensibly guarantees it. Twitter Circle has allegedly been plagued by bugs for several months, yet the difficulties have drawn little attention from the company, which persistently promotes its paid tier, Twitter Blue.

Under EU legislation, the flaw could be considered a violation of user consent and a data breach, though any monetary penalty may face pushback from US authorities and legislators.


Twitter Returns After Two-Hour Outage Affecting Tweets

On Wednesday, Twitter experienced a service disruption that resulted in users being unable to access certain parts of the platform, specifically the "Following" and "For you" feed. These feeds displayed an error message rather than the expected content. 

The problem was widespread and affected users globally. The issue persisted for approximately two hours before being resolved by Twitter's engineering team. 

DownDetector, a website that tracks service outages, reported issues with Twitter at 10:00 GMT, but the problem was resolved by 12:00. In the UK alone, over 5,000 users reported problems to DownDetector within half an hour of the Twitter service outage. 

The root cause of the outage is still unknown, and it is unclear if Twitter's recent 200 staff layoffs on Monday played any role in the incident. Further investigation is needed to identify the underlying cause of the outage and prevent similar incidents from occurring in the future. 

Even though some parts of Twitter, like the feeds, were not working, users could still send tweets as usual. However, no one could see or interact with those tweets, which sent hashtags such as "#TwitterDown" and "Welcome To Twitter" to the top of the trending list.

Twitter has suffered several temporary outages in recent months. During a short outage in early February, some users were mistakenly told they had reached the daily limit for sending tweets. 

Twitter has experienced more issues under Mr. Musk's tenure, according to Alp Toker, director of internet outage tracker NetBlocks: "It started shortly before the Musk takeover itself. The main spike has happened after the takeover, with four to five incidents in a month - which was comparable to what used to happen in a year." 

More broadly, here is why social media platforms suffer service disruptions and sudden outages:

Social media networks can suffer shutdowns for a variety of reasons, including technical issues, cyber-attacks, policy violations, and government censorship. Technical issues such as server errors or bugs can cause social media networks to crash and become unavailable to users. 

In some cases, these issues can be quickly resolved, and the platform can be restored. However, if the issue is more severe, it may take longer to fix, and the platform may be down for an extended period. 

Cyber attacks such as Distributed Denial of Service (DDoS) attacks can also cause social media networks to go down. These attacks overwhelm a network with traffic, causing it to become unavailable to users. Cyber attackers may launch DDoS attacks for various reasons, such as to disrupt a particular organization or to extort money.
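A common first line of defense against traffic floods is rate limiting: cap how many requests each client may make, absorbing short bursts while dropping sustained excess. The token-bucket sketch below is a generic illustration of that idea (not any specific platform's implementation; real DDoS mitigation also involves upstream filtering and traffic scrubbing):

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # bucket starts full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True       # request served
        return False          # request dropped: bucket is empty

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(20)]  # a sudden burst of 20 requests
print(results.count(True))  # ~10: the burst beyond capacity is dropped
```

A flood of requests exhausts the bucket almost immediately, so an attacker's excess traffic is shed while legitimate clients, arriving at or below the refill rate, continue to be served.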

Meta Verified: New Paid Verification Service Launched for Instagram and Facebook


Instagram and Facebook’s parent company Meta has announced that users will now have to pay to acquire a blue tick verification for their accounts. 

Meta Verified will cost $11.99 a month on the web and $14.99 a month for iPhone users, and will be made available to users in Australia and New Zealand starting this week. 

According to Meta CEO Mark Zuckerberg, the move will improve security and authenticity across the company's social networking apps. It follows Twitter's launch of its premium Twitter Blue subscription, rolled out in November 2022. 

Meta’s paid subscription is not yet available for businesses, but interested individuals can subscribe and pay for verification. 

All You Need to Know About the “Blue Ticks” 

Badges, or “blue ticks,” are offered as a verification tool to high-profile users to signify their authenticity. According to a post on Meta's website: 

  • The subscription would grant paying users a blue badge, more visibility for their postings, protection from impersonators, and simpler access to customer service. 
  • Accounts that are already verified will not be affected, but smaller users who pay to become verified will gain visibility.
  • According to Meta, users' Facebook and Instagram usernames must match those on a government-issued ID document in order to receive verification, and they must have a profile picture with their face in it. 

Many other platforms, such as Reddit, YouTube, and Discord, offer similar subscription-based models. 

Although Mr. Zuckerberg stated in a post that it would happen "soon," Meta has not yet said when the feature will become available in other countries. 

"As part of this vision, we are evolving the meaning of the verified badge so we can expand access to verification and more people can trust the accounts they interact with are authentic," Meta's press release read. 

The announcement that Meta would charge for verification came after the company lost more than $600 billion in market value last year. 

The company has recorded year-over-year revenue declines for the last three quarters in a row, but the most recent report suggests circumstances may be starting to change. 

The move supports Meta's stated goal of focusing on "efficiency" to recover: the sudden fall in revenue forced the company to cut costs, laying off 13% of its workforce (11,000 employees) in November and consolidating office buildings.  

Can Twitter Fix its Bot Crisis with an API Paywall?

 


A newly updated Twitter policy covering the application programming interface (API) has just taken effect, according to researchers - and the changes will have a profound impact on social media bots, both benign (RSS integrations, for example) and malicious (political influence campaigns). 

A tweet from the Twitter development team announced that, starting February 9, the API would no longer be accessible for free. After some negative publicity, Elon Musk stepped in personally to amend the original terms of service: Twitter will continue to provide bots with a light, write-only API that allows them to produce high-quality content for free. 

In a computer program, APIs enable different parts of the program to communicate with each other. An API provides an interface through which two software programs interact, much as your computer provides an interface so you can easily use its many complex functions. Enterprises, educational institutions, and bot developers who build applications on Twitter are the most likely to need the API for management and analytics. 
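To make this concrete, here is a hedged sketch of what talking to Twitter's API looks like from a program. The endpoint shown (`POST /2/tweets`) is Twitter's documented v2 tweet-creation route, but the helper function, token value, and request shape below are our own illustration rather than official client code; actually sending the request requires valid OAuth credentials.

```python
import json

API_BASE = "https://api.twitter.com/2"

def build_tweet_request(text, bearer_token):
    """Assemble the pieces of a 'create tweet' API call without sending it."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/tweets",
        "headers": {
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": text}),
    }

req = build_tweet_request("Hello from a (hypothetical) bot!", "XXXX-token")
print(req["method"], req["url"])
# A real client would now hand `req` to an HTTP library such as `requests`.
```

Every bot, dashboard, and research scraper on the platform ultimately boils down to requests like this one - which is why putting the API behind a paywall touches so many different users at once.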

Whether Twitter settles on a limited free tier or a subscription-only model, the change risks displacing smaller, less well-funded developers and academics who have used free access to build bots, applications, and research that provide real value for users. 

It is also worth noting that Twitter has been targeted by malicious bots since its earliest days. Hackers increasingly use the platform to spread scams, and hostile regimes to spread fake news - and that's without mentioning the smaller-scale abuse that pervades influencer culture, marketing, and general trolling. 

So what are the pros and cons of a paid API as a fix for Twitter's influence campaigns and bot-driven problems? Several experts believe the new move is just a smokescreen covering up the real problem. 

Bad bots on Twitter 


According to a report published by the National Bureau of Economic Research in Cambridge, Mass., in May 2018, social media bots play a significant role in shaping public opinion, particularly at the local level. The report found that Twitter bots greatly influenced the 2016 US presidential election and the UK vote on leaving the European Union. Based on the data, the aggressive use of Twitter bots, along with the fragmentation of social media and the influence of sentiment, may all have contributed to the outcome of those votes. 

In the UK, the report estimates that the rising volume of automated pro-leave tweets explains 1.76 percentage points of the actual pro-leave vote share; in the US, the influence of bots could explain 3.23 percentage points of the actual vote. 

In that election, three critical swing states - Pennsylvania, Wisconsin, and Michigan - carried enough combined electoral votes to make the difference between victory and defeat, and each was won by a mere fraction of a percent.   

Even when bots are not swaying world history, they give hackers a useful tool for committing cybercrime at scale. Cybercriminals have been observed using Twitter bots to distribute spam and malicious links, as well as to amplify their content and profiles on the site. 

David Maynor, director of the Cybrary Threat Intelligence Team and chief technology officer for Dark Reading, explains in an interview that bots are an enormous problem for the internet: anonymous accounts taunt people so relentlessly that victims spend hours or days trying to prove them wrong. Bots also give astroturfing efforts a veneer of legitimacy they do not deserve. 

Astroturfing is a marketing strategy designed to create the impression that a product or service enjoys genuine, independent public support when it does not (by hiding sponsorship information, for instance, or presenting paid "reviews" as objective third-party assessments). 

Are Twitter's motives hidden? 


According to some observers, Twitter's real motive for placing its API behind a paywall has nothing to do with security; it could be something else entirely. Would a basic subscription fee really be enough to deter a cybercrime group, a lone scammer, or the Russian government - certainly one of the world's most active operators of social media influence campaigns? 

Many mobile app security platforms and cloud-based solutions can easily eliminate bot traffic from mobile apps, and Elon Musk is well aware of these technologies. Ted Miracco, CEO at Approov, says: "Bot traffic could be largely eliminated overnight if the proper technologies are implemented." 

Several methods and tools exist to help social media sites (and the owners and administrators of all kinds of websites) snuff out botnets. The key insight is that bots tend to behave predictably: they post regularly and only in certain ways. Specialized tools exploit this, starting from just a few suspect accounts and revealing the entire networks of bots behind them. 
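The "bots post predictably" heuristic can be sketched in a few lines: measure the gaps between an account's post timestamps and flag accounts whose gaps are suspiciously uniform. This is our own toy illustration of the idea - real detection tools combine many more signals - and the function name and threshold are invented for the example.

```python
from statistics import mean, pstdev

def looks_automated(post_times, max_cv=0.1):
    """Flag an account whose inter-post intervals are near-identical.

    post_times: ascending timestamps in seconds. A coefficient of
    variation (stdev / mean) below `max_cv` suggests machine-like
    regularity rather than human posting behavior.
    """
    if len(post_times) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    m = mean(gaps)
    if m == 0:
        return True  # simultaneous posts are a strong bot signal
    return pstdev(gaps) / m < max_cv

bot = [0, 600, 1200, 1800, 2400]    # posts exactly every 10 minutes
human = [0, 540, 2100, 2220, 7800]  # irregular bursts
print(looks_automated(bot), looks_automated(human))  # True False
```

A detector like this produces false positives (some humans schedule posts, too), which is why production systems pair timing analysis with content similarity, follower-graph structure, and account-age signals.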

One theory holds that naming and shaming may be critical to detecting and stopping malicious automated tweets: "This might not be popular, but it is the only way to stop bots and information operations. Accounts must be tied to real-life people and organizations." 

In this regard, Livnek adds: "Whilst this raises concerns about privacy and misuse of data, remember that these platforms are already mining all of the available data on the platforms to increase user engagement. Tying accounts to real-world identities wouldn't affect the platforms' data harvesting, but would instead enable them to stamp out bots and [astroturfing]." 

It seems a bit extreme to remove free API access before exhausting all the feasible security measures that were available. 

As Miracco argues, the reason is an open secret in Silicon Valley - the elephant in the room. Social media companies, he says, increasingly like their bots, because bots generate revenue for them. 

Twitter's business model rests on selling advertisements, so bots are counted as users for advertising purposes: they generate revenue in the same way real users do. More bots mean more money. 

Tesla CEO Elon Musk threatened to pull out of his plan to buy Twitter in January, reportedly after the revelation that a large portion of Twitter's alleged users are actually bots or other automated programs. As he transitioned from interested party to outright owner of the company, his mood may have changed. Miracco predicts that "revealing the problem now will result in a precipitous fall in traffic, so revenue must be discovered along the way to maintain the company's relevance along the path to reduced traffic," which he sees as the motivation behind the API paywall. His explanation is straightforward: the paywall is ostensibly meant to stop bots, but in truth it is being used to drive revenue. 

The paywall has only just been implemented. Whether it will solve Twitter's bot problem on its own, or merely line Musk's pockets, only time will tell. 

Twitter did not immediately respond to reporters' requests for comment.