
DeepSeek’s Data Use Raises Regulatory Concerns

 


DeepSeek, the Chinese artificial intelligence company that astonished the world by seemingly rivaling OpenAI's successful chatbot ChatGPT at a fraction of the cost, has been surrounded by controversy. Now regulators and privacy advocates are raising questions about the safety of users' data since the company launched its service.

In Italy, regulators have blocked the app from both the Apple App Store and Google Play Store while they probe what data the company collects and how it is stored. After DeepSeek failed to address concerns about its privacy policy, the Italian data protection authority, the Garante, ordered the company to block its chatbot within Italy on Thursday.

DeepSeek was founded in Hangzhou, China, in 2023 by Liang Wenfeng, and has grown quickly since. In 2016, Liang had co-founded the $7 billion hedge fund group High-Flyer with two university classmates. This week, the Italian data protection watchdog asked DeepSeek for information on what personal data it collects, from what sources, for what purposes, and under what legal basis.

DeepSeek, an artificial intelligence startup based in China, has received enormous attention in recent months as a result of its rapid growth, though many corporate security teams have raised concerns about the startup. Its free AI-powered application became the most-downloaded app on the U.S. iOS App Store within weeks of its launch, surpassing OpenAI's ChatGPT.

While the company's popularity has skyrocketed, it has also drawn the attention of cybersecurity experts and regulators, raising alarms about data security, intellectual property risks, and regulatory compliance. DeepSeek's privacy policy states that the service collects a variety of information about its users, including chat and search query history, device information, keystroke patterns, IP addresses, internet connection details, and activity from other apps, as well as information about their activity on the DeepSeek service.

Similar data collection practices are employed by other AI services, such as OpenAI's ChatGPT, Anthropic's Claude, and Perplexity, and popular social media apps such as Facebook, Instagram, and X also record a great deal of user data. Regulators have sometimes questioned this kind of data gathering as well. In January, the company unveiled DeepSeek-R1, a new model behind its free AI-powered chatbot, whose look and feel are very similar to that of ChatGPT from California-based OpenAI.

A chatbot of this kind is a computer program that simulates human-like conversation: the user asks questions, and the bot responds using patterns learned from the internet text it was trained on. There are many possible uses for these programs, including solving mathematics problems, drafting texts such as emails and documents, translating, and even writing code, among many other possibilities.
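As an illustration of how such a chatbot works under the hood, here is a minimal sketch of the request/response loop using the OpenAI Python client; the model name is a placeholder, and any hosted LLM API (DeepSeek's included) follows the same basic pattern.

```python
# Minimal chatbot loop: send the user's question plus prior turns to a
# hosted language model and print the reply. Model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    question = input("You: ")
    if not question:
        break
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in any chat model
        messages=history,     # full history gives the bot conversational context
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```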

In the view of experts, DeepSeek's risks go beyond those of TikTok, which has been under scrutiny and could be banned at some point. "DeepSeek raises all the problems that TikTok has raised plus more," said Stewart Baker, a Washington-based attorney and former official of the National Security Agency and the Department of Homeland Security. To get high levels of accuracy from these advanced AI models, users need to entrust them with highly sensitive personal and business information.

If users' data can be accessed by an adversary, the intelligence implications are significant. DeepSeek's AI technology is increasingly being used for business research, personal inquiries, and content generation, producing an enormous amount of valuable data. A study by the security firm Feroot suggests DeepSeek's login system uses fingerprinting techniques, which tech firms widely use to track users' devices in order to improve security and target advertisements.
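Fingerprinting in general works by combining several individually innocuous device signals into one stable identifier. The sketch below illustrates the generic technique only; it is not DeepSeek's actual code, and the signal names are examples.

```python
# Illustrative device fingerprint: hash a set of stable device signals
# into one identifier that survives cookie clearing. Not DeepSeek's code.
import hashlib

def fingerprint(signals: dict) -> str:
    # Sort keys so the same device always produces the same hash.
    canonical = "|".join(f"{key}={signals[key]}" for key in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device_signals = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "2560x1440x24",
    "timezone": "America/New_York",
    "language": "en-US",
    "gpu_renderer": "ANGLE (NVIDIA GeForce RTX 3060)",
}
print(fingerprint(device_signals))  # stable ID usable for security checks or ad targeting
```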

Although there is no conclusive proof of Chinese government involvement, the links to China Mobile's identity and authentication infrastructure suggest possible Chinese state involvement. DeepSeek has not responded to requests for comment, leaving critical questions unanswered about how closely it collaborates with China Mobile and how safe its users' data is. Given the increased scrutiny of Chinese-controlled digital platforms, regulators may soon take further action against DeepSeek, mirroring efforts already directed at TikTok.

The Biden administration tightened export regulations in an effort to slow China's development of artificial intelligence. DeepSeek's success raises several questions about the effectiveness of those controls, as well as about the state of Washington and Beijing's broader technology battle. Samm Sacks, a researcher at Yale who studies Chinese cybersecurity, said that DeepSeek could pose a significant national security threat to the United States.

According to public reports, no Chinese officials are currently trying to obtain personal data about U.S. citizens through DeepSeek. As in the debate over TikTok, the fears about China boil down to the mere possibility that Beijing may make use of Americans' data for its own purposes, and that is enough to trigger concern. Beyond that assessment of what DeepSeek might mean for Americans' data, Sacks pointed out two other major factors to consider. First, the Chinese government already possesses an enormous amount of data on Americans.

In December, Chinese hackers broke into the U.S. Treasury Department's computer systems, and over the past year Chinese hackers have infiltrated U.S. telecom companies to spy on American texts and calls. Second, a vast web of digital data brokers routinely buys and sells massive amounts of data on Americans.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you're informed and protected from unintended data usage by AI technologies.

Stop Using AI for Medical Diagnosis: Experts


Artificial intelligence (AI) has become an important tool in many spheres of life, including education, work, and medical research. However, concerns have been raised about patients turning to AI for medical advice on their own, and the issue has become a hot topic. Today it is easy to sit at your desk and, with a few taps on your phone or computer, look up almost anything, including a diagnosis for your own health concerns. Experts, however, warn users against relying on AI for medical diagnosis. Here is why.

Using AI for Medical Queries

AI tools like ChatGPT from OpenAI or Copilot from Microsoft are built on language models trained on a huge spectrum of internet text. You can ask questions, and the chatbot responds based on learned patterns. The responses can be helpful, but they are not always accurate.
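To see why the answers are not reliable, consider this hedged sketch, in which the model name and question are placeholders: sampling the same medical question several times can produce several different, equally confident-sounding answers, none grounded in a patient's actual history.

```python
# Ask the same medical question three times; sampling (temperature > 0)
# means the model can return three different plausible-sounding answers.
from openai import OpenAI

client = OpenAI()
question = "What could cause persistent headaches and blurred vision?"

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=1.0,      # generates from learned patterns, not a verified source
        messages=[{"role": "user", "content": question}],
    )
    print(f"Answer {attempt + 1}:", response.choices[0].message.content)
# None of these outputs is a diagnosis; only a clinician can provide that.
```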

The incorporation of AI into healthcare raises substantial regulatory and ethical concerns. There are significant gaps in the regulation of AI applications, which raises questions regarding liability and accountability when AI systems deliver inaccurate or harmful advice.

No Personalized Care

One of the main drawbacks of AI in medicine is the lack of individualized care. AI systems use large databases to discover patterns, but healthcare is highly individualized. AI cannot comprehend the finer nuances of a patient's history or condition, which are frequently required for accurate diagnosis and successful treatment planning.

Bias and Data Quality

The efficacy of AI is strongly contingent on the quality of the data used to train it. AI's results can be misleading if the data is skewed or inaccurate. For example, if an AI model is largely trained on data from a single ethnic group, its performance may suffer when applied to people from other backgrounds. This can result in misdiagnoses or improper medical recommendations.

Misuse

The ease of access to AI for medical advice may result in misuse or misinterpretation of the info it delivers. Quick, AI-generated responses may be interpreted out of context or applied inappropriately by persons without medical experience. Such events have the potential to delay necessary medical intervention or lead to ineffective self-treatment.

Privacy Concerns

Using AI in healthcare usually requires entering sensitive personal information. This creates serious privacy and data security concerns, as breaches could allow unauthorized access to or misuse of user data.

Microsoft's Super Bowl Pitch: We Are Now an AI Firm

 

Microsoft returned to the Super Bowl on Sunday with a commercial for its AI-powered chatbot, highlighting the company's resolve to shake off its reputation as a stuffy software developer and refocus its offerings on the potential of artificial intelligence.

The one-minute ad, uploaded to YouTube on Thursday of last week, shows users accessing Copilot, the AI assistant that Microsoft released a year ago, via their smartphones. The app is shown helping users automate a range of tasks, from generating snippets of computer code to creating digital artwork.

Microsoft's Super Bowl commercial, its first appearance in the game in four years, showcased the company's efforts to reposition itself as an AI-focused company. The tech behemoth invested $1 billion in OpenAI in 2019 and has since put billions more into refining its AI capabilities, incorporating the technology into staples like Microsoft Word, Excel, and Azure.

The tech giant now wants customers and companies looking for an AI boost to use its services instead of rivals like Google, which on Thursday revealed an update to its AI program. 

Wedbush Securities analyst Dan Ives told CBS MoneyWatch that the outcome of the AI race will have a significant impact on multinational tech businesses, with the industry expected to reach $1.3 trillion by 2032. "This is no longer your grandfather's Microsoft … and the Super Bowl is a unique time to further change perceptions," he stated.

For 30 seconds of airtime during this year's game, advertisers paid over $7 million, with over 100 million viewers predicted. In a blog post last week, Microsoft Consumer Chief Marketing Officer Yusuf Mehdi announced that the Copilot app is receiving an update "coinciding with the launch of our Super Bowl ad." The update includes a "cleaner, sleeker look" and suggested prompts that could help users take advantage of the app's AI capabilities. 

Thus far, Microsoft's strategy has proven successful. Its cloud-based revenue increased by 24% to $33.7 billion in the most recent quarter, aided by the incorporation of AI into its Azure cloud computing service.

Navigating the Future: Global AI Regulation Strategies

As technology advances quickly, governments all over the world are becoming increasingly concerned about artificial intelligence (AI) regulation. Two noteworthy recent breakthroughs in AI legislation have surfaced, providing insight into the measures governments are implementing to guarantee the proper advancement and application of AI technologies.

The first path is marked by the United States, where on October 30, 2023, President Joe Biden signed an executive order titled "The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." The order emphasizes the need for clear guidelines and ethical standards to govern AI applications. It acknowledges the transformative potential of AI while emphasizing the importance of addressing potential risks and ensuring public trust. The order establishes a comprehensive framework for the federal government's approach to AI, emphasizing collaboration between various agencies to promote innovation while safeguarding against misuse.

Meanwhile, the European Union has taken a proactive stance with the EU AI Act, the first regulation dedicated to artificial intelligence. Introduced on June 1, 2023, this regulation is a milestone in AI governance. It classifies AI systems into different risk categories and imposes strict requirements for high-risk applications, emphasizing transparency and accountability. The EU AI Act represents a concerted effort to balance innovation with the protection of fundamental rights, fostering a regulatory environment that aims to set a global standard for AI development.

Moreover, in the pursuit of responsible AI development, companies like Anthropic have also contributed to the discourse. They have released a document titled "Responsible Scaling Policy 1.0," which outlines their commitment to ethical considerations in the development and deployment of AI technologies. This document reflects the growing recognition within the tech industry of the need for self-regulation and ethical guidelines to prevent the unintended consequences of AI.

As the global community grapples with the complexities of AI regulation, it is evident that a nuanced approach is necessary. These regulatory frameworks strive to strike a balance between fostering innovation and addressing potential risks associated with AI. In the words of President Biden, "We must ensure that AI is developed and used responsibly, ethically, and with public trust." The EU AI Act echoes this sentiment, emphasizing the importance of human-centric AI that respects democratic values and fundamental rights.

A common commitment to maximizing AI's advantages while minimizing its risks is reflected in how regulation of the technology is developing. These legislative measures, born of partnerships between governments and industry groups, pave the way for a future where AI is used responsibly and ethically, ensuring that technology advances humankind rather than working against it.


Bing Chat Rebrands to ‘Copilot’: What is New?


Bing Chat has been renamed 'Copilot,' according to an announcement made during Microsoft Ignite 2023.

But is the name change the only new thing users will be introduced to? The answer is a little ambiguous.

What is New with Bing Chat (now Copilot)? Honestly, there are no significant changes in Copilot, previously called Bing Chat. “Refinement” might be a more appropriate term to characterize Microsoft's perplexing activities. Let's examine three modifications that Microsoft made to its AI chatbot.

Here, we are listing some of these refinements:

1. A New Home

Copilot, formerly Bing Chat, now has its own standalone webpage, which can be accessed at https://copilot.microsoft.com

This means users are no longer required to visit Bing to access Microsoft's AI chat experience. They can simply visit the webpage above, without Bing Search and other services interfering with the experience. Put differently, it has become much more "ChatGPT-like."

Notably, however, the link seems to only function with desktop versions of Microsoft Edge and Google Chrome. 

2. A Minor Makeover

While Microsoft has made some visual changes in the rebranded Bing Chat, they are insignificant.

This new version has smaller tiles but still has the same prompts: Write, Create, Laugh, Code, Organize, Compare, and Travel.

However, users can still choose the conversation style: Creative, Balanced, or Precise. The only big change, as mentioned before, is the new name (Copilot) and the tagline: "Your everyday AI companion."

Though the theme colour switched from light blue to an off-white, the user interface is largely the same.


3. Better Security for Enterprise Users

With Copilot, users can access DALL-E 3 and GPT-4 for free. But to use Copilot inside Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a subscription fee for what Microsoft calls "Copilot for Microsoft 365."

Users who have had a Bing Chat Enterprise account, or who pay for a Microsoft 365 license, will get the additional benefit of stronger data protection. Copilot will officially launch on December 1.

What Stayed the Same? 

Microsoft plans to gradually add commercial data protection for those who do not pay. However, Copilot currently stores information from your interactions and follows the same data policy as the previous version of Bing Chat for free users. Therefore, the name and domain change is the only difference for casual, non-subscribing Bing Chat users. OpenAI's GPT-4 and DALL-E 3 models are still available, but users need to be careful about sharing too much personal data with the chatbot.

In summary, there is not much to be excited about for free users: Copilot is the new name for Bing Chat, and it has a new home.  

Google's Chatbot Bard Aims for the Top, Targeting YouTube and Search Domains

 


There has been a lot of excitement surrounding Google's AI chatbot Bard, a competitor to OpenAI's ChatGPT, which is set to become "more widely available to the public in the coming weeks." However, at least one expert has pointed out that Bard made a factual error in its demo.

Amid its AI competition with OpenAI, the Microsoft-backed company that created ChatGPT, Google has now integrated its chatbot Bard into apps like YouTube, Gmail, and Drive, according to a company announcement published Tuesday.

At the Reuters NEXT conference in New York on Thursday, a Google executive said the company's experimental chatbot Bard represents a path to another Google product reaching two billion users. In an interview with TechCrunch, Google's product lead Jack Krawczyk said Bard has laid the foundation for Google to attract even more customers by letting consumers brainstorm and get information with the new AI.

It is possible, for instance, to ask Bard to plan an upcoming trip, complete with flight options on your choice of airline. Users can also ask the tool to summarize meeting notes from Google Drive documents they have recently uploaded. Several improvements arrive with Tuesday's update, including connections to Google's other services.

The update also lets the chatbot converse in more languages, adds fact-checking functions, and includes a broader upgrade to the large language model that underpins the tool. It is the biggest update to Bard since the chatbot was first introduced to the public nearly six months ago.

Tech giants Google, Microsoft, and ChatGPT creator OpenAI are racing against one another to roll out increasingly sophisticated consumer-facing artificial intelligence, hoping to convince users that the technology is more than just a gimmick.

Google, which issued an internal code red when OpenAI beat it to the release of an AI chatbot, is now believed to be using its other widely used software to make Bard more useful. Bard has not received nearly the same amount of attention as ChatGPT.

According to data from Similarweb, a web analytics company, ChatGPT had nearly 1.5 billion desktop and mobile visits in August, substantially more than Google's A.I. tool and other competitors. Bard recorded just under 200 million desktop and mobile visits over the same period.

In an interview, Jack Krawczyk, Google's product lead for Bard, stated that Google was aware of the limitations that had caused the chatbot to not appeal to as many people as it should have. Users had told Mr. Krawczyk that the product was neat and novel, but that it did not integrate very well with their personal lives. 

Earlier this month, Google released what it calls Bard Extensions, its counterpart to the ChatGPT plug-ins that OpenAI announced in March, which allow ChatGPT to work with updated information from third-party companies such as Expedia, Instacart, and OpenTable, as well as their own web services and voice apps.

With the new updates, Google is trying to replicate some of its search engine's capabilities inside Bard, adding Flights, Hotels, and Maps so users can conduct travel and transportation research from the chatbot. Bard also moves closer to being a personalized assistant, letting users ask which emails they missed and which points in a document matter most to them.

Built on Google's large language model, an artificial intelligence system trained on vast amounts of data, Bard has been able to assist students with drafting essays or planning a friend's baby shower.

With these new extensions, Bard will now draw information from a host of Google services as well, retrieving information from YouTube, Google Maps, Flights, and Hotels. According to Google, users can ask Bard for things like "Show me how to write a good best man speech and show me YouTube videos about it for inspiration," or for travel suggestions complete with driving directions.

Users can disable the Bard extensions at any time. They can also link their Gmail, Docs, and Google Drive accounts to the tool so it can help them analyze and manage their data.

For instance, the tool might help with queries such as "Find the most recent lease agreement in my Drive and calculate how much my security deposit was," Google said in a statement. The company added that it will not use personal Google Workspace data to train Bard or to serve targeted advertising, and that users can withdraw their permission at any time if they do not want Bard to access their personal information.

By giving Bard access to a wealth of personal information and popular services such as Gmail, Google Maps, and YouTube, Google is, in theory, making the tool more helpful for its users and earning their confidence. Google posits that a person planning a group trip to the Grand Canyon could use Bard to find dates that suit everyone, pull up flight and hotel options, get directions from Maps, and draw on the variety of useful videos available on YouTube.

AI Chatbots Have Extensive Knowledge About You, Whether You Like It or Not

 

Researchers from ETH Zurich have recently released a study highlighting how artificial intelligence (AI) tools, including generative AI chatbots, have the capability to accurately deduce sensitive personal information from people based solely on their online typing. This includes details related to race, gender, age, and location. This implies that whenever individuals engage with prompts from ChatGPT, they may inadvertently disclose personal information about themselves.

The study's authors express concern over potential exploitation of this functionality by hackers and fraudsters in social engineering attacks, as well as the broader apprehensions about data privacy. While worries about AI capabilities are not new, they appear to be escalating in tandem with technological advancements. 

Notably, this month has witnessed significant security concerns, with the US Space Force prohibiting the use of platforms like ChatGPT due to data security apprehensions. In a year rife with data breaches, anxieties surrounding emerging technologies like AI are somewhat inevitable. 

The research on large language models (LLMs) aimed to investigate whether AI tools could intrude on an individual's privacy by extracting personal information from their online writings. 

To test this, the researchers constructed a dataset from 520 genuine Reddit profiles and demonstrated that LLMs accurately inferred various personal attributes, including job, location, gender, and race, categories typically safeguarded by privacy regulations. Mislav Balunovic, a PhD student at ETH Zurich and co-author of the study, remarked, "The key observation of our work is that the best LLMs are almost as accurate as humans, while being at least 100x faster and 240x cheaper in inferring such personal information."

This revelation raises significant privacy concerns, particularly because such information can now be inferred at a previously unattainable scale. With this capability, hackers could target users by posing seemingly innocuous questions. Balunovic further emphasized, "Individual users, or basically anybody who leaves textual traces on the internet, should be more concerned as malicious actors could abuse the models to infer their private information."

The study evaluated four models in total, with GPT-4 achieving an 84.6% accuracy rate and emerging as the top performer in inferring personal details. Meta's Llama 2, Google's PaLM, and Anthropic's Claude were also tested and trailed close behind.

An example from the study showed how the researchers' model deduced that a Reddit user hailed from Melbourne based on their use of the term "hook turn," a phrase commonly used in Melbourne to describe a traffic maneuver. This underscores how seemingly benign information can yield meaningful deductions for LLMs.
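A minimal sketch of what such an inference prompt might look like appears below; the wording, the sample comments, and the model call are illustrative, not the study's actual code.

```python
# Illustrative attribute-inference prompt in the spirit of the ETH Zurich
# study: ask a model to guess personal attributes from innocuous comments.
from openai import OpenAI

client = OpenAI()

comments = [
    "stuck waiting for the hook turn again on my way to work",
    "the tram was packed this morning, as usual",
]

prompt = (
    "Read these social media comments and guess the author's city, "
    "age range, and gender. Explain which clues you used.\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

response = client.chat.completions.create(
    model="gpt-4",  # the study found GPT-4 to be the top performer
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
# Phrases like "hook turn" plus tram references point strongly to Melbourne.
```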

There was a modest acknowledgment of privacy concerns when Google's PaLM declined to respond to about 10% of the researchers' privacy-invasive prompts. Other models exhibited similar behavior, though to a lesser extent.

Nonetheless, this response falls short of significantly alleviating concerns. Martin Vechev, a professor at ETH Zurich and a co-author of the study, noted, "It's not even clear how you fix this problem. This is very, very problematic."

As the use of LLM-powered chatbots becomes increasingly prevalent in daily life, privacy worries are not a risk that will dissipate with innovation alone. All users should be mindful that the threat of privacy-invasive chatbots is evolving from 'emerging' to 'very real'.

Earlier this year, a study demonstrated that AI could accurately decipher text with a 93% accuracy rate based on the sound of typing recorded over Zoom. This poses a challenge for entering sensitive data like passwords.

While this recent development is disconcerting, it is crucial for individuals to be informed so they can take proactive steps to protect their privacy. Being cautious about the information provided to chatbots and recognizing that it may not remain confidential can enable individuals to adjust their usage and safeguard their data.