Search This Blog

Powered by Blogger.

Blog Archive

Labels

About Me

Showing posts with label AI Chatbots. Show all posts

AI's Effect on Employment: Dukaan's Divisive Automation Approach

 

As businesses increasingly use AI to do jobs that have historically been managed by human workers, artificial intelligence is permeating several industries. 

Suumit Shah, the CEO of the e-commerce firm Dukaan in India, went to great lengths to automate processes. He made the contentious decision to fire 90% of his employees and replace them with chatbots powered by artificial intelligence in the summer of 2023. 

Though it sparked a heated ethical debate, this drastic measure was meant to lower operating costs and increase efficiency. A year later, Shah has shared his initial assessment of this decision, which he deems a success.

AI-Enhanced Customer Service 

Shah claims that incorporating AI into his organisation has significantly improved customer service. He observes that response times have plummeted from roughly two minutes to near-instantaneous responses. 

Furthermore, the time it takes to handle customer complaints has been dramatically reduced, from more than two hours to only a few minutes. According to him, these innovations have led to increased productivity and a better client experience. 

However, some argue that the human element in customer service cannot be replaced, and that such widespread automation risks setting a troubling precedent for the future employment market.

AI replacing human jobs 

The replacement of human employment by AI has long been a divisive issue, and science fiction frequently depicts a future in which AI dominates the workforce. This topic is gaining traction as AI technology advances and expands its boundaries. 

Some people see the introduction of AI as a positive change, a way to increase productivity by completing repetitive and laborious activities. Others, however, see it as an impending threat, warning that extensive automation could result in massive unemployment and difficulties adjusting to a transformed job environment.

The issue at Dukaan exemplifies how AI is increasingly changing industries. While firms benefit from lower costs and more efficiency, mass layoffs raise serious concerns about the long-term impact on employment. Finding a balance between implementing AI solutions and safeguarding job security is a pressing issue.

The Privacy Risks of ChatGPT and AI Chatbots

 


AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt-out through privacy settings, all shared information—from casual remarks to sensitive details like financial data—can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI

AI Data Breach Reveals Trust Issues with Personal Information

 


Insight AI technology is being explored by businesses as a tool for balancing the benefits it brings with the risks that are associated. Amidst this backdrop, NetSkope Threat Labs has recently released the latest edition of its Cloud and Threat Report, which focuses on using AI apps within the enterprise to prevent fraud and other unauthorized activity. There is a lot of risk associated with the use of AI applications in the enterprise, including an increased attack surface, which was already discussed in a serious report, and the accidental sharing of sensitive information that occurs when using AI apps. 

As users and particularly as individuals working in the cybersecurity as well as privacy sectors, it is our responsibility to protect data in an age when artificial intelligence has become a popular tool. An artificial intelligence system, or AI system, is a machine-controlled program that is programmed to think and learn the same way humans do through the use of simulation. 

AI systems come in various forms, each designed to perform specialized tasks using advanced computational techniques: - Generative Models: These AI systems learn patterns from large datasets to generate new content, whether it be text, images, or audio. A notable example is ChatGPT, which creates human-like responses and creative content. - Machine Learning Algorithms: Focused on learning from data, these models continuously improve their performance and automate tasks. Amazon Alexa, for instance, leverages machine learning to enhance voice recognition and provide smarter responses. - Robotic Vision: In robotics, AI is used to interpret and interact with the physical environment. Self-driving cars like those from Tesla use advanced robotics to perceive their surroundings and make real-time driving decisions. - Personalization Engines: These systems curate content based on user behavior and preferences, tailoring experiences to individual needs.  Instagram Ads, for example, analyze user activity to display highly relevant ads and recommendations. These examples highlight the diverse applications of AI across different industries and everyday technologies. 

In many cases, artificial intelligence (AI) chatbots are good at what they do, but they have problems detecting the difference between legitimate commands from their users and manipulation requests from outside sources. 

In a cybersecurity report published on Wednesday, researchers assert that artificial intelligence has a definite Achilles' heel that should be exploited by attackers shortly. There have been a great number of public chatbots powered by large language models, or LLMs for short, that have been emerging just over the last year, and this field of LLM cybersecurity is at its infancy stage. However, researchers have already found that these models may be susceptible to a specific form of attack referred to as "prompt injection," which occurs when a bad actor sneakily provides commands to the model without the model's knowledge. 

In some instances, attackers hide prompts inside webpages that the chatbot reads later, so that the chatbot might download malware, assist with financial fraud, or repeat dangerous misinformation that is passed on to people by the chatbot. 

What is Artificial Intelligence?


AI (artificial intelligence) is one of the most important areas of study in technology today. AI focuses on developing systems that mimic human intelligence, with the ability to learn, reason, and solve problems autonomously. The two basic types of AI models that can be used for analyzing data are predictive AI models and generative AI models. 

 A predictive artificial intelligence function is a computational capability that uses existing data to make predictions about future outcomes or behaviours based on historical patterns and data. A creative AI system, however, has the capability of creating new data or content that is similar to the input it has been trained on, even if there was no content set in the dataset before it was trained. 

 A philosophical discord exists between Leibnitz and the founding fathers of artificial intelligence in the early 1800s, although the conception of the term "artificial intelligence" as we use it today has existed since the early 1940s, and became famous with the development of the "Turing test" in 1950. It has been quite some time since we have experienced a rapid period of progress in the field of artificial intelligence, a trend that has been influenced by three major factors: better algorithms, increased networked computing power, and a greater capacity to capture and store data in unprecedented quantities. 

Aside from technological advancements, the very way we think about intelligent machines has changed dramatically since the 1960s. This has resulted in a great number of developments that are taking place today. Even though most people are not aware of it, AI technologies are already being utilized in very practical ways in our everyday lives, even though they may not be aware of it. As a characteristic of AI, after it becomes effective, it stops being referred to as AI and becomes mainstream computing as a result.2 For instance, there are several mainstream AI technologies on which you can take advantage, including having the option of being greeted by an automated voice when you call, or being suggested a movie based on your preferences. The fact that these systems have become a part of our lives, and we are surrounded by them every day, is often overlooked, even though they are supported by a variety of AI techniques, including speech recognition, natural language processing, and predictive analytics that make their work possible. 

What's in the news? 


There is a great deal of hype surrounding artificial intelligence and there is a lot of interest in the media regarding it, so it is not surprising to find that there are an increasing number of users accessing AI apps in the enterprise. The rapid adoption of artificial intelligence (AI) applications in the enterprise landscape is significantly raising concerns about the risk of unintentional exposure to internal information. A recent study reveals that, between May and June 2023, there was a weekly increase of 2.4% in the number of enterprise users accessing at least one AI application daily, culminating in an overall growth of 22.5% over the observed period. Among enterprise AI tools, ChatGPT has emerged as the most widely used, with daily active users surpassing those of any other AI application by a factor of more than eight. 

In organizations with a workforce exceeding 1,000 employees, an average of three different AI applications are utilized daily, while organizations with more than 10,000 employees engage with an average of five different AI tools each day. Notably, one out of every 100 enterprise users interacts with an AI application daily. The rapid increase in the adoption of AI technologies is driven largely by the potential benefits these tools can bring to organizations. Enterprises are recognizing the value of AI applications in enhancing productivity and providing a competitive edge. Tools like ChatGPT are being deployed for a variety of tasks, including reviewing source code to identify security vulnerabilities, assisting in the editing and refinement of written content, and facilitating more informed, data-driven decision-making processes. 

However, the unprecedented speed at which generative AI applications are being developed and deployed presents a significant challenge. The rapid rollout of these technologies has the potential to lead to the emergence of inadequately developed AI applications that may appear to be fully functional products or services. In reality, some of these applications may be created within a very short time frame, possibly within a single afternoon, often without sufficient oversight or attention to critical factors such as user privacy and data security. 

The hurried development of AI tools raises the risk that confidential or sensitive information entered into these applications could be exposed to vulnerabilities or security breaches. Consequently, organizations must exercise caution and implement stringent security measures to mitigate the potential risks associated with the accelerated deployment of generative AI technologies. 

Threat to Privacy


Methods of Data Collection 

AI tools generally employ one of two methods to collect data: Data collection is very common in this new tech-era. This is when the AI system is programmed to collect specific data. Examples include online forms, surveys, and cookies on websites that gather information directly from users. 

Another comes Indirect collection, this involves collecting data through various platforms and services. For instance, social media platforms might collect data on users' likes, shares, and comments, or a fitness app might gather data on users' physical activity levels. 

As technology continues to undergo ever-increasing waves of transformation, security, and IT leaders will have to constantly seek a balance between the need to keep up with technology and the need for robust security. Whenever enterprises integrate artificial intelligence into their business, key considerations must be taken into account so that IT teams can achieve maximum results. 

As a fundamental aspect of any IT governance program, it is most important to determine what applications are permissible, in conjunction with implementing controls that not only empower users but also protect the organization from potential risks. Keeping an environment in a secure state requires organizations to monitor AI app usage, trends, behaviours, and the sensitivity of data regularly to detect emerging risks as soon as they emerge.

A second effective way of protecting your company is to block access to non-essential or high-risk applications. Further, policies that are designed to prevent data loss should be implemented to detect sensitive information, such as source code, passwords, intellectual property, or regulated data, so that DLP policies can be implemented. A real-time coaching feature that integrates with the DLP system reinforces the company's policies regarding how AI apps are used, ensuring users' compliance at all times. 

A security plan must be integrated across the organization, sharing intelligence to streamline security operations and work in harmony for a seamless security program. Businesses must adhere to these core cloud security principles to be confident in their experiments with AI applications, knowing that their proprietary corporate data will remain secure throughout the experiment. As a consequence of this approach, sensitive information is not only protected but also allows companies to explore innovative applications of AI that are beyond the realm of mainstream tasks such as the creation of texts or images.  

Experts Warn: AI Chatbots a ‘Treasure Trove’ for Criminals, Avoid 'Free Accounts

 

Cybersecurity experts have informed The U.S. Sun that chatbots represent a "treasure trove" ripe for exploitation by criminals. The intelligence of artificial intelligence chatbots is advancing rapidly, becoming more accessible and efficient.

Because these AI systems mimic human conversation so well, there's a temptation to trust them and divulge sensitive information.

Jake Moore, Global Cybersecurity Advisor at ESET, explained that while the AI "models" behind chatbots are generally secure, there are hidden dangers.

"With companies like OpenAI and Microsoft leading the development of chatbots, they closely protect their networks and algorithms," Jake stated. "If these were compromised, it would jeopardize their business future."

A New Threat Landscape

Jake pointed out that the primary risk lies in the potential exposure of the information you share with chatbots.

The details you share during chatbot interactions are stored somewhere, similar to how texts, emails, or backup files are stored. The security of these interactions depends on how well they are stored. "The data you input into chatbots is stored on a server and, despite encryption, could become as valuable as personal search histories to cybercriminals," Jake explained.

"There is already a significant amount of personal information being shared. With the anticipated launch of OpenAI's search engine, even more sensitive data will be at risk in a new and attractive space for criminals."

Jake emphasized the importance of using chatbots that encrypt your conversations. Encryption scrambles data, making it unreadable to unauthorized users.

Fortunately, OpenAI guarantees that all ChatGPT conversations are end-to-end encrypted, whether you're a free or paid user. Avoid sharing personal thoughts and intimate details, as they could be accessed by others. 

However, some apps may charge for encryption or not offer it at all. Even encrypted conversations may be used to train chatbot models, although ChatGPT allows users to opt-out and delete their data.

"People must be careful about what they input into chatbots, especially in free accounts that don’t anonymize or encrypt data," Jake advised.

Further, security expert Dr. Martin J. Kraemer from KnowBe4 emphasized the need for caution.

"Never share sensitive information with a chatbot," Dr. Kraemer advised. "You may need to share certain details like a flight booking code with an airline chatbot, but that should be an exception. It's safer to call directly instead of using a chatbot. Never share your password or other authentication details with a chatbot. Also, avoid sharing personal thoughts and intimate details, as these could be accessed by others."

Google Messages' Gemini Update: What You Need To Know

 



Google's latest update to its Messages app, dubbed Gemini, has ignited discussions surrounding user privacy. Gemini introduces AI chatbots into the messaging ecosystem, but it also brings forth a critical warning regarding data security. Unlike conventional end-to-end encrypted messaging services, conversations within Gemini lack this crucial layer of protection, leaving them potentially vulnerable to access by Google and potential exposure of sensitive information.

This privacy gap has raised eyebrows among users, with some expressing concern over the implications of sharing personal data within Gemini chats. Others argue that this aligns with Google's data-driven business model, which leverages user data to enhance its AI models and services. However, the absence of end-to-end encryption means that users may inadvertently expose confidential information to third parties.

Google has been forthcoming about the security implications of Gemini, explicitly stating that chats within the feature are not end-to-end encrypted. Additionally, Google collects various data points from these conversations, including usage information, location data, and user feedback, to improve its products and services. Despite assurances of privacy protection measures, users are cautioned against sharing sensitive information through Gemini chats.

The crux of the issue lies in the disparity between users' perceptions of AI chatbots as private entities and the reality of these conversations being accessible to Google and potentially reviewed by human moderators for training purposes. Despite Google's reassurances, users are urged to exercise caution and refrain from sharing sensitive information through Gemini chats.

While Gemini's current availability is limited to adult beta testers, Google has hinted at its broader rollout in the near future, extending its reach beyond English-speaking users to include French-speaking individuals in Canada as well. This expansion signifies a pivotal moment in messaging technology, promising enhanced communication experiences for a wider audience. However, as users eagerly anticipate the platform's expansion, it becomes increasingly crucial for them to proactively manage their privacy settings. By taking the time to review and adjust their preferences, users can ensure a more secure messaging environment tailored to their individual needs and concerns. This proactive approach empowers users to navigate digital communication with confidence and peace of mind.

All in all, the introduction of Gemini in Google Messages underscores the importance of user privacy in the digital age. While technological advancements offer convenience, they also necessitate heightened awareness to safeguard personal information from potential breaches.




Character.ai's AI Chatbots Soar: Celebrities, Therapists, and Entertainment, All in One Platform

 

Character.ai, a widely recognized platform, allows users to construct chatbots resembling a diverse array of personalities, including the likes of Vladimir Putin, Beyoncé, Super Mario, Harry Potter, and Elon Musk. These chatbots, powered by the same AI technology as ChatGPT, have garnered immense popularity, with millions engaging in conversations with these AI personalities. Described as "someone who assists with life difficulties," the bot has gained popularity for its role in aiding individuals facing various challenges. 

On the other hand, the Psychologist bot stands out for its remarkable demand, surpassing that of its counterparts. This bot, designed to provide psychological insights and support, has captured the attention and interest of users, making it a notable choice within the realm of AI-driven conversation. In a little over a year since its inception, the bot has amassed a whopping 78 million messages, with 18 million exchanged just since November. 

The mind behind the account goes by the username Blazeman98. According to Character.ai, the website sees a daily influx of 3.5 million visitors. However, the platform did not provide details on the number of unique users engaging with the bot. The company from the San Francisco Bay area downplayed its popularity, suggesting that users primarily enjoy role-playing for entertainment. 

Among the most favoured bots are those embodying anime or computer game characters, with Raiden Shogun leading the pack with a whopping 282 million messages. Despite the diverse array of characters, few can match the popularity of the Psychologist bot. Notably, there are a total of 475 bots with names containing "therapy," "therapist," "psychiatrist," or "psychologist," capable of engaging in conversations in multiple languages. 

Among the available bots are those designed for entertainment or fantasy therapy, such as Hot Therapist. However, the ones gaining the most popularity are those focused on mental health support. For instance, the Therapist bot has garnered 12 million messages, while Are you feeling OK? has received a substantial 16.5 million messages. 

The person behind Blazeman98 is Sam Zaia, a 30-year-old from New Zealand. He did not plan for the bot to become popular or be used by others. According to Sam, he started receiving messages from people saying they found comfort in it and that it positively affected them. As a psychology student, Sam used his knowledge to train the bot. He talked to it and shaped its responses based on principles from his degree, focusing on common mental health conditions like depression and anxiety.

Securing Generative AI: Navigating Risks and Strategies

The introduction of generative AI has caused a paradigm change in the rapidly developing field of artificial intelligence, posing both unprecedented benefits and problems for companies. The need to strengthen security measures is becoming more and more apparent as these potent technologies are utilized in a variety of areas.
  • Understanding the Landscape: Generative AI, capable of creating human-like content, has found applications in diverse fields, from content creation to data analysis. As organizations harness the potential of this technology, the need for robust security measures becomes paramount.
  • Samsung's Proactive Measures: A noteworthy event in 2023 was Samsung's ban on the use of generative AI, including ChatGPT, by its staff after a security breach. This incident underscored the importance of proactive security measures in mitigating potential risks associated with generative AI. As highlighted in the Forbes article, organizations need to adopt a multi-faceted approach to protect sensitive information and intellectual property.
  • Strategies for Countering Generative AI Security Challenges: Experts emphasize the need for a proactive and dynamic security posture. One crucial strategy is the implementation of comprehensive access controls and encryption protocols. By restricting access to generative AI systems and encrypting sensitive data, organizations can significantly reduce the risk of unauthorized use and potential leaks.
  • Continuous Monitoring and Auditing: To stay ahead of evolving threats, continuous monitoring and auditing of generative AI systems are essential. Organizations should regularly assess and update security protocols to address emerging vulnerabilities. This approach ensures that security measures remain effective in the face of rapidly evolving cyber threats.
  • Employee Awareness and Training: Express Computer emphasizes the role of employee awareness and training in mitigating generative AI security risks. As generative AI becomes more integrated into daily workflows, educating employees about potential risks, responsible usage, and recognizing potential security threats becomes imperative.
Organizations need to be extra careful about protecting their digital assets in the age of generative AI. Businesses may exploit the revolutionary power of generative AI while avoiding associated risks by adopting proactive security procedures and learning from instances such as Samsung's ban. Navigating the changing terrain of generative AI will require keeping up with technological advancements and adjusting security measures.

Bing Chat Rebrands to ‘Copilot’: What is New?


Bing Chat has been renamed as ‘Copilot,’ according to an announcement made during Microsoft Ignite 2023.

But, is the name change the only new thing the users will be introduced with? The answer could be a little ambiguous. 

What is New with Bing Chat (now Copilot)? Honestly, there are no significant changes in Copilot, previously called Bing Chat. “Refinement” might be a more appropriate term to characterize Microsoft's perplexing activities. Let's examine three modifications that Microsoft made to its AI chatbot.

Here, we are listing some of these refinements:

1. A New Home

Copilot, then Bing Chat, now has its own standalone webpage. One can access this webpage at https://copilot.microsoft.com

This means that the user will no longer be required to visit Bing in order to access Microsoft’s AI chat experience. One can simply visit the aforementioned webpage, without Bing Search and other services interfering with your experience. Put differently, it has become much more "ChatGPT-like" now. 

Notably, however, the link seems to only function with desktop versions of Microsoft Edge and Google Chrome. 

2. A Minor Makeover

While Microsoft has made certain visual changes in the rebranded Bing Chat, they are however insignificant. 

This new version has smaller tiles but still has the same prompts: Write, Create, Laugh, Code, Organize, Compare, and Travel.

However, the users can still choose the conversation style, be it Creative, Balanced and Precise. The only big change, as mentioned before, is the new name (Copilot) and the tagline: "Your everyday AI companion." 

Though the theme colour switched from light blue to an off-white, the user interface is largely the same.

Users can access DALLE-3 and GPT-4 for free with Bing Chat, which is now called Copilot. But in order to utilize Copilot on platforms like Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a membership fee for what Microsoft refers to as "Copilot for Microsoft 365."

3. Better Security for Enterprise Users

With Copilot, users can access DALLE-3 and GPT-4 for free. But in order to utilize Copilot on platforms like Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a membership fee for what Microsoft refers to as "Copilot for Microsoft 365."

This way, users who have had a Bing Chat Enterprise account, or pay for a Microsoft 365 license, will get an additional benefit of more data protection./ Copilot will be officially launched on December 1. 

What Stayed the Same? 

Microsoft plans to gradually add commercial data protection for those who do not pay. However, Copilot currently stores information from your interactions and follows the same data policy as the previous version of Bing Chat for free users. Therefore, the name and domain change is the only difference for casual, non-subscribing Bing Chat users. OpenAI's GPT-4 and DALL-E 3 models are still available, but users need to be careful about sharing too much personal data with the chatbot.

In summary, there is not much to be excited about for free users: Copilot is the new name for Bing Chat, and it has a new home.