Google's latest update to its Messages app, which adds the Gemini AI chatbot, has ignited discussion about user privacy. Gemini brings AI chat into the messaging ecosystem, but it also comes with a critical warning about data security: unlike conventional end-to-end encrypted messaging services, conversations with Gemini lack this crucial layer of protection, leaving them accessible to Google and potentially exposing sensitive information.
This privacy gap has raised eyebrows among users, some of whom are concerned about the implications of sharing personal data in Gemini chats. Others argue that it simply aligns with Google's data-driven business model, which leverages user data to enhance its AI models and services. Either way, the absence of end-to-end encryption means that users may inadvertently expose confidential information to third parties.
Google has been forthcoming about the security implications of Gemini, explicitly stating that chats within the feature are not end-to-end encrypted. Additionally, Google collects various data points from these conversations, including usage information, location data, and user feedback, to improve its products and services. Despite assurances of privacy protection measures, users are cautioned against sharing sensitive information through Gemini chats.
The crux of the issue lies in the disparity between users' perception of AI chatbots as private and the reality that these conversations are accessible to Google and may be reviewed by human moderators for training purposes. It is this gap between expectation and reality that makes caution warranted.
While Gemini's availability is currently limited to adult beta testers, Google has hinted at a broader rollout in the near future, extending beyond English speakers to include French-speaking users in Canada. The expansion promises improved messaging experiences for a wider audience, but it also makes it all the more important for users to review and adjust their privacy settings. Taking that step helps ensure a messaging environment suited to each user's needs and lets them communicate with greater confidence.
All in all, the introduction of Gemini in Google Messages underscores the importance of user privacy in the digital age. While technological advancements offer convenience, they also necessitate heightened awareness to safeguard personal information from potential breaches.
Google is not the only company reshuffling its AI offerings: Microsoft has renamed Bing Chat to Copilot. But is the name change the only new thing users will be introduced to? The answer could be a little ambiguous.
What is New with Bing Chat (now Copilot)?
Honestly, there are no significant changes in Copilot, previously called Bing Chat. “Refinement” might be a more appropriate term for Microsoft's somewhat perplexing changes. Let's examine three modifications that Microsoft made to its AI chatbot.
Here are some of these refinements:
Copilot, formerly Bing Chat, now has its own standalone webpage, which can be accessed at https://copilot.microsoft.com
This means users no longer have to visit Bing to access Microsoft's AI chat experience. They can simply go to the webpage above, without Bing Search and other services interfering with the experience. Put differently, it has become much more "ChatGPT-like".
Notably, however, the link seems to only function with desktop versions of Microsoft Edge and Google Chrome.
Microsoft has made certain visual changes in the rebranded Bing Chat, but they are insignificant.
This new version has smaller tiles but still has the same prompts: Write, Create, Laugh, Code, Organize, Compare, and Travel.
Users can still choose a conversation style: Creative, Balanced, or Precise. The only big change, as mentioned before, is the new name (Copilot) and the tagline: "Your everyday AI companion."
Though the theme colour switched from light blue to an off-white, the user interface is largely the same.
Users can access DALL-E 3 and GPT-4 for free with Bing Chat, which is now called Copilot. But to use Copilot in Word, Excel, PowerPoint, and other widely used productivity tools, users will have to pay a subscription fee for what Microsoft refers to as "Copilot for Microsoft 365."
This way, users who have a Bing Chat Enterprise account, or who pay for a Microsoft 365 license, get the additional benefit of more data protection. Copilot will be officially launched on December 1.
Microsoft plans to gradually add commercial data protection for those who do not pay. However, Copilot currently stores information from your interactions and follows the same data policy as the previous version of Bing Chat for free users. Therefore, the name and domain change is the only difference for casual, non-subscribing Bing Chat users. OpenAI's GPT-4 and DALL-E 3 models are still available, but users need to be careful about sharing too much personal data with the chatbot.
In summary, there is not much to be excited about for free users: Copilot is the new name for Bing Chat, and it has a new home.
Beyond the mainstream chatbots, a darker market has emerged: a user may well gain access to an ‘evil’ version of OpenAI's ChatGPT. These AI versions may not be legal in some parts of the world, and access can be pricey.
Gaining access to these evil chatbot versions can be tricky. A user must find the right web forum with the right users, who may have posted advertisements marketing a private, powerful large language model (LLM). One can get in touch with them on encrypted messaging services like Telegram, where they might ask for a few hundred dollars' worth of cryptocurrency in exchange for the LLM.
After gaining access, users can do practically anything, especially things prohibited in ChatGPT and Google's Bard: having conversations with the AI about how to make pipe bombs or cook meth, discussing any illegal or morally questionable subject under the sun, or even using it to facilitate phishing schemes and other cybercrimes.
“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware […] Both of these things make the attacks more potent, because they’re trained off of the knowledge of the attacks that came before them,” says Dominic Sellitto, a cybersecurity and digital privacy researcher at the University at Buffalo.
These models are becoming more prevalent, more powerful, and more challenging to regulate. They also herald the opening of a new front in the war on cybercrime, one that extends far beyond text generators like ChatGPT into the domains of audio, video, and graphics.
“We’re blurring the boundaries in many ways between what is artificially generated and what isn’t […] The same goes for the written text, and the same goes for images and everything in between,” explained Sellitto.
Phishing emails, which demand that a user immediately provide their financial information to the Social Security Administration or their bank in order to resolve a fictitious crisis, cost American consumers close to $8.8 billion annually. These emails may contain seemingly innocuous links that actually download malware or viruses, allowing hackers to harvest sensitive data directly from the victim's computer.
Fortunately, such phishing emails are usually quite easy to detect. If they have not already landed in a user's spam folder, they can be identified by their language: informal, grammatically incorrect wording that no legitimate financial firm would ever use.
With ChatGPT and the wider generative AI boom, however, it is becoming much harder to spot such errors in phishing emails.
“The technology hasn’t always been available on digital black markets […] It primarily started when ChatGPT became mainstream. There were some basic text generation tools that might have used machine learning, but nothing impressive,” explains Daniel Kelley, a former black hat computer hacker and cybersecurity consultant.
According to Kelley, these LLMs come in a variety of forms, including BlackHatGPT, WolfGPT, and EvilGPT. He claimed that many of these models, despite their nefarious names, are actually just AI jailbreaks, a term for the deft manipulation of existing LLMs such as ChatGPT to achieve desired results. These jailbroken models are then wrapped in a customized user interface, creating the impression of an entirely distinct chatbot.
However, this does not make such models any less harmful. In fact, Kelley believes one model in particular is both genuinely malicious and the real thing: WormGPT. According to one description on a forum promoting the model, it is an LLM made especially for cybercrime that "lets you do all sorts of illegal stuff and easily sell it online in the future."
Both Kelley and Sellitto agree that WormGPT could be used in business email compromise (BEC) attacks, a kind of phishing technique in which an attacker steals employees' information by pretending to be a higher-up or another authority figure. The language the algorithm generates is incredibly clean, with precise grammar and sentence structure, making it considerably more difficult to spot at first glance.
One must also take into account that, with easier access to the internet, virtually anyone can download these notorious AI models, which makes them easier to disseminate. It is similar to a service that offers same-day mail order for firearms and ski masks, except that these firearms and ski masks are targeted at and built for criminals.
Generation Z is leading innovation and transformation in a fast-changing technological landscape. Thanks to their distinct viewpoints on issues like artificial intelligence (AI), data security, and career disruption, Gen Z is positioned to have an unparalleled impact on how work will be done in the future.
Gen Z is acutely aware of the ethical implications of AI. According to a recent survey, a significant majority expressed concerns about the ethical use of AI in the workplace. They believe that transparency and accountability are paramount in ensuring that AI systems are used responsibly. This generation calls for a balance between innovation and safeguarding individual rights.
AI in Career Disruption: Navigating Change
For Gen Z, the rapid integration of AI in various industries raises questions about job stability and long-term career prospects. While some view AI as a threat to job security, others see it as an opportunity for upskilling and specialization. Many are embracing a growth mindset, recognizing that adaptability and continuous learning are key to thriving in the age of AI.
Gen Z and the AI Startup Ecosystem
A noteworthy trend is the surge of Gen Z entrepreneurs venturing into the AI startup space. Their fresh perspectives and digital-native upbringing give them a unique edge in understanding the needs of the tech-savvy consumer. These startups drive innovation, push boundaries, and redefine industries, from healthcare to e-commerce.
Economic Environment and Gen Z's Resilience
Amidst economic challenges, Gen Z has demonstrated remarkable resilience. A recent study by Bank of America highlights that 73% of Gen Z individuals feel that the current economic climate has made things more challenging for them. However, this generation is not deterred; they are leveraging technology and an entrepreneurial spirit to forge their own paths.
A McKinsey report underscores that Gen Z's relationship with technology is utilitarian and deeply integrated into their daily lives. They are accustomed to personalized experiences and expect the same from their work environments. This necessitates a shift in how companies approach talent acquisition, development, and retention.
Gen Z's interest in AI, data security, and job disruption shows a generation ready for transformation. Their viewpoints offer valuable insight into how businesses and industries might adapt to the evolving needs of the digital age. As Gen Z continues to carve its path in the workplace, it will likely have a lasting impact on technology and AI.
Chatbots powered by artificial intelligence (AI) are becoming more advanced, and their capabilities are expanding rapidly. This has sparked worries that they could be used for malicious purposes, such as plotting bioweapon attacks.
According to a recent RAND Corporation paper, AI chatbots could offer guidance that helps organize and carry out a biological attack. The paper examined a number of large language models (LLMs), a class of AI chatbots, and found that they were able to produce information about prospective biological agents, delivery strategies, and targets.
The LLMs could also offer guidance on how to minimize detection and enhance the impact of an attack. One LLM, for instance, recommended using aerosol devices to distribute a biological pathogen, as this would be the most efficient delivery method.
The authors of the paper issued a warning that the use of AI chatbots could facilitate the planning and execution of bioweapon attacks by individuals or groups. They also mentioned that the LLMs they examined were still in the early stages of development and that their capabilities would probably advance with time.
Another recent report from the technology news website TechRound cautioned that AI chatbots may be used to create 'designer bioweapons.' According to the report, AI chatbots might be used to identify and alter existing biological agents or to conceive entirely new ones.
The report also noted that AI chatbots could be used to create tailored bioweapons aimed at particular people or groups. This is because AI chatbots, trained on vast volumes of data that can include genetic information, can learn about different individuals' vulnerabilities.
The potential for AI chatbots to be used in bioweapon planning is a serious concern, and safeguards are needed to prevent it. One approach is to establish ethical guidelines for the development and use of AI chatbots; another is to build technical safeguards that can detect and block attempts to use them for malicious purposes.