Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label OpenAI. Show all posts

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is yielding diminishing results while managing the demands of recent investments. 

The Information claims that OpenAI's upcoming AI model, codenamed Orion, is outperforming its predecessors in terms of performance gains. In staff testing, Orion reportedly achieved the GPT-4 performance level after only 20% of its training. 

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” noted employees in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.

With this financial backing, investors' expectations rise, as do technical hurdles that confound typical AI scaling approaches. If these early versions do not live up to expectations, OpenAI's future fundraising chances may not be as attractive. The report's limitations underscore a major difficulty for the entire AI industry: the decreasing availability of high-quality training data and the need to remain relevant in an increasingly competitive environment.

A June research (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. Developers have "largely squeezed as much out of" the data that has been utilised to enable the tremendous gains in AI that we have witnessed in recent years, according to The Information. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges. 

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

How OpenAI’s New AI Agents Are Shaping the Future of Coding

 


OpenAI is taking the challenge of bringing into existence the very first powerful AI agents designed specifically to revolutionise the future of software development. It became so advanced that it could interpret in plain language instructions and generate complex code, hoping to make it achievable to complete tasks that would take hours in only minutes. This is the biggest leap forward AI has had up to date, promising a future in which developers can have a more creative and less repetitive target while coding.

Transforming Software Development

These AI agents represent a major change in the type of programming that's created and implemented. Beyond typical coding assistants, which may use suggestions to complete lines, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. It is theoretically possible that developers could do their work more efficiently, automating repetitive coding and focusing more on innovation and problem solving on more complicated issues. The agents are, in effect, advanced assistants capable of doing more helpful things than the typical human assistant with anything from far more complex programming requirements.


Competition from OpenAI with Anthropic

As OpenAI makes its moves, it faces stiff competition from Anthropic-an AI company whose growth rate is rapidly taking over. Having developed the first released AI models focused on advancing coding, Anthropic continues to push OpenAI to even further refinement in their agents. This rivalry is more than a race between firms; it is infusing quick growth that works for the whole industry because both companies are setting new standards by working on AI-powered coding tools. As both compete, developers and users alike stand to benefit from the high-quality, innovative tools that will be implied from the given race.


Privacy and Security Issues

The AI agents also raise privacy issues. Concerns over the issue of data privacy and personal privacy arise if these agents can gain access to user devices. Secure integration of the agents will require utmost care because developers rely on the unassailability of their systems. Balancing AI's powerful benefits with needed security measures will be a key determinant of their success in adoption. Also, planning will be required for the integration of these agents into the current workflows without causing undue disruptions to the established standards and best practices in security coding.


Changing Market and Skills Environment

OpenAI and Anthropic are among the leaders in many of the changes that will remake both markets and skills in software engineering. As AI becomes more central to coding, this will change the industry and create new sorts of jobs as it requires the developer to adapt toward new tools and technologies. The extensive reliance on AI in code creation would also invite fresh investments in the tech sector and accelerate broadening the AI market.


The Future of AI in Coding

Rapidly evolving AI agents by OpenAI mark the opening of a new chapter for the intersection of AI and software development, promising to accelerate coding, making it faster, more efficient, and accessible to a wider audience of developers who will enjoy assisted coding towards self-writing complex instructions. The further development by OpenAI will most definitely continue to shape the future of this field, representing exciting opportunities and serious challenges capable of changing the face of software engineering in the foreseeable future.




UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

Microsoft Introduces AI Solution for Erasing Ex from Memories

 


It reveals the story of a woman who is emotionally disturbed and seeks the help of artificial intelligence as she tries to erase her past in director Vikramaditya Motwane's new Hindi film, CTRL. There is no doubt that the movie focuses on data and privacy, but humans are social animals and they need someone to listen to them, guide them, or be there as they go through life.  The CEO of Microsoft AI, Mustafa Suleyman, spoke about this recently in a CNBC interview. 

During an interview with CNN, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing." This will create a close relationship between AI and humans. As a result of the announcement of AI assistants for the workplace, many companies like Microsoft, OpenAI, and Google have come up with such solutions.  

It has been announced by Microsoft CEO Satya Nadella that Windows will be launching a new feature called Recall. A semantic search is more than just a keyword search; it digs deep into users' digital history to recreate moments from the past, tracking them back to the time they happened. It was announced today by Microsoft's AI CEO, Mustafa Suleyman, that Copilot, the company's artificial intelligence assistant, has been redesigned. 

Copilot, a newly revamped version of Microsoft's most popular AI companion, shares the same vision of a companion for AI that will revolutionize the way users interact with technology daily in their day-to-day lives with the AI head. After joining Microsoft earlier this year, after the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift." 

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time, similar to the Pi product. The announcement of AI assistants for the workplace has been made by a number of companies, including Microsoft, OpenAI, and Google.  The Wall Street Journal reported that Microsoft CEO Satya Nadella explained that "Recall is not just about documents." in an interview. 

A sophisticated AI model embedded directly inside the device begins to take screenshots of users' activity and then feeds the data collected into an on-board database that analyzes these activities. By using neural processing technology, all images and interactions can be made searchable, even going as far as searching images by themselves. There are some concerns regarding the events, with Elon Musk warning in a characteristic post that this is akin to an episode of Black Mirror. Going to turn this 'feature' off in the future." 

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 


Recent reports published by Deloitte and Deloitte Consulting, highlighting the significance of data privacy as it pertains to Generative Artificial Intelligence (GenAI), have been widely cited. As the survey found, there has been a significant increase in professionals' concerns about data privacy; only 22% ranked it as their top concern at the beginning of 2023, and the number will rise to 72% by the end of 2024. 

Technology is advancing at an exponential rate, and as a result, there is a growing awareness of its potential risks. There has been a surge in concerns over data privacy caused by generative AI across several industries, according to a new report by Deloitte. Only 22% of professionals ranked it as among their top three concerns last year, these numbers have risen to 72% this year, according to a recent study. 

There was also strong concern regarding data provenance and transparency among professionals, with 47% and 40% informing us that they considered them to be among their top three ethical GenAI concerns for this year, respectively. The proportion of respondents concerned about job displacement, however, was only 16%. It is becoming increasingly common for staff to be curious about how AI technology operates, especially when it comes to sensitive data. 

Almost half of security professionals surveyed by HackerOne in September believe AI is risky, with many of them believing leaks of training data threaten their networks' security. It is noteworthy that 78% of business leaders ranked "safe and secure" as one of their top three ethical technology principles. This represents a 37% increase from the year 2023, which shows the importance of security to businesses today.

As a result of Deloitte's 2024 "State of Ethics and Trust in Technology " report, the results of the survey were reported in a report which surveyed over 1,800 business and technical professionals worldwide, asking them to rate the ethical principles that they apply to technological processes and, specifically, to their use of GenAI. It is becoming increasingly important for technological leaders to carefully examine the talent needs of their organizations, as they assist in guiding the adoption of generative AI. There are also ethical considerations that should be included on this checklist as well. 

A Deloitte report highlights the effectiveness of GenAI in eliminating the "expertise barrier": more people will be able to make more use of their data more happily and cost-effectively, according to Sachin Kulkarni, managing director, of risk and brand protection, at Deloitte. There may be a benefit to this, though as a result there may be an increased risk of data leaks as a result of this action." 

Furthermore, there has been concern expressed about the effects of generative AI on transparency, data provenance, intellectual property ownership, and hallucinations among professionals. Even though job displacement is often listed as a top concern by respondents, only 16% of those asked are reporting job displacement to be true. As a result of the assessment of emerging technology categories, business and IT professionals have concluded that cognitive technologies, which include large language models, artificial intelligence, neural networks, and generative AI, among others, pose the greatest ethical challenges.  

This category had a significant achievement over other technology verticals, including virtual reality, autonomous vehicles, and robotics. However, respondents stated that they considered cognitive technologies to be the most likely to bring about social good in the future. Flexential's survey published earlier this month found that several executives, in light of the huge reliance on data, are concerned about how generative AI tools can increase cybersecurity risks by extending their attack surface as a result, according to the report. 

In Deloitte's annual report, however, the percentage of professionals reporting that they use GenAI internally grew by 20% between last year and this year, reflecting an increase in the use of GenAI by their employees over the previous year. 94% of the respondents said they had incorporated it into their organization's processes in some way or another. Nevertheless, most respondents indicated that these technologies are either still in the pilot phase or are limited in their usage, with only 12% saying that they are used extensively. 

Gartner research published last year also found that about 80% of GenAI projects fail to make it past proof-of-concept as a result of a lack of resources. Europe has been impacted by the recent EU Artificial Intelligence Act and 34% of European respondents have reported that their organizations have taken action over the past year to change their use of AI to adapt to the Act's requirements. 

According to the survey results, however, the impact of the Act is more widespread, with 26% of respondents from the South Asian region changing their lifestyles because of it, and 16% of those from the North and South American regions did the same. The survey also revealed that 20 per cent of respondents based in the U.S. had altered the way their organization was operating as a result of the executive order. According to the survey, 25% of South Asians, 21% of South Americans, and 12% of Europeans surveyed had the same perspective. 

The report explains that "Cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse," according to its authors. The accelerated adoption of GenAI technology is overtaking the capacity of organizations to effectively govern it at a rapid pace. GenAI tools can provide a great deal of help to businesses in a range of areas, from choosing which use cases to apply them to quality assurance, to implementing ethical standards. 

Companies should prioritize both of these areas." Despite artificial intelligence being widely used, policymakers want to make sure that they won't get themselves into trouble with its use, especially when it comes to legislation because any use of it can lead to a lot of problems. 34% of respondents reported that regulatory compliance was their most important reason for implementing ethics policies and guidelines to comply with regulations, while regulatory penalties topped the list of concerns about not complying with such policies and guidelines. 

A new piece of legislation in the EU, known as the Artificial Intelligence Act, entered into force on August 1. The Act, which takes effect today, is intended to ensure that artificial intelligence systems that are used in high-risk environments are safe, transparent, and ethical. If a company does not comply with the regulations, it could face financial penalties ranging from €35 million ($38 million), which is equivalent to 7% of global turnover, to €7.5 million ($8.1 million), which is equivalent to 1.5% of global turnover. 

Over a hundred companies have already signed the EU Artificial Intelligence Pact, with Amazon, Google, Microsoft, and OpenAI among them; they have also volunteered to begin implementing the requirements of the bill before any deadlines established by law. Both of these actions demonstrate that they are committed to the responsible implementation of artificial intelligence in society, and also help them to avoid future legal challenges in the future. 

The United States released a similar executive order in October 2023 with broad guidelines regarding the protection and enhancement of military, civil, and personal privacy as well as protecting the security of government agencies while fostering AI innovation and competition across the entire country. Even though this is not a law, many companies operating in the U.S. have made policy changes to ensure compliance with regulatory changes and comply with public expectations regarding the privacy and security of AI.

Project Strawberry: Advancing AI with Q-learning, A* Algorithms, and Dual-Process Theory

Project Strawberry, initially known as Q*, has quickly become a focal point of excitement and discussion within the AI community. The project aims to revolutionize artificial intelligence by enhancing its self-learning and reasoning capabilities, crucial steps toward achieving Artificial General Intelligence (AGI). By incorporating advanced algorithms and theories, Project Strawberry pushes the boundaries of what AI can accomplish, making it a topic of intense interest and speculation. 

At the core of Project Strawberry are several foundational algorithms that enable AI systems to learn and make decisions more effectively. The project utilizes Q-learning, a reinforcement learning technique that allows AI to determine optimal actions through trial and error, helping it navigate complex environments. Alongside this, the A* search algorithm provides efficient pathfinding capabilities, ensuring AI can find the best solutions to problems quickly and accurately. 

Additionally, the dual-process theory, inspired by human cognitive processes, is used to balance quick, intuitive judgments with thorough, deliberate analysis, enhancing decision-making abilities. Despite the project’s promising advancements, it also raises several concerns. One of the most significant risks involves encryption cracking, where advanced AI could potentially break encryption codes, posing a severe security threat. 

Furthermore, the issue of “AI hallucinations”—errors in AI outputs—remains a critical challenge that needs to be addressed to ensure accurate and trustworthy AI responses. Another concern is the high computational demands of Project Strawberry, which may lead to increased costs and energy consumption. Efficient resource management and optimization will be crucial to maintaining the project’s scalability and sustainability. The ultimate goal of Project Strawberry is to pave the way for AGI, where AI systems can perform any intellectual task a human can. 

Achieving AGI would revolutionize problem-solving across various fields, enabling AI to tackle long-term and complex challenges with advanced reasoning capabilities. OpenAI envisions developing “reasoners” that exhibit human-like intelligence, pushing the frontiers of AI research even further. While Project Strawberry represents a significant step forward in AI development, it also presents complex challenges that must be carefully navigated. 

The project’s potential has fueled widespread excitement and anticipation within the AI community, with many eagerly awaiting further updates and breakthroughs. As OpenAI continues to refine and develop Project Strawberry, it could set the stage for a new era in AI, bringing both remarkable possibilities and significant responsibilities.

OpenAI's Tool Can Detect Chat-GPT Written Texts

OpenAI's Tool Can Detect Chat-GPT Written Texts

OpenAI to Release AI Detection Tool

OpenAI has been at the forefront of the evolving AI landscape, doing wonders with its machine learning and natural language processing capabilities. One of its best creations, ChatGPT, is known for creating human-like text. But as they say, with great power comes great responsibility. OpenAI is aware of the potential misuse and has built a tool that can catch students who use ChatGPT to cheat on their assignments. According to experts, however, there has yet to be a final release date.

As per OpenAI’s spokesperson, the company is in the research phase of the watermarking method explained in the Journal’s story, stressing that it doesn’t want to take any chances. 

Is there a need for AI detection tools?

The abundance of AI-generated information has raised questions about originality and authority. AI chatbots like ChatGPT have become advanced. Today, it is a challenge to differentiate between human and AI-generated texts, such as the resemblance. This can impact various sectors like education, cybersecurity, and journalism. It can be helpful in instances where we can detect AI-generated texts and check academic honesty, address misinformation, and improve digital communications security.

About the Tech

OpenAI uses a watermarking technique to detect AI-generated texts, by altering the way ChatGPT uses words, attaching an invisible watermark with the AI-generated information. The watermark can be detected later, letting users know if the texts are Chat-GPT written. The tech is said to be advanced, making it difficult for the cheaters to escape the detection process.

Ethical Concerns and Downsides

OpenAI proceeds with caution, even with the potential benefits. The main reason is potential misuse if the tool gets into the wrong hands, bad actors can use it to target users based on the content they write. Another problem is the tool’s ability to be effective for different dialects and languages. OpenAI has accepted that non-English speakers can be impacted differently, because the watermarking nuances may not be seamlessly translated across languages.

Another problem can be the chances of false positives. If the AI detection tool mistakes in detecting human-written text as AI-generated, it can cause unwanted consequences for the involved users.

OpenAI Hack Exposes Hidden Risks in AI's Data Goldmine


A recent security incident at OpenAI serves as a reminder that AI companies have become prime targets for hackers. Although the breach, which came to light following comments by former OpenAI employee Leopold Aschenbrenner, appears to have been limited to an employee discussion forum, it underlines the steep value of data these companies hold and the growing threats they face.

The New York Times detailed the hack after Aschenbrenner labelled it a “major security incident” on a podcast. However, anonymous sources within OpenAI clarified that the breach did not extend beyond an employee forum. While this might seem minor compared to a full-scale data leak, even superficial breaches should not be dismissed lightly. Unverified access to internal discussions can provide valuable insights and potentially lead to more severe vulnerabilities being exploited.

AI companies like OpenAI are custodians of incredibly valuable data. This includes high-quality training data, bulk user interactions, and customer-specific information. These datasets are crucial for developing advanced models and maintaining competitive edges in the AI ecosystem.

Training data is the cornerstone of AI model development. Companies like OpenAI invest vast amounts of resources to curate and refine these datasets. Contrary to the belief that these are just massive collections of web-scraped data, significant human effort is involved in making this data suitable for training advanced models. The quality of these datasets can impact the performance of AI models, making them highly coveted by competitors and adversaries.

OpenAI has amassed billions of user interactions through its ChatGPT platform. This data provides deep insights into user behaviour and preferences, much more detailed than traditional search engine data. For instance, a conversation about purchasing an air conditioner can reveal preferences, budget considerations, and brand biases, offering invaluable information to marketers and analysts. This treasure trove of data highlights the potential for AI companies to become targets for those seeking to exploit this information for commercial or malicious purposes.

Many organisations use AI tools for various applications, often integrating them with their internal databases. This can range from simple tasks like searching old budget sheets to more sensitive applications involving proprietary software code. The AI providers thus have access to critical business information, making them attractive targets for cyberattacks. Ensuring the security of this data is paramount, but the evolving nature of AI technology means that standard practices are still being established and refined.

AI companies, like other SaaS providers, are capable of implementing robust security measures to protect their data. However, the inherent value of the data they hold means they are under constant threat from hackers. The recent breach at OpenAI, despite being limited, should serve as a warning to all businesses interacting with AI firms. Security in the AI industry is a continuous, evolving challenge, compounded by the very AI technologies these companies develop, which can be used both for defence and attack.

The OpenAI breach, although seemingly minor, highlights the critical need for heightened security in the AI industry. As AI companies continue to amass and utilise vast amounts of valuable data, they will inevitably become more attractive targets for cyberattacks. Businesses must remain vigilant and ensure robust security practices when dealing with AI providers, recognising the gravity of the risks and responsibilities involved.


Hacker Breaches OpenAI, Steals Sensitive AI Tech Details


 

Earlier this year, a hacker successfully breached OpenAI's internal messaging systems, obtaining sensitive details about the company's AI technologies. The incident, initially kept under wraps by OpenAI, was not reported to authorities as it was not considered a threat to national security. The breach was revealed through sources cited by The New York Times, which highlighted that the hacker accessed discussions in an online forum used by OpenAI employees to discuss their latest technologies.

The breach was disclosed to OpenAI employees during an April 2023 meeting at their San Francisco office, and the board of directors was also informed. According to sources, the hacker did not penetrate the systems where OpenAI develops and stores its artificial intelligence. Consequently, OpenAI executives decided against making the breach public, as no customer or partner information was compromised.

Despite the decision to withhold the information from the public and authorities, the breach sparked concerns among some employees about the potential risks posed by foreign adversaries, particularly China, gaining access to AI technology that could threaten U.S. national security. The incident also brought to light internal disagreements over OpenAI's security measures and the broader implications of their AI technology.

In the aftermath of the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the company's board of directors. In his memo, Aschenbrenner criticised OpenAI's security measures, arguing that the company was not doing enough to protect its secrets from foreign adversaries. He emphasised the need for stronger security to prevent the theft of crucial AI technologies.

Aschenbrenner later claimed that he was dismissed from OpenAI in the spring for leaking information outside the company, which he argued was a politically motivated decision. He hinted at the breach during a recent podcast, but the specific details had not been previously reported.

In response to Aschenbrenner's allegations, OpenAI spokeswoman Liz Bourgeois acknowledged his contributions and concerns but refuted his claims regarding the company's security practices. Bourgeois stated that OpenAI addressed the incident and shared the details with the board before Aschenbrenner joined the company. She emphasised that Aschenbrenner's separation from the company was unrelated to the concerns he raised about security.

While the company deemed the incident not to be a national security threat, the internal debate it sparked highlights the ongoing challenges in safeguarding advanced technological developments from potential threats.


Breaking the Silence: The OpenAI Security Breach Unveiled

Breaking the Silence: The OpenAI Security Breach Unveiled

In April 2023, OpenAI, a leading artificial intelligence research organization, faced a significant security breach. A hacker gained unauthorized access to the company’s internal messaging system, raising concerns about data security, transparency, and the protection of intellectual property. 

In this blog, we delve into the incident, its implications, and the steps taken by OpenAI to prevent such breaches in the future.

The OpenAI Breach

The breach targeted an online forum where OpenAI employees discussed upcoming technologies, including features for the popular chatbot. While the actual GPT code and user data remained secure, the hacker obtained sensitive information related to AI designs and research. 

While Open AI shared the information with its staff and board members last year, it did not tell the public or the FBI about the breach, stating that doing so was unnecessary because no user data was stolen. 

OpenAI does not regard the attack as a national security issue and believes the attacker was a single individual with no links to foreign powers. OpenAI’s decision not to disclose the breach publicly sparked debate within the tech community.

Breach Impact

Leopold Aschenbrenner, a former OpenAI employee, had expressed worries about the company's security infrastructure and warned that its systems could be accessible to hostile intelligence services such as China. The company abruptly fired Aschenbrenner, although OpenAI spokesperson Liz Bourgeois told the New York Times that his dismissal had nothing to do with the document.

Similar Attacks and Open AI’s Response

This is not the first time OpenAI has had a security lapse. Since its launch in November 2022, ChatGPT has been continuously attacked by malicious actors, frequently resulting in data leaks. A separate attack exposed user names and passwords in February of this year. 

In March of last year, OpenAI had to take ChatGPT completely down to fix a fault that exposed customers' payment information to other active users, including their first and last names, email IDs, payment addresses, credit card info, and the last four digits of their card number. 

Last December, security experts found that they could convince ChatGPT to release pieces of its training data by prompting the system to endlessly repeat the word "poem."

OpenAI has taken steps to enhance security since then, including additional safety measures and a Safety and Security Committee.

Researchers Find ChatGPT’s Latest Bot Behaves Like Humans

 

A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT's popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

This study found that the most recent version of the chatbot, version 4, was indistinguishable from its human counterparts. When the bot picked less common human behaviours, it behaved more cooperatively and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” stated Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research. 

In the study, the research team presented a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviours. The games included pre-determined exercises in which players had to select whether to inform on a partner in crime or how to share money with changing incentives. The bots' responses were compared to those of over 100,000 people from 50 nations. 

The study is one of the first in which an artificial intelligence source has passed a rigorous Turing test. A Turing test, named after British computing pioneer Alan Turing, can consist of any job assigned to a machine to determine whether it performs like a person. If the machine seems to be human, it passes the test. 

Chatbot personality quirks

The researchers assessed the bots' personality qualities using the OCEAN Big-5, a popular personality exam that evaluates respondents on five fundamental characteristics that influence behaviour. In the study, ChatGPT's version 4 performed within normal ranges for the five qualities but was only as agreeable as the lowest third of human respondents. The bot passed the Turing test, but it wouldn't have made many friends. 

Version 4 outperformed version 3 in terms of chip and motherboard performance. The previous version, with which many internet users may have interacted for free, was only as appealing to the bottom fifth of human responders. Version 3 was likewise less open to new ideas and experiences than all but a handful of the most stubborn people. 

Human-AI interactions 

Much of the public's concern about AI stems from their failure to understand how bots make decisions. It can be difficult to trust a bot's advice if you don't know what it's designed to accomplish. Jackson's research shows that even when researchers cannot scrutinise AI's inputs and algorithms, they can discover potential biases by meticulously examining outcomes. 

As a behavioural economist who has made significant contributions to our knowledge of how human social structures and interactions influence economic decision-making, Jackson is concerned about how human behaviour may evolve in response to AI.

“It’s important for us to understand how interactions with AI are going to change our behaviors and how that will change our welfare and our society,” Jackson concluded. “The more we understand early on—the more we can understand where to expect great things from AI and where to expect bad things—the better we can do to steer things in a better direction.”

OpenAI and Stack Overflow Partnership: A Controversial Collaboration

OpenAI and Stack Overflow Partnership: A Controversial Collaboration

The Partnership Details

OpenAI and Stack Overflow are collaborating through OverflowAPI access to provide OpenAI users and customers with the correct and validated data foundation that AI technologies require to swiftly solve an issue, allowing engineers to focus on critical tasks. 

OpenAI will additionally share validated technical knowledge from Stack Overflow directly in ChatGPT, allowing users to quickly access trustworthy, credited, correct, and highly technical expertise and code backed by millions of developers who have contributed to the Stack Overflow platform over the last 15 years.

User Protests and Concerns

However, several Stack Overflow users were concerned about this partnership since they felt it was unethical for OpenAI to profit from their content without authorization.

Following the news, some users wished to erase their responses, including those with the most votes. However, StackCommerce does not often enable the deletion of posts if the question has any answers.

Ben, Epic Games' user interface designer, stated that he attempted to change his highest-rated responses and replace them with a message criticizing the relationship with OpenAI.

Stack Overflow won't let you erase questions with acceptable answers and high upvotes because this would remove knowledge from the community. Ben posted on Mastodon.

Instead, he changed his top-rated responses to a protest message. Within an hour, the moderators had changed the questions and banned Ben's account for seven days.

Ben, however, uploaded a screenshot showing Stack Overflow suspending his account after rolling back the modified messages to the original response. 

Stack Overflow’s Stance

Moderators on Stack Overflow clarified in an email that Ben shared that users are not able to remove posts because they negatively impact the community as a whole.

It is not appropriate to remove posts that could be helpful to others unless there are particular circumstances. The basic principle of Stack Exchange is that knowledge is helpful to others who might encounter similar issues in the future, even if the post's original author can no longer use it, replied Stack Exchange moderators to users on mail.

GDPR Considerations:

Article 17 of the GDPR rules grants users in the EU the “right to be forgotten,” allowing them to request the removal of personal data.

However, Article 17(3) states that websites have the right not to delete data necessary for “exercising the right of freedom of expression and information.”

Stack Overflow cited this provision when explaining why they do not allow users to remove posts

The partnership between OpenAI and Stack Overflow has sparked controversy, with users expressing concerns about data usage and freedom of expression. Stack Overflow’s decision to suspend users who altered their answers in protest highlights the challenges of balancing privacy rights and community knowledge

OpenAI Bolsters Data Security with Multi-Factor Authentication for ChatGPT

 

OpenAI has recently rolled out a new security feature aimed at addressing one of the primary concerns surrounding the use of generative AI models such as ChatGPT: data security. In light of the growing importance of safeguarding sensitive information, OpenAI's latest update introduces an additional layer of protection for ChatGPT and API accounts.

The announcement, made through an official post by OpenAI, introduces users to the option of enabling multi-factor authentication (MFA), commonly referred to as 2FA. This feature is designed to fortify security measures and thwart unauthorized access attempts.

For those unfamiliar with multi-factor authentication, it's essentially a security protocol that requires users to provide two or more forms of verification before gaining access to their accounts. By incorporating this additional step into the authentication process, OpenAI aims to bolster the security posture of its platforms. Users are guided through the process via a user-friendly video tutorial, which demonstrates the steps in a clear and concise manner.

To initiate the setup process, users simply need to navigate to their profile settings by clicking on their name, typically located in the bottom left-hand corner of the screen. From there, it's just a matter of selecting the "Settings" option and toggling on the "Multi-factor authentication" feature.

Upon activation, users may be prompted to re-authenticate their account to confirm the changes or redirected to a dedicated page titled "Secure your Account." Here, they'll find step-by-step instructions on how to proceed with setting up multi-factor authentication.

The next step involves utilizing a smartphone to scan a QR code using a preferred authenticator app, such as Google Authenticator or Microsoft Authenticator. Once the QR code is scanned, users will receive a one-time code that they'll need to input into the designated text box to complete the setup process.

It's worth noting that multi-factor authentication adds an extra layer of security without introducing unnecessary complexity. In fact, many experts argue that it's a highly effective deterrent against unauthorized access attempts. As ZDNet's Ed Bott aptly puts it, "Two-factor authentication will stop most casual attacks dead in their tracks."

Given the simplicity and effectiveness of multi-factor authentication, there's little reason to hesitate in enabling this feature. Moreover, when it comes to safeguarding sensitive data, a proactive approach is always preferable. 

Generative AI Worms: Threat of the Future?

Generative AI worms

The generative AI systems of the present are becoming more advanced due to the rise in their use, such as Google's Gemini and OpenAI's ChatGPT. Tech firms and startups are making AI bits and ecosystems that can do mundane tasks on your behalf, think about blocking a calendar or product shopping. But giving more freedom to these things tools comes at the cost of risking security. 

Generative AI worms: Threat in the future

In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.  

Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.

Generative AI worms in the lab

The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.

Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system. 

Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.

The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response. 

The email system to spread worms

The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.

A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."

Generative AI worms to be a major threat soon

Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."


Microsoft and OpenAI Reveal Hackers Weaponizing ChatGPT

 

In a digital landscape fraught with evolving threats, the marriage of artificial intelligence (AI) and cybercrime has become a potent concern. Recent revelations from Microsoft and OpenAI underscore the alarming trend of malicious actors harnessing advanced language models (LLMs) to bolster their cyber operations. 

The collaboration between these tech giants has shed light on the exploitation of AI tools by state-sponsored hacking groups from Russia, North Korea, Iran, and China, signalling a new frontier in cyber warfare. According to Microsoft's latest research, groups like Strontium, also known as APT28 or Fancy Bear, notorious for their role in high-profile breaches including the hacking of Hillary Clinton’s 2016 presidential campaign, have turned to LLMs to gain insights into sensitive technologies. 

Their utilization spans from deciphering satellite communication protocols to automating technical operations through scripting tasks like file manipulation and data selection. This sophisticated application of AI underscores the adaptability and ingenuity of cybercriminals in leveraging emerging technologies to further their malicious agendas. The Thallium group from North Korea and Iranian hackers of the Curium group have followed suit, utilizing LLMs to bolster their capabilities in researching vulnerabilities, crafting phishing campaigns, and evading detection mechanisms. 

Similarly, Chinese state-affiliated threat actors have integrated LLMs into their arsenal for research, scripting, and refining existing hacking tools, posing a multifaceted challenge to cybersecurity efforts globally. While Microsoft and OpenAI have yet to detect significant attacks leveraging LLMs, the proactive measures undertaken by these companies to disrupt the operations of such hacking groups underscore the urgency of addressing this evolving threat landscape. Swift action to shut down associated accounts and assets coupled with collaborative efforts to share intelligence with the defender community are crucial steps in mitigating the risks posed by AI-enabled cyberattacks. 

The implications of AI in cybercrime extend beyond the current landscape, prompting concerns about future use cases such as voice impersonation for fraudulent activities. Microsoft highlights the potential for AI-powered fraud, citing voice synthesis as an example where even short voice samples can be utilized to create convincing impersonations. This underscores the need for preemptive measures to anticipate and counteract emerging threats before they escalate into widespread vulnerabilities. 

In response to the escalating threat posed by AI-enabled cyberattacks, Microsoft spearheads efforts to harness AI for defensive purposes. The development of a Security Copilot, an AI assistant tailored for cybersecurity professionals, aims to empower defenders in identifying breaches and navigating the complexities of cybersecurity data. Additionally, Microsoft's commitment to overhauling software security underscores a proactive approach to fortifying defences in the face of evolving threats. 

The battle against AI-powered cyberattacks remains an ongoing challenge as the digital landscape continues to evolve. The collaborative efforts between industry leaders, innovative approaches to AI-driven defence mechanisms, and a commitment to information sharing are pivotal in safeguarding digital infrastructure against emerging threats. By leveraging AI as both a weapon and a shield in the cybersecurity arsenal, organizations can effectively adapt to the dynamic nature of cyber warfare and ensure the resilience of their digital ecosystems.

ChatGPT Evolved with Digital Memory: Enhancing Conversational Recall

 


The ChatGPT software is getting a major upgrade – users will be able to get more customized and helpful replies to their previous conversations by storing them in memory. The memory feature is being tested with a small number of free as well as premium users at the moment. Added memory to the ChatGPT software is an important step in reducing the amount of repetition in conversations. 

It is not uncommon for users to have to explain their preferences regarding things like email formatting regularly whenever they request the service of ChatGPT. It is however possible for the bot to remember past choices and make those choices again when it has memory enabled. 

The artificial intelligence company OpenAI, behind ChatGPT, is currently testing a version of ChatGPT that can remember previous interactions users had with the chatbot. According to the company's website, that information can now be used by the bot in future conversations.  

Despite the fact that AI bots are very good at assisting with a variety of questions, one of their biggest drawbacks is that they do not remember who the users are or what they asked previously, and as a result of this design, it does not remember this. This is by design, for privacy reasons, but it hinders the technology from actually becoming a true digital assistant that can help users. 

Currently, OpenAI is working hard to fix this problem, and it is finally adding a memory feature to ChatGPT as part of its effort to solve this problem. With this feature, the bot will be able to retain important personal details from previous conversations and apply them to the current conversation in context. 

In addition to GPT bots, the new memory feature will also be available to builders, who will be able to enable it or leave it disabled. To interact with a memory-enabled GPT, users need to have Memory enabled, but users' memories are not shared with builders when they interact with a memory-enabled GPT. There will be no sharing of memories with individual bots or vice versa since each GPT will have its memory. 

Additionally, ChatGPT has introduced a new feature called Temporary Chat, which allows users to chat without using Memory, which means that they will not appear in their chat history, will not be used to train OpenAI's LLMs, and will not be used for training.

To get rid of those fungal cream ads users are getting on YouTube after they have opened an incognito tab to search for the weird symptoms users are experiencing. They can use it as an alternative to the normal tab. Despite all of the benefits on offer, there are also quite a few issues that must be addressed to make the process safe and effective. 

As part of the upgrade, the company also stated that users will be able to control what information will be retained and what information can be fed back to the system so that it can be better trained by the user. OpenAI states that users will be able to control how the system will remember certain sensitive topics, as well as that the system has been trained not to automatically remember certain sensitive topics, such as health data, so users can manage their use of the software. 

As per the company, users can simply tell the bot that they don't want it to remember something and it will do so. A Manage Memory tab can also be found in the settings, where more detailed adjustments can be made to memory management. Users can choose to turn off the feature completely if they find the whole concept unappealing. 

A beta version of this service has been rolled out this week to a "small number" of ChatGPT free users to test it. An upcoming broader release of the software is planned by the company, and plans will be shared shortly. This is a beta service, for now, and is rolling out to a “small number” of ChatGPT free users this week. The company will share plans for a broader release in the future.

Persistent Data Retention: Google and Gemini Concerns

 


While it competes with Microsoft for subscriptions, Google has renamed its Bard chatbot Gemini after the new artificial intelligence that powers it, called Gemini, and said consumers can pay to upgrade its reasoning capabilities to gain subscribers. Gemini Advanced offers a more powerful Ultra 1.0 AI model that customers can subscribe to for US$19.99 ($30.81) a month, according to Alphabet, which said it is offering Gemini Advanced for US$19.99 ($30.81) a month. 

The subscription fee for Gemini storage is $9.90 ($15.40) a month, but users will receive two terabytes of cloud storage by signing up today. They will also have access to Gemini through Gmail and the Google productivity suite shortly. 

It is believed that Google One AI Premium, as well as its partner OpenAI, are the biggest competitors yet for the company. It also shows that consumers are becoming increasingly competitive as they now have several paid AI subscriptions to choose from. 

In the past year, OpenAI's ChatGPT Plus subscription launched an early access program that allowed users to purchase early access to AI models and other features, while Microsoft recently launched a competing subscription for artificial intelligence in Word and Excel applications. The subscription for both services costs US$20 a month in the United States.

According to Google, human annotators are routinely monitoring and modifying conversations that are read, tagged, and processed by Gemini - even though these conversations are not owned by Google Accounts. As far as data security is concerned, Google has not stated whether these annotators are in-house or outsourced. (Google does not specify whether they are in-house or outsourced.)

These conversations will be kept for as long as three years, along with "related data" such as the languages and devices used by the user and their location, etc. It is now possible for users to control how they wish to retain the Gemini-relevant data they use. 

Using the Gemini Apps Activity dashboard in Google’s My Activity dashboard (which is enabled by default), users can prevent future conversations with Gemini from being saved to a Google Account for review, meaning they will no longer be able to use the three-year window for future discussions with Gemini). 

The Gemini Apps Activity screen lets users delete individual prompts and conversations with Gemini, however. However, Google says that even when Gemini Apps Activity is turned off, Gemini conversations will be kept on the user's Google Account for up to 72 hours to maintain the safety and security of Gemini apps and to help improve Gemini apps. 

In user conversations, Google encourages users not to enter confidential or sensitive information which they might not wish to be viewed by reviewers or used by Google to improve their products, services, and machine learning technologies. At the beginning of Thursday, Krawczyk said that Gemini Advanced was available in English in 150 countries worldwide. 

Next week, Gemini will begin launching smartphones in Asia-Pacific, Latin America and other regions around the world, including Japanese and Korean, as well as additional language support for the product. This will follow the company's smartphone rollout in the US.

The free trial subscription period lasts for two months and it is available to all users. Upon hearing this announcement, Krawczyk said the Google artificial intelligence approach had matured, bringing "the artist formerly known as Bard" into the "Gemini era." As GenAI tools proliferate, organizations are becoming increasingly wary of privacy risks associated with such tools. 

As a result of a Cisco survey conducted last year, 63% of companies have created restrictions on what kinds of data might be submitted to GenAI tools, while 27% have prohibited GenAI tools from being used at all. A recent survey conducted by GenAI revealed that 45% of employees submitted "problematic" data into the tool, including personal information and non-public files about their employers, in an attempt to assist. 

Several companies, such as OpenAI, Microsoft, Amazon, Google and others, are offering GenAI solutions that are intended for enterprises that no longer retain data for any primary purpose, whether for training models or any other purpose at all. There is no doubt that consumers are going to get shorted - as is usually the case - when it comes to corporate greed.

AI Takes Center Stage: Microsoft's Bold Move to Unparalleled Scalability

 


In the world of artificial intelligence, Microsoft is currently making some serious waves with its recent success in deploying the technology at scale, making it one of the leading players. With a market value that has been estimated to be around $3tn, every one of Microsoft's AI capabilities is becoming the envy of the world. 

AI holds enormous potential for transformation and Microsoft is leading the way in harnessing the power of AI for a more efficient and effective life. It is not only Microsoft's impressive growth that demonstrates the company's potential, but it also emphasizes how artificial intelligence plays such a significant role in our digital environment. 

There is no doubt that artificial intelligence has revolutionized the world of business, transforming everything from healthcare to finance, and beyond. It is Microsoft's commitment to transforming the way we live and work that makes its commitment to deploying AI solutions at scale all the more evident. 

OpenAI, the manufacturer of the ChatGPT bot which was released in 2022, has a large stake in the tech giant, which led to a wave of optimism about the possibilities that could be accessed by technology. Despite this, OpenAI has not been without controversy. 

The New York Times, an American newspaper, is suing OpenAI for alleged copyright violations in training the system. Microsoft is also named as a defendant in the lawsuit, which states that the firms should be liable for damages worth "billions of dollars" in damages to the plaintiff. 

To "learn" by analysing massive amounts of data sourced from the internet, ChatGPT and other large language models (LLMs) analyze a vast amount of data. It is also important for Alphabet to keep an eye on artificial intelligence, as it updated investors on Tuesday as well. 

In the September-December quarter, Alphabet reported revenues and profits based on a 13 per cent increase year-over-year, which were nearly $20.7bn. It has also been said that AI investments are also helping to improve Google's search, cloud computing, and YouTube divisions, according to Sundar Pichai, the company's CEO. 

Although both companies have enjoyed gains this year, their workforces have continued to slim down. Google's headcount has been down almost 5% since last year, and it has announced another round of cuts earlier in the month. 

In the same vein, Microsoft announced plans to eliminate 1,900 jobs in its gaming division, reducing 9% of its staff. It became obvious that this move would be made following their acquisition of Activision Blizzard, the company that makes the games World of Warcraft and Call of Duty.