OpenAI has had a very exciting week both in terms of its major success with the ChatGPT service as well as its artificial intelligence (AI) division. The CEO of OpenAI, Sam Altman, who is arguably one of the most significant figures in the race for artificial general intelligence (AGI), has been fired by the nonprofit board of the company. Although details are still sketchy, it appears that the board was concerned that Altman was not moving cautiously enough given the potential dangers that artificial intelligence might present to society, especially since it was being developed.
Nonetheless, the board has taken several actions that appear to have backfired badly on them. Microsoft announced shortly after Altman was fired that he would be heading an internal Microsoft AI research division within the company, a company that has a close partnership with OpenAI.
OpenAI's employees eventually revolted against the firing of Altman as CEO, and the board eventually decided to hire him back as the company's CEO, with several of the board members who had originally terminated Altman resigning due to public outcry.
Recently, OpenAI, which is one of the world leaders in the field of artificial intelligence (AI) and a major innovator in the field of ChatGPT, was thrown into turmoil when its chief executive and figurehead, Sam Altman, was fired from the company.
Approximately 730 OpenAI employees threatened to resign as a result of learning that he was leaving his company to join Microsoft's advanced AI research team. The company finally announced that many of the board members who terminated Altman's employment had been replaced and that Altman would probably be returning to the company soon.
A few reports have surfaced in the past few weeks that there have been vigorous discussions within OpenAI regarding AI safety and security. As a result, this serves as a microcosm of broader debates involving the regulation of artificial intelligence technologies, as well as what needs to be done to tackle the problems associated with their handling.
These discussions revolve around the concept of large language models (LLMs), which is at the core of artificial intelligence chatbots like ChatGPT.
For these machines to improve their capabilities, they are exposed to vast amounts of data in the form of training, which is a process known as learning. Nevertheless, this training process raises critical issues about fairness, privacy, and the possibility of the misuse of artificial intelligence due to its double-edged nature.
As a result of absorbing and reconstituting enormous amounts of information by LLMs, they also pose a serious privacy risk. LLMs can, for instance, remember private data or sensitive information in the training data they receive, and then make further inferences based on that information, possibly resulting in the leakage of trade secrets, the disclosure of health diagnoses, or even the leakage of other types of private information by those who receive the training data.
Even hackers or malicious software could be able to attack LLMs to gain access to them. Attacks known as prompt injections make AI systems do things that they weren't supposed to, potentially allowing them to gain unauthorised access or leak confidential information, resulting in unauthorised access to machines and private data leakage.
Users must analyze the training methods of these models, the inherent biases in the training data, and the societal factors that influence the data that is used to train these models to be able to understand these risks.
It is important to note that whatever approach is taken to regulatory matters, there are always challenges involved. It may be hard for third-party regulators to predict and mitigate risks effectively since there is a short transition time from research and development to deployment of an application for LLM research.
In addition, training and modifying models require technical skills, which can be costly, in addition to the difficulty of implementing them.
There may be a more effective way to address some risks if we could focus on early research and training within the LLM program. This would help address some of the harms that originate from the training data.
However, benchmarks must also be established for AI systems to ensure they remain safe.
A safety standard that is considered “safe enough” may differ depending on the area in which it is being implemented. For example, high-risk areas such as algorithms in the criminal justice system and recruiting, may have stricter requirements.
Since the beginning of time, artificial intelligence has slowly been advancing behind the scenes. It is responsible for the way Google autocompletes a search query or Amazon recommends books. With the release of ChatGPT-3 in November 2022, however, AI emerged from the shadows and became a tool people could use without having any technical expertise, from one that was designed for software engineers to one that was consumer-focused and that anyone could use.
In ChatGPT, users can communicate with an AI bot to ask it to help them design software so that they don't have to write code themselves. A few months later, OpenAI, the developers of ChatGPT, released GPT-4, the latest iteration of the large language model (LLM) behind ChatGPT, which OpenAI claimed to exhibit "human-level performance" in various tasks.
During the past two months, ChatGPT has grown so rapidly that it now has over 100 million visitors, making it the fastest-growing website in history.
As a result, Microsoft invested $13 billion in OpenAI, which then incorporated ChatGPT into its products, including a redesigned, AI-powered Bing and an AI race were on.
A few years ago, when Google released its DeepMind AI model in the Chinese game of Go and beat a human champion, the company immediately followed up with Bard, an artificial intelligence-driven chatbot in response.
During the announcement of Bing, Microsoft CEO Satya Nadella emphasized the importance of protecting public interest during a race in which the public will receive the best of the best.
In a race that promises to be the fastest ever run but is happening without a referee, how to protect the public interest becomes the challenge. Rules must be established and developed so that corporate AI racing does not become reckless and that the rules are enforced so that legal guardrails can be enforced to protect that race.
Although the federal government has been given the authority to deal with this rapidly changing landscape, it is unable to keep up with the pace and velocity of AI-driven change. Regulations that govern the way the government runs its activities are based on assumptions that were made in the industrial era and have already been outpaced by the advent of the digital platform era during the first decades of this century. The existing rules cannot respond to the velocity of advances in AI rapidly enough to combat the problem.