Artificial intelligence has left its mark on practically every area of technology. You would struggle to identify a tech-related field where it hasn't had some sort of impact, from data analysis to art programmes. AI hasn't advanced as quickly in video games as it has elsewhere, but even here there are some fascinating developments with the potential to transform the gaming experience.
Of course, developers are already utilising generic AI tools to help them create content for their games, such as generating art, writing scripts, and brainstorming ideas for what to do next. But in certain instances, artificial intelligence (AI) has transformed gaming and accomplished tasks that would be extremely laborious or outright impossible for a human to complete.
AI can design NPCs that respond to your words
Making a game in which the player can say exactly what they want to say is quite difficult. When writing branching dialogue, a developer can only give the player a limited number of options, and even then, some gamers will want to steer the conversation elsewhere or ask a question the creator never considered. And because everything is strictly scripted, the player has little freedom to interact with non-player characters (NPCs) as they see fit.
However, a large language model (LLM) can help with this. A developer can connect an NPC to an LLM and have it manage the character's responses, much as a chatbot like ChatGPT manages a conversation. That way, you can ask the character whatever you want, and the AI will consider the persona it has been assigned to roleplay and reply appropriately. Best of all, once AI PCs take off, you won't need an internet connection to reach an external AI model; everything can be handled on your own hardware.
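To make the idea concrete, here is a minimal sketch of how a developer might wire an NPC up to a locally hosted language model. It assumes a local server that exposes an OpenAI-compatible chat endpoint (for example, one run with llama.cpp or Ollama); the URL, model name, and character description are placeholders for illustration, not any particular game's implementation.

```python
# Minimal sketch: an NPC whose replies come from a locally hosted LLM.
# Assumes a local server exposing an OpenAI-compatible chat API
# (e.g. llama.cpp's server or Ollama); URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

# The "character sheet" the model is asked to roleplay.
npc_persona = (
    "You are Mara, a gruff blacksmith in the town of Eldhaven. "
    "You know about local rumours and weapon prices, and you stay in character."
)

history = [{"role": "system", "content": npc_persona}]

def npc_reply(player_line: str) -> str:
    """Send the player's line to the model and return the NPC's answer."""
    history.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(
        model="local-model",          # placeholder model name
        messages=history,
        max_tokens=150,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(npc_reply("Heard any rumours about the old mine?"))
```

Because the conversation history is sent with every call, the NPC "remembers" what the player has already said within the session, which is exactly the kind of free-form interaction scripted dialogue trees can't offer.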
AI can help lip-sync characters' lines
While AI-powered games are already on the market, other technologies are still being developed. One of these is Audio2Face, which Nvidia introduced as part of its efforts to integrate AI into game creation. Audio2Face uses artificial intelligence to automatically match a character's mouth movements to their dialogue, eliminating the need for an animator to do the lip-syncing by hand. Nvidia notes in its blog post that this technique will make localization much easier, because developers will not have to adjust the lip sync for each language; instead, they can have Audio2Face process the animation for them.
While Nvidia did not state it directly in its post, Audio2Face is likely to be used in conjunction with AI-generated chat. After all, if NPCs are generating dialogue in real time, they'll require lip-syncing technology that can animate the mouth accurately on the fly.
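Audio2Face itself is a neural model, and Nvidia hasn't spelled out its internals here, but the underlying problem it solves, turning timed speech into mouth shapes, can be illustrated with a much simpler viseme-mapping sketch. The phoneme timings and blendshape names below are hypothetical and are not Nvidia's approach.

```python
# Illustrative sketch only: maps timed phonemes to mouth-shape ("viseme")
# keyframes. This is NOT how Audio2Face works internally (that is a neural
# model); it just shows the kind of output a lip-sync system must produce.
# Phoneme labels, timings, and blendshape names are hypothetical.

PHONEME_TO_VISEME = {
    "AA": "jaw_open",      # as in "father"
    "IY": "mouth_smile",   # as in "see"
    "UW": "lips_pucker",   # as in "too"
    "M":  "lips_closed",   # as in "map"
    "F":  "lip_bite",      # as in "fun"
}

def phonemes_to_keyframes(timed_phonemes):
    """Convert (phoneme, start_sec, end_sec) tuples into viseme keyframes."""
    keyframes = []
    for phoneme, start, end in timed_phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "mouth_neutral")
        keyframes.append({"time": start, "shape": viseme, "weight": 1.0})
        keyframes.append({"time": end, "shape": viseme, "weight": 0.0})
    return keyframes

# Example: the word "me" spoken between 0.00 and 0.30 seconds.
print(phonemes_to_keyframes([("M", 0.00, 0.10), ("IY", 0.10, 0.30)]))
```

The appeal of an AI-driven system is that it skips this manual mapping entirely and produces the facial animation directly from the audio, for any language, and fast enough to keep up with dialogue generated on the fly.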
Turn 2D images into 3D objects
Another recently introduced tool is Stability AI's 2D-to-3D converter. The premise behind it is that you can submit a 2D photo of an object, and it will do its best to create a 3D model of it. Most of the magic comes from the AI guessing what's on the other side of the object, which it does surprisingly well.
Of course, this has the potential to let developers add 3D models to their games quickly: simply take a photo of the object they want to import and drop it in. There is also the possibility of creating a game in which players can upload photographs of things around their house, which are then incorporated into the game, as the sketch below suggests.
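In practice, a pipeline like that would probably sit behind a web API: the game client uploads a photo and gets a 3D asset back. The sketch below shows roughly what such a call might look like; the endpoint URL, parameters, and output format are assumptions for illustration, not Stability AI's documented interface.

```python
# Hedged sketch: uploading a 2D photo to an image-to-3D web API and saving
# the returned 3D model. The endpoint URL and response format are assumptions
# for illustration, not Stability AI's documented interface.
import requests

API_URL = "https://api.example.com/v1/image-to-3d"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def photo_to_model(photo_path: str, output_path: str) -> None:
    """Send a photo and write the returned 3D model (e.g. a .glb file)."""
    with open(photo_path, "rb") as photo:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": photo},
        )
    response.raise_for_status()
    with open(output_path, "wb") as out_file:
        out_file.write(response.content)

photo_to_model("chair.jpg", "chair.glb")
```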