Artificial intelligence is advancing rapidly, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.
Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.
The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.
Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals. These capabilities allow them to break complex goals into smaller steps, retain context across tasks, and adapt their decisions as circumstances change.
By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.
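To make the plan-act-remember idea concrete, here is a minimal sketch of such a loop in Python. The names used (Agent, Memory, plan, act) are illustrative assumptions standing in for the planning module, memory retention, and decision-making framework described above; they are not drawn from any particular product or framework.

```python
# Illustrative sketch of an agentic loop: plan, act, remember, repeat.
# All names here are hypothetical; a real system would wrap a language
# model and external tools rather than the toy methods below.

from dataclasses import dataclass, field


@dataclass
class Memory:
    """Append-only log so the agent can retain context across steps."""
    events: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)

    def recall(self) -> str:
        # Return the most recent events as working context for the next decision.
        return "\n".join(self.events[-10:])


@dataclass
class Agent:
    goal: str
    memory: Memory = field(default_factory=Memory)

    def plan(self) -> list:
        """Break the goal into ordered steps (a planner or LLM call in practice)."""
        return [f"research: {self.goal}", f"draft: {self.goal}", f"review: {self.goal}"]

    def act(self, step: str, context: str) -> str:
        """Execute one step; a real agent would call tools or APIs here."""
        return f"completed '{step}' using {len(context)} chars of prior context"

    def run(self) -> None:
        # The loop that distinguishes an agent from a one-shot generative model:
        # it plans, acts, and feeds results back into memory without new prompts.
        for step in self.plan():
            result = self.act(step, self.memory.recall())
            self.memory.remember(result)
        print("\n".join(self.memory.events))


if __name__ == "__main__":
    Agent(goal="summarize quarterly sales data").run()
```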
The potential impact of agentic AI spans multiple industries and applications.
Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.
Despite its transformative potential, agentic AI raises several challenges that must be addressed.
Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.
The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.
Between new data centres that consume enormous quantities of electricity and water and the power-hungry GPUs used to train models, AI is posing a growing environmental risk.
For instance, reports show that Amazon's data centre empire in Northern Virginia has consumed more electricity than Seattle, the company's home city. In 2022, Google's data centres used 5.2 billion gallons of water, a 20% increase from the previous year. Meta's Llama 2 model was also notably water-intensive to train.
Some tech giants have taken steps to reduce this added environmental strain. Microsoft, for example, has committed to having its Arizona data centers consume no water for more than half the year, while Google, which has announced a partnership with AI-chip leader Nvidia, has set a 2030 goal of replenishing 120% of the freshwater used by its offices and data centres.
However, these efforts amount to carefully crafted marketing, according to Adrienne Russell, co-director of the Center for Journalism, Media, and Democracy at the University of Washington.
"There has been this long and concerted effort by the tech industry to make digital innovation seem compatible with sustainability and it's just not," she said.
To illustrate her point, she described the shift to cloud computing and noted how Apple's products are marketed to evoke counterculture, independence, digital innovation, and sustainability, a strategy many organizations have adopted.
That same marketing playbook is now being used to present AI as environmentally friendly.
Jensen Huang, the CEO of Nvidia, has touted the AI-driven "accelerated computing" his company sells as more affordable and energy-efficient than "general purpose computing," which he claimed is costlier and worse for the environment.
The latest Cowen research report claims that AI data centres require more than five times the power of a conventional facility. Each GPU supplied by Nvidia draws around 400 watts, so a server fitted with several of them consumes at least 2 kilowatts, whereas a regular cloud server uses around 300-500 watts.
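As a rough sanity check of those figures, the short sketch below works through the arithmetic. The four-GPU server configuration and the non-GPU overhead are assumptions made for illustration, not numbers taken from the Cowen report.

```python
# Back-of-the-envelope check of the wattage figures quoted above.
# GPU count per server and non-GPU overhead are illustrative assumptions.

GPU_WATTS = 400              # approximate draw of one Nvidia GPU (per the article)
GPUS_PER_AI_SERVER = 4       # assumed number of GPUs in one AI server
OVERHEAD_WATTS = 400         # assumed CPUs, memory, fans, networking

ai_server_watts = GPU_WATTS * GPUS_PER_AI_SERVER + OVERHEAD_WATTS  # ~2,000 W
cloud_server_watts = 400     # midpoint of the 300-500 W range for a regular cloud server

print(f"AI server:    ~{ai_server_watts} W")
print(f"Cloud server: ~{cloud_server_watts} W")
print(f"Ratio:        ~{ai_server_watts / cloud_server_watts:.0f}x")  # roughly 5x
```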
Russell added, "There are things that come carted along with this, not true information that sustainability and digital innovation go hand-in-hand, like 'you can keep growing' and 'everything can be scaled massively, and it's still fine' and that one type of technology fits everyone."
As businesses attempt to integrate large language models into more of their operations, both the momentum behind AI and its environmental impact are set to rise.
Russell also recommended that companies emphasize other sustainable innovations, such as mesh networks and Indigenous data privacy initiatives.
"If you can pinpoint the examples, however small, of where people are actually designing technology that's sustainable then we can start to imagine and critique these huge technologies that aren't sustainable both environmentally and socially," she said.