Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.
According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:
However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:
This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).
One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.
Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:
To strengthen cyber defenses, experts emphasize the importance of foundational practices:
Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.
On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, saw users encountering a “bad gateway error” message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.
OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.
As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.
ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.
The outage comes at a time of rapid advancements in AI technology. OpenAI and other leading tech firms have pledged significant investments into AI infrastructure, with a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks associated with their widespread adoption.
The disruption highlights the importance of robust technical systems in ensuring uninterrupted service, particularly for users who rely heavily on AI in their daily work. Although OpenAI restored service relatively quickly, maintaining user trust and satisfaction may hinge on its efforts to improve its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.
As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.
Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.
Artificial intelligence is rapidly advancing beyond its current capabilities, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as Agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.
Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.
The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.
Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals. These capabilities allow them to:
By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.
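The loop below is a deliberately tiny Python sketch of the plan-act-remember cycle described above; the planner, memory, and stubbed "act" step are hypothetical illustrations, not any vendor's agent framework.

```python
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Long-term record of what the agent has already done."""
    events: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.events.append(event)


def plan(goal: str, memory: Memory) -> list[str]:
    """Break a goal into steps, skipping anything already completed (stubbed)."""
    done = set(memory.events)
    steps = [f"research {goal}", f"draft {goal}", f"review {goal}"]
    return [s for s in steps if s not in done]


def act(step: str) -> str:
    """Execute one step, e.g. by calling a tool or external API (stubbed)."""
    return f"completed: {step}"


def run_agent(goal: str, max_iterations: int = 10) -> None:
    memory = Memory()
    for _ in range(max_iterations):
        remaining = plan(goal, memory)
        if not remaining:          # nothing left to do: goal reached
            break
        result = act(remaining[0])  # take the next action autonomously
        memory.remember(remaining[0])  # retain progress across iterations
        print(result)


run_agent("quarterly market analysis")
```

The point of the sketch is the structure, not the stubs: the system re-plans on every iteration using what it remembers, which is what lets an agentic system pursue a long-term objective without continuous human prompting.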
The potential impact of agentic AI spans multiple industries and applications. For example:
Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.
Despite its transformative potential, agentic AI raises several challenges that must be addressed:
Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.
The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.
San Francisco-based data analytics leader Databricks has achieved a record-breaking milestone, raising $10 billion in its latest funding round. This has elevated the company's valuation to an impressive $62 billion, paving the way for a potential initial public offering (IPO).
Databricks has long been recognized for providing enterprises with a secure platform for hosting and analyzing their data. In 2023, the company further bolstered its offerings by acquiring MosaicML, a generative AI startup. This acquisition allows Databricks to enable its clients to build tailored AI models within a secure cloud environment.
In March, Databricks unveiled DBRX, an advanced large language model (LLM) developed through the MosaicML acquisition. DBRX gives Databricks' 12,000 clients a secure AI option, minimizing the risks associated with exposing proprietary data to external AI models.
Unlike massive models such as Google’s Gemini or OpenAI’s GPT-4, DBRX prioritizes efficiency and practicality. It addresses specific enterprise needs, such as:
DBRX employs a unique “mixture-of-experts” design, dividing its functionality into 16 specialized areas. A built-in "router" directs tasks to the appropriate expert, reducing computational demands. Although the full model has 132 billion parameters, only 36 billion are used at any given time, making it energy-efficient and cost-effective.
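As a rough illustration of the mixture-of-experts idea (a sketch only, with hypothetical sizes and names, not DBRX's actual implementation), a router scores the experts for each token and only the top few run, so far fewer parameters are active per token than the model contains in total:

```python
import torch
import torch.nn as nn


class TinyMoELayer(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model)
        scores = self.router(x)                              # (tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)    # pick top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for i in range(x.shape[0]):                          # only the chosen experts run
            for slot in range(self.top_k):
                expert = self.experts[int(chosen[i, slot])]
                out[i] += weights[i, slot] * expert(x[i])
        return out


layer = TinyMoELayer()
tokens = torch.randn(3, 64)        # three token embeddings
print(layer(tokens).shape)         # torch.Size([3, 64])
```

In this toy layer, 4 of 16 experts are active for any given token, which is the same principle that lets DBRX keep only 36 of its 132 billion parameters active at a time.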
This efficiency lowers barriers for businesses aiming to integrate AI into daily operations, improving the economics of AI deployment.
Databricks CEO Ali Ghodsi highlighted the company's vision during a press event in March: “These are still the early days of AI. We are positioning the Databricks Data Intelligence Platform to deliver long-term value . . . and our team is committed to helping companies across every industry build data intelligence.”
With this landmark funding round, Databricks continues to solidify its role as a trailblazer in data analytics and enterprise AI. By focusing on secure, efficient, and accessible AI solutions, the company is poised to shape the future of technology across industries.
This staggering figure highlights the rapid evolution of the transnational organized crime threat landscape in the region, which has become a hotbed for illegal cyber activities. The UNODC report points out that countries like Myanmar, Cambodia, and Laos have become prime locations for these crime syndicates.
These groups are involved in a range of fraudulent activities, including romance-investment schemes, cryptocurrency scams, money laundering, and unauthorized gambling operations.
The report also notes that these syndicates are increasingly adopting new service-based business models and advanced technologies, such as malware, deepfakes, and generative AI, to carry out their operations. One of the most alarming aspects of this rise in cybercrime is the professionalization and innovation of these criminal groups.
The UNODC report highlights that these syndicates are not just using traditional methods of fraud but are also integrating cutting-edge technologies to create more sophisticated and harder-to-detect schemes. For example, generative AI is being used to create phishing messages in multiple languages, chatbots that manipulate victims, and fake documents to bypass know-your-customer (KYC) checks.
Deepfakes are also being used to create convincing fake videos and images to deceive victims. The report also sheds light on the role of messaging platforms like Telegram in facilitating these illegal activities.
Criminal syndicates are using Telegram to connect with each other, conduct business, and even run underground cryptocurrency exchanges and online gambling rings. This has led to the emergence of a "criminal service economy" in Southeast Asia, where organized crime groups are leveraging technological advances to expand their operations and diversify their activities.
The impact of this rise in cybercrime is not just financial. It also has significant social and political implications. The report notes that the sheer scale of proceeds from the illicit economy reflects the growing professionalization of these criminal groups, which has made Southeast Asia a testing ground for transnational networks eager to expand their reach.
This has put immense pressure on law enforcement agencies in the region, which are struggling to keep up with the rapidly evolving threat landscape.
In response to this growing threat, the UNODC has called for increased international cooperation and stronger law enforcement efforts to combat cybercrime in Southeast Asia. The report emphasizes the need for a coordinated approach to tackle these transnational criminal networks and disrupt their operations.
It also highlights the importance of raising public awareness about the risks of cybercrime and promoting cybersecurity measures to protect individuals and businesses from falling victim to these schemes.
Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes.
These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.
However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.
Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse.
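As a minimal illustration of one such step (the column names and stand-in salt below are hypothetical, and real GDPR/CCPA compliance involves far more than this), direct identifiers can be dropped and quasi-identifiers pseudonymized before records are used for training:

```python
import hashlib

import pandas as pd

# Hypothetical raw records containing personal information.
records = pd.DataFrame({
    "name": ["Alice Example", "Bob Example"],
    "email": ["alice@example.com", "bob@example.com"],
    "age": [34, 41],
    "feedback_text": ["Great service", "Slow response times"],
})

SALT = "replace-with-a-secret-salt"  # stand-in; keep real salts out of source code


def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, irreversible hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]


anonymized = records.drop(columns=["name"])                  # drop direct identifiers
anonymized["email"] = anonymized["email"].map(pseudonymize)  # pseudonymize quasi-identifiers
print(anonymized)
```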
Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.
Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.
Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.
80% of respondents are required to complete mandatory technology ethics training, marking a 7% increase since 2022. Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI:
The European Union's major AI law goes into effect on Thursday, bringing significant implications for American technology companies.
The AI Act is a piece of EU legislation that regulates AI. The law, first suggested by the European Commission in 2020, seeks to combat the harmful effects of artificial intelligence.
The legislation establishes a comprehensive and standardized regulatory framework for AI within the EU.
It will largely affect large U.S. tech companies, which are currently the main architects and developers of the most advanced AI systems.
However, the laws will apply to a wide range of enterprises, including non-technology firms.
Tanguy Van Overstraeten, head of law firm Linklaters' technology, media, and telecommunications practice in Brussels, described the EU AI Act as "the first of its kind in the world." It is expected to affect many enterprises, particularly those building AI systems, as well as those deploying or simply using them in certain scenarios, he said.
High-risk AI systems include self-driving cars, medical devices, loan decisioning systems, educational scoring tools, and remote biometric identification systems.
The regulation also prohibits AI applications deemed to pose an "unacceptable" level of risk. These include "social scoring" systems that evaluate citizens based on data aggregation and analysis, predictive policing, and the use of emotion recognition technology in the workplace or schools.
Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.
Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.
In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.
The EU AI Act defines generative AI as "general-purpose" artificial intelligence. The label refers to tools designed to handle a wide range of tasks at a level comparable to, if not better than, a human.
General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.
The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.
The cybersecurity arena is developing at a breakneck pace, creating a significant talent shortage across the industry. This challenge was highlighted by Saugat Sindhu, Senior Partner and Global Head of Advisory Services at Wipro Ltd. He emphasised the pressing need for skilled cybersecurity professionals, noting that the rapid advancements in technology make it difficult for the industry to keep up.
Cybersecurity: A Business Enabler
Over the past decade, cybersecurity has transformed from a corporate function to a crucial business enabler. Sindhu pointed out that cybersecurity is now essential for all companies, not just as a compliance measure but as a strategic asset. Businesses, clients, and industries understand that neglecting cybersecurity can give competitors an advantage, making robust cybersecurity practices indispensable.
The role of the Chief Information Security Officer (CISO) has also evolved. Today, CISOs are responsible for ensuring that businesses have the necessary tools and technologies to grow securely. This includes minimising outages and reputational damage from cyber incidents. According to Sindhu, modern CISOs are more about enabling business operations rather than restricting them.
Generative AI is one of the latest disruptors in the cybersecurity field, much like the cloud was a decade ago. Sindhu explained that different sectors face varying levels of risk with AI adoption. For instance, healthcare, manufacturing, and financial services are particularly vulnerable to attacks like data poisoning, model inversions, and supply chain vulnerabilities. Ensuring the security of AI models is crucial, as vulnerabilities can lead to severe backdoor attacks.
At Wipro, cybersecurity is a top priority, involving multiple departments including the audit office, risk office, core security office, and IT office. Sindhu stated that cybersecurity considerations are now integrated into the onset of any technology transformation project, rather than being an afterthought. This proactive approach ensures that adequate controls are in place from the beginning.
Wipro is heavily investing in cybersecurity training for its employees and practitioners. The company collaborates with major universities in India to support training courses, making it easier to attract new talent. Sindhu emphasised the importance of continuous education and certification to keep up with the fast-paced changes in the field.
Wipro's commitment to cybersecurity is evident in its robust infrastructure. The company boasts over 9,000 cybersecurity specialists and operates 12 global cyber defence centres across more than 60 countries. This extensive network underscores Wipro's dedication to maintaining high security standards and addressing cyber risks proactively.
The rapid evolution of cybersecurity presents pivotal challenges, but also underscores the importance of viewing it as a business enabler. With the right training, proactive measures, and integrated approaches, companies like Wipro are striving to stay ahead of threats and ensure robust protection for their clients. As the demand for cybersecurity talent continues to grow, ongoing education and collaboration will be key to bridging the skills gap.
Cyberattacks are becoming alarmingly frequent, with a new attack occurring approximately every 39 seconds. These attacks, ranging from phishing schemes to ransomware, have devastating impacts on businesses worldwide. The cost of cybercrime is projected to hit $9.5 trillion in 2024, and with AI being leveraged by cybercriminals, this figure is likely to rise.
According to a recent RiverSafe report surveying Chief Information Security Officers (CISOs) in the UK, one in five CISOs identifies AI as the biggest cyber threat. The increasing availability and sophistication of AI tools are empowering cybercriminals to launch more complex and large-scale attacks. The National Cyber Security Centre (NCSC) warns that AI will significantly increase the volume and impact of cyberattacks, including ransomware, in the near future.
AI is enhancing traditional cyberattacks, making them more difficult to detect. For example, AI can modify malware to evade antivirus software. Once detected, AI can generate new variants of the malware, allowing it to persist undetected, steal data, and spread within networks. Additionally, AI can bypass firewalls by creating legitimate-looking traffic and generating convincing phishing emails and deepfakes to deceive victims into revealing sensitive information.
Policies to Mitigate AI Misuse
AI misuse is not only a threat from external cybercriminals but also from employees unknowingly putting company data at risk. One in five security leaders reported experiencing data breaches due to employees sharing company data with AI tools like ChatGPT. These tools are popular for their efficiency, but employees often do not consider the security risks when inputting sensitive information.
In 2023, ChatGPT experienced a significant data breach, highlighting the risks associated with generative AI tools. While some companies have banned the use of such tools, this is only a short-term solution. The long-term approach should focus on education and carefully managed policies that balance the benefits of AI against the security risks.
The Growing Threat of Insider Risks
Insider threats are a significant concern, with 75% of respondents believing they pose a greater risk than external threats. Human error, often due to ignorance or unintentional mistakes, is a leading cause of data breaches. These threats are challenging to defend against because they can originate from employees, contractors, third parties, and anyone with legitimate access to systems.
Despite the known risks, 64% of CISOs stated their organizations lack sufficient technology to protect against insider threats. The rise in digital transformation and cloud infrastructure has expanded the attack surface, making it difficult to maintain appropriate security measures. Additionally, the complexity of digital supply chains introduces new vulnerabilities, with trusted business partners responsible for up to 25% of insider threat incidents.
Preparing for AI-Driven Cyber Threats
The evolution of AI in cyber threats necessitates a revamp of cybersecurity strategies. Businesses must update their policies, best practices, and employee training to mitigate the potential damages of AI-powered attacks. With both internal and external threats on the rise, organisations need to adapt to the new age of cyber threats to protect their valuable digital assets effectively.
Despite all the talk of generative AI disrupting the world, the technology has failed to significantly transform white-collar jobs. Workers are experimenting with chatbots for activities like email drafting, and businesses are doing numerous experiments, but office work has yet to experience a big AI overhaul.
That could be because we haven't given chatbots like Google's Gemini and OpenAI's ChatGPT the proper capabilities yet; they're typically limited to taking in and spitting out text via a chat interface.
Things may become more interesting in commercial settings when AI companies begin to deploy so-called "AI agents," which can take actions by running other software on a computer or over the internet.
Anthropic, a rival of OpenAI, unveiled a major new product today that seeks to establish the idea that tool use is required for AI's next leap in usefulness. The company is allowing developers to instruct its chatbot Claude to use external services and software to complete more useful tasks.
Claude can, for example, use a calculator to solve math problems that vex large language models, be asked to query a database storing customer information, or be directed to use other programs on a user's computer when that would help.
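A minimal sketch of what this looks like with Anthropic's Python SDK is shown below; the calculator tool definition and the model name are illustrative assumptions, and a production integration would also execute the tool and return its result to Claude.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Describe a simple calculator tool the model may choose to call.
calculator_tool = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string", "description": "e.g. '1234 * 5678'"},
        },
        "required": ["expression"],
    },
}

response = client.messages.create(
    model="claude-3-opus-20240229",  # assumed model name; substitute a current one
    max_tokens=1024,
    tools=[calculator_tool],
    messages=[{"role": "user", "content": "What is 1234 * 5678?"}],
)

# If Claude decides to use the tool, the response contains a tool_use block with the
# arguments; the application runs the tool and sends the result back in a follow-up turn.
for block in response.content:
    if block.type == "tool_use":
        print("Claude requested tool:", block.name, "with input:", block.input)
```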
Anthropic has been helping several companies develop Claude-based assistants for their employees. For example, the online tutoring company Study Fetch has built a way for Claude to use various platform tools to customize the user interface and syllabus content shown to students.
Other businesses are also joining the AI Stone Age. At its I/O developer conference earlier this month, Google showed off a few prototype AI agents, among other new AI features. One of the agents was created to handle online shopping returns by searching for the receipt in the customer's Gmail account, completing the return form, and scheduling a package pickup.
The Stone Age of chatbots represents a significant leap forward. Here’s what we can expect: