
Generative AI in Cybersecurity: A Double-Edged Sword

Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.

The Role of AI in Cybersecurity

According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:

  • Identify risks (39%)
  • Analyze threat intelligence (39%)
  • Detect and prioritize threats (35%)

However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:

  • 32% of attackers use AI to make attacks more effective.
  • 28% use AI to increase the volume of attacks.
  • 23% use AI to develop entirely new types of threats.

This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).

Challenges and Opportunities in Cybersecurity

One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.

Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:

  • 91% of organizations are increasing security training for legal and compliance staff.
  • 90% are enhancing training for security teams.

To strengthen cyber defenses, experts emphasize the importance of foundational practices:

  1. Strong Passwords and MFA: Poor password security is linked to 80% of data breaches. Companies are encouraged to use password managers and enforce robust password policies (see the sketch after this list).
  2. Regular Cybersecurity Training: Educating employees on risk management and security practices, such as using antivirus software and maintaining firewalls, can significantly reduce vulnerabilities.
  3. Third-Party Vendor Assessments: Organizations must evaluate third-party vendors for security risks, as breaches through these channels can expose even the most secure systems.
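
To make the first practice concrete, here is a minimal sketch of the kind of password-policy check an application could run before accepting a new credential. The length and character-class thresholds are assumptions for illustration, not an industry standard.

```python
import re

def meets_password_policy(password: str, min_length: int = 12) -> bool:
    """Illustrative policy: minimum length plus character-class checks (assumed thresholds)."""
    if len(password) < min_length:
        return False
    # Require at least one lowercase letter, uppercase letter, digit, and symbol.
    required_patterns = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in required_patterns)

print(meets_password_policy("correct-horse-battery-staple-42A"))  # True
print(meets_password_policy("password"))                          # False
```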

Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.

ChatGPT Outage in the UK: OpenAI Faces Reliability Concerns Amid Growing AI Dependence

 


ChatGPT Outage: OpenAI Faces Service Disruption in the UK

On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, saw users encountering a “bad gateway error” message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.

OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.

User Reactions: From Frustration to Humor

As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.

ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.

The outage comes at a time of rapid advancements in AI technology. OpenAI and other leading tech firms have pledged significant investments into AI infrastructure, with a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks associated with their widespread adoption.

The disruption highlights the importance of robust technical systems to ensure uninterrupted service, particularly for users who rely heavily on AI for their daily tasks. Despite restoring services relatively quickly, OpenAI’s ability to maintain user trust and satisfaction may hinge on its efforts to improve its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.

As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.

Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.
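
For teams that depend on a chatbot API for daily work, one simple client-side safeguard is to retry transient failures (such as the bad gateway errors seen during this outage) and fall back to an alternative only after retries are exhausted. The sketch below is a generic illustration using the requests library; the endpoint URL and payload are placeholders, not OpenAI's actual API.

```python
import time
import requests

def call_with_retry(url: str, payload: dict, retries: int = 3, backoff: float = 2.0):
    """Retry a POST request on transient server errors (e.g., 502 Bad Gateway)."""
    for attempt in range(retries):
        try:
            response = requests.post(url, json=payload, timeout=30)
            if response.status_code < 500:
                return response  # Success, or a client error that retrying will not fix.
        except requests.RequestException:
            pass  # Network-level failure; treat it like a transient error.
        time.sleep(backoff ** attempt)  # Exponential backoff between attempts.
    raise RuntimeError("Service unavailable after retries; fall back to an alternative provider.")

# Hypothetical usage: call_with_retry("https://api.example.com/chat", {"prompt": "Hello"})
```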

The Rise of Agentic AI: How Autonomous Intelligence Is Redefining the Future

 


The Evolution of AI: From Generative Models to Agentic Intelligence

Artificial intelligence is rapidly advancing beyond its current capabilities, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as Agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.

Generative AI vs. Agentic AI: A Fundamental Shift

Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.

The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.

Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals. These capabilities allow them to:

  • Break down complex objectives into manageable tasks
  • Monitor progress and maintain context over time
  • Adjust strategies dynamically based on changing circumstances

By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.
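
A minimal sketch of that plan-act-observe loop is shown below. The planning and execution functions are placeholders standing in for a real agent framework; all names are illustrative assumptions.

```python
def plan(objective: str) -> list[str]:
    # Placeholder: a real agent would use an LLM or planner to decompose the objective.
    return [f"step 1 of {objective}", f"step 2 of {objective}"]

def execute(task: str, memory: list[str]) -> str:
    # Placeholder: a real agent would call tools or external APIs here.
    return f"result of {task}"

def run_agent(objective: str) -> list[str]:
    """Break an objective into tasks, execute them, and keep context in memory."""
    memory: list[str] = []
    tasks = plan(objective)
    while tasks:
        task = tasks.pop(0)
        outcome = execute(task, memory)
        memory.append(outcome)  # Maintain context over time.
        # A real agent would inspect the outcome here and replan if circumstances change.
    return memory

print(run_agent("summarize quarterly sales data"))
```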

Applications of Agentic AI

The potential impact of agentic AI spans multiple industries and applications. For example:

  • Business: Automating routine tasks, identifying inefficiencies, and optimizing workflows without human intervention.
  • Manufacturing: Overseeing production processes, responding to disruptions, and optimizing resource allocation autonomously.
  • Healthcare: Managing patient care plans, identifying early warning signs, and recommending proactive interventions.

Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.

Challenges and Ethical Considerations

Despite its transformative potential, agentic AI raises several challenges that must be addressed:

  • Transparency: Ensuring users understand how decisions are made.
  • Ethical Boundaries: Defining the level of autonomy granted to these systems.
  • Alignment: Maintaining alignment with human values and objectives to foster trust and widespread adoption.

Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.

The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.

Common AI Prompt Mistakes and How to Avoid Them

 

If you are running a business in 2025, you're probably already using generative AI in some capacity. GenAI tools and chatbots, such as ChatGPT and Google Gemini, have become indispensable in a variety of cases, ranging from content production to business planning. 

It's no surprise that more than 60% of businesses rank GenAI among their top goals for the next two years. Furthermore, 87% of businesses are piloting or have already implemented generative AI tools in some way. 

But there is a catch. The quality of your inputs determines how well generative AI tools perform. Effective prompting can deliver you accurate AI outputs that meet your requirements, whereas ineffective prompting can take you down the wrong path. 

If you've been struggling to maximise the potential of AI technologies, it's time to rethink the prompts you're using. In this article, we'll look at the most common mistakes people make when prompting AI tools, as well as how to avoid them. 

What are AI prompts? 

Prompts are queries or commands you give to generative AI tools such as ChatGPT or Claude. They are the inputs you utilise to communicate with AI models and instruct them on what to do (or generate). AI models generate content based on the prompts you give them. 

The more contextual and specific the prompt, the more accurate the AI's response will be. For example, if you're looking for strategies to increase client loyalty, you could use the following generative AI prompt: "What are some cost-effective strategies to improve customer loyalty for a small business?"
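
As a hedged illustration, here is how such a prompt might be sent programmatically, assuming the OpenAI Python SDK; the model name is a placeholder and should be replaced with whichever model you actually use.

```python
from openai import OpenAI

client = OpenAI()  # Assumes an OPENAI_API_KEY environment variable is set.

response = client.chat.completions.create(
    model="gpt-4o",  # Model name is an assumption for the example.
    messages=[
        {"role": "user",
         "content": "What are some cost-effective strategies to improve "
                    "customer loyalty for a small business?"}
    ],
)
print(response.choices[0].message.content)
```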

Common AI prompt mistakes 

Being too vague: Neither artificial intelligence nor humans can read minds. You may have a clear picture of the problem you're attempting to solve, including constraints, things you've already explored or tried, and potential objections. But unless you ask a very specific question, neither your human friends nor your AI assistant will be able to pull that picture from your head. When asking for assistance, be specific and complete. 

Not being clear about the format: Would you prefer a list, a discussion, or a table? Do you want a comparison of factors or a detailed dive into the issues? The mistake happens when you ask a question but do not instruct the AI on how you want the response to be presented. This mistake isn't just about style and punctuation; it's about how the information is digested and improved for your final consumption. As with the first item on this list, be specific. Tell the AI what you're looking for and what you'll need to receive an answer. 

Not knowing when to take a step back: Sometimes AI cannot solve the problem or give the level of quality required. Fundamentally, an AI is a tool, and one tool cannot accomplish everything. Know when to hold 'em and when to fold them. Know when it's time to go back to a search engine, check forums, or create your own answers. There is a point of diminishing returns, and identifying it will save you time and frustration. 

How to write prompts successfully 

  • Use prompts that are specific, clear, and thorough (see the example after this list). 
  • Remember that the AI is simply a program, not a magical oracle. 
  • Iterate and refine your queries by asking increasingly better questions.
  • Keep the prompt on topic. Specify details that provide context for your enquiries.
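
To make these tips concrete, the short example below contrasts a vague prompt with a refined one that specifies the role, scope, and output format. The wording is purely illustrative.

```python
# Vague: leaves the model guessing about audience, scope, and format.
vague_prompt = "Tell me about customer loyalty."

# Specific: states the context, the constraints, and the desired format.
refined_prompt = (
    "Act as a marketing advisor for a small online bookshop. "
    "List five cost-effective strategies to improve customer loyalty, "
    "presented as a table with columns for strategy, estimated cost, and expected impact."
)
```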

Navigating 2025: Emerging Security Trends and AI Challenges for CISOs

 

Security teams have always needed to adapt to change, but 2025 is poised to bring unique challenges, driven by advancements in artificial intelligence (AI), sophisticated cyber threats, and evolving regulatory mandates. Chief Information Security Officers (CISOs) face a rapidly shifting landscape that requires innovative strategies to mitigate risks and ensure compliance.

The integration of AI-enabled features into products is accelerating, with large language models (LLMs) introducing new vulnerabilities that attackers may exploit. As vendors increasingly rely on these foundational models, CISOs must evaluate their organization’s exposure and implement measures to counter potential threats. 

"The dynamic landscape of cybersecurity regulations, particularly in regions like the European Union and California, demands enhanced collaboration between security and legal teams to ensure compliance and mitigate risks," experts note. Balancing these regulatory requirements with emerging security challenges will be crucial for protecting enterprises.

Generative AI (GenAI), while presenting security risks, also offers opportunities to strengthen software development processes. By automating vulnerability detection and bridging the gap between developers and security teams, AI can improve efficiency and bolster security frameworks.

Trends to Watch in 2025

1. Vulnerabilities in Proprietary LLMs Could Lead to Major Security Incidents

Software vendors are rapidly adopting AI-enabled features, often leveraging proprietary LLMs. However, these models introduce a new attack vector. Proprietary models reveal little about their internal guardrails or origins, making them challenging for security professionals to manage. Vulnerabilities in these models could have cascading effects, potentially disrupting the software ecosystem at scale.

2. Cloud-Native Workloads and AI Demand Adaptive Identity Management

The rise of cloud-native applications and AI-driven systems is reshaping identity management. Traditional, static access control systems must evolve to handle the surge in service-based identities. Adaptive frameworks are essential for ensuring secure and efficient access in dynamic digital environments.
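
One hedged sketch of what "adaptive" can mean in practice is issuing short-lived, narrowly scoped tokens to service identities instead of long-lived static credentials. The example below uses the PyJWT library; the claim names, scopes, and lifetime are assumptions for illustration.

```python
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-managed-secret"  # In practice, fetch this from a secrets manager.

def issue_service_token(service_name: str, scopes: list[str], ttl_minutes: int = 15) -> str:
    """Issue a short-lived, scoped token for a workload identity."""
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": service_name,
        "scope": " ".join(scopes),
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),  # Short lifetime forces frequent re-evaluation.
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

token = issue_service_token("payments-worker", ["queue:read", "db:write"])
print(jwt.decode(token, SIGNING_KEY, algorithms=["HS256"]))
```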

3. AI Enhances Security in DevOps

A growing number of developers—58% according to recent surveys—recognize their role in application security. However, the demand for skilled security professionals in DevOps remains unmet.

AI is bridging this gap by automating repetitive tasks, offering smart coding recommendations, and integrating security into development pipelines. Authentication processes are also being streamlined, with AI dynamically assigning roles and permissions as services deploy across cloud environments. This integration enhances collaboration between developers and security teams while reducing risks.

CISOs must acknowledge the dual-edged nature of AI: while it introduces new risks, it also offers powerful tools to counter cyber threats. By leveraging AI to automate tasks, detect vulnerabilities, and respond to threats in real-time, organizations can strengthen their defenses and adapt to an evolving threat landscape.

The convergence of technology and security in 2025 calls for strategic innovation, enabling enterprises to not only meet compliance requirements but also proactively address emerging risks.


Databricks Secures $10 Billion in Funding, Valued at $62 Billion

 


San Francisco-based data analytics leader Databricks has achieved a record-breaking milestone, raising $10 billion in its latest funding round. This has elevated the company's valuation to an impressive $62 billion, paving the way for a potential initial public offering (IPO).

Series J Funding and Key Investors

  • The Series J funding round featured prominent investors such as Thrive Capital and Andreessen Horowitz, both of whom are also investors in OpenAI.
  • This funding round ties with Microsoft’s $10 billion investment in OpenAI in 2023, ranking among the largest venture investments ever made.
  • Such substantial investments underscore growing confidence in companies poised to lead the evolving tech landscape, which now requires significantly higher capital than in previous eras.

Enhancing Enterprise AI Capabilities

Databricks has long been recognized for providing enterprises with a secure platform for hosting and analyzing their data. In 2023, the company further bolstered its offerings by acquiring MosaicML, a generative AI startup. This acquisition allows Databricks to enable its clients to build tailored AI models within a secure cloud environment.

Introducing DBRX: Advanced AI for Enterprises

In March, Databricks unveiled DBRX, an advanced large language model (LLM) developed through the MosaicML acquisition. DBRX gives Databricks' 12,000 clients a secure AI option, minimizing the risks associated with exposing proprietary data to external AI models.

Unlike massive models such as Google’s Gemini or OpenAI’s GPT-4, DBRX prioritizes efficiency and practicality. It addresses specific enterprise needs, such as:

  • Fraud detection in numerical data for financial firms
  • Analyzing patient records to identify disease patterns in healthcare

Efficiency Through "Mixture-of-Experts" Design

DBRX employs a unique “mixture-of-experts” design, dividing its functionality into 16 specialized areas. A built-in "router" directs tasks to the appropriate expert, reducing computational demands. Although the full model has 132 billion parameters, only 36 billion are used at any given time, making it energy-efficient and cost-effective.

This efficiency lowers barriers for businesses aiming to integrate AI into daily operations, improving the economics of AI deployment.
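
The toy sketch below illustrates the mixture-of-experts idea in a few lines: a router scores 16 tiny stand-in "experts" and activates only a handful per input. The top-4 routing and the miniature matrices are assumptions for illustration and bear no relation to DBRX's real architecture.

```python
import numpy as np

NUM_EXPERTS = 16   # Matches the 16 specialized areas described above.
TOP_K = 4          # Only a few experts are active per input (an assumption for this sketch).

rng = np.random.default_rng(0)
router_weights = rng.normal(size=(8, NUM_EXPERTS))               # Toy router: maps an 8-dim input to expert scores.
experts = [rng.normal(size=(8, 8)) for _ in range(NUM_EXPERTS)]  # Toy "experts" as small matrices.

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route the input to the top-k experts and combine their outputs."""
    scores = x @ router_weights                  # The built-in "router" scores each expert.
    top_k = np.argsort(scores)[-TOP_K:]          # Pick only the most relevant experts.
    weights = np.exp(scores[top_k]) / np.exp(scores[top_k]).sum()  # Softmax over the chosen experts.
    # Only the selected experts run, so most parameters stay idle for this input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

print(moe_forward(rng.normal(size=8)).shape)  # (8,)
```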

Positioning for the Future

Databricks CEO Ali Ghodsi highlighted the company's vision during a press event in March: “These are still the early days of AI. We are positioning the Databricks Data Intelligence Platform to deliver long-term value . . . and our team is committed to helping companies across every industry build data intelligence.”

With this landmark funding round, Databricks continues to solidify its role as a trailblazer in data analytics and enterprise AI. By focusing on secure, efficient, and accessible AI solutions, the company is poised to shape the future of technology across industries.

AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative AI (artificial intelligence) highlights how urgent it is to address privacy and ethical issues related to the use of these technologies across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains as well as the pressing necessity for Data Protection Officers (DPOs) to handle the issues it presents for their businesses. 

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten specific individuals as well as the public at large; the leakage of sensitive personal information in model outputs; the inherent bias in generative algorithms; and the overestimation of AI capabilities that results in inaccurate output (also known as AI hallucinations), which often concerns real individuals. 

So, what are the AI-driven dark patterns? These are deceptive UI strategies that use AI to influence application users into making decisions that favour the company rather than the user. These designs employ user psychology and behaviour in more sophisticated ways than typical dark patterns. 

Imagine getting a video call from your bank manager (created by a deepfake) informing you of some suspicious activity on your account. The AI customises the call for your individual bank branch, your bank manager's vocal patterns, and even their look, making it quite convincing. This deepfake call could tempt you to disclose sensitive data or click on suspicious links. 

Another alarming example of AI-driven dark patterns may be hostile actors creating highly targeted social media profiles that exploit your child's vulnerabilities. The AI can analyse your child's online conduct and create fake friendships or relationships that could trick the child into disclosing personal information or even their location. Thus, the question arises: what can we do now to minimise these ills? How do we prevent future scenarios in which cyber criminals and even ill-intentioned organisations contact us and our loved ones via technologies on which we have come to rely for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns necessitates a multifaceted approach that includes consumers, developers, and regulatory organisations. The globally recognised privacy principles of data quality, data collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation are universally applicable to all systems that handle personal data, including training algorithms and generative AI. We must now test these principles to discover if they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people on AI-driven dark patterns and fraudulent techniques. This can be accomplished through public awareness campaigns, educational tools at all levels of the education system, and the incorporation of warnings into user interfaces, particularly on social media platforms popular with young people. Just as cigarette firms must disclose the risks of their products, so should the AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable users, to be critical consumers of information they come across online, especially when dealing with AI systems. In the twenty-first century, our educational systems should train members of society to question (far more) the source and intent of AI-generated content. 

Give the younger generation, and even the older ones, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users, or the parents of young users, to opt out of AI-powered suggestions or data collection. Governments and regulatory agencies play an important role in establishing clear rules and regulations for AI development and use. The European Union plans to propose its first such law this summer. The long-awaited EU AI Act puts many of these data protection and ethical concerns into action. This is a positive start.

Securing Generative AI: Tackling Unique Risks and Challenges

 

Generative AI has introduced a new wave of technological innovation, but it also brings a set of unique challenges and risks. According to Phil Venables, Chief Information Security Officer of Google Cloud, addressing these risks requires expanding traditional cybersecurity measures. Generative AI models are prone to issues such as hallucinations—where the model produces inaccurate or nonsensical content—and the leaking of sensitive information through model outputs. These risks necessitate the development of tailored security strategies to ensure safe and reliable AI use. 

One of the primary concerns with generative AI is data integrity. Models rely heavily on vast datasets for training, and any compromise in this data can lead to significant security vulnerabilities. Venables emphasizes the importance of maintaining the provenance of training data and implementing controls to protect its integrity. Without proper safeguards, models can be manipulated through data poisoning, which can result in the production of biased or harmful outputs. Another significant risk involves prompt manipulation, where adversaries exploit vulnerabilities in the AI model to produce unintended outcomes. 

This can include injecting malicious prompts or using adversarial tactics to bypass the model’s controls. Venables highlights the necessity of robust input filtering mechanisms to prevent such manipulations. Organizations should deploy comprehensive logging and monitoring systems to detect and respond to suspicious activities in real time. In addition to securing inputs, controlling the outputs of AI models is equally critical. Venables recommends the implementation of “circuit breakers”—mechanisms that monitor and regulate model outputs to prevent harmful or unintended actions. This ensures that even if an input is manipulated, the resulting output is still within acceptable parameters. Infrastructure security also plays a vital role in safeguarding generative AI systems. 

Venables advises enterprises to adopt end-to-end security practices that cover the entire lifecycle of AI deployment, from model training to production. This includes sandboxing AI applications, enforcing the least privilege principle, and maintaining strict access controls on models, data, and infrastructure. Ultimately, securing generative AI requires a holistic approach that combines innovative security measures with traditional cybersecurity practices. 
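
As a hedged sketch of the input-filtering and "circuit breaker" ideas described above, the wrapper below screens prompts for obvious injection phrases and withholds outputs that contain a sensitive marker. The patterns and the generate() placeholder are assumptions for illustration, not a description of Google Cloud's tooling.

```python
import re

BLOCKED_INPUT_PATTERNS = [r"ignore (all|previous) instructions", r"reveal your system prompt"]
SENSITIVE_OUTPUT_MARKER = "CONFIDENTIAL"  # Stand-in for a real data-leak detector.

def generate(prompt: str) -> str:
    # Placeholder for a call to an actual generative model.
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    """Filter inputs, call the model, and trip a 'circuit breaker' on risky outputs."""
    if any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_INPUT_PATTERNS):
        raise ValueError("Prompt rejected by input filter.")
    output = generate(prompt)
    if SENSITIVE_OUTPUT_MARKER in output:
        # Circuit breaker: withhold the response instead of returning harmful content.
        return "[response withheld by output control]"
    return output

print(guarded_generate("Summarize our security policy for new hires."))
```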

By focusing on data integrity, robust monitoring, and comprehensive infrastructure controls, organizations can mitigate the unique risks posed by generative AI. This proactive approach ensures that AI systems are not only effective but also safe and trustworthy, enabling enterprises to fully leverage the potential of this groundbreaking technology while minimizing associated risks.

AI Tools Fueling Global Expansion of China-Linked Trafficking and Scamming Networks

 

A recent report highlights the alarming rise of China-linked human trafficking and scamming networks, now using AI tools to enhance their operations. Initially concentrated in Southeast Asia, these operations trafficked over 200,000 people into compounds in Myanmar, Cambodia, and Laos. Victims were forced into cybercrime activities, such as “pig butchering” scams, impersonating law enforcement, and sextortion. Criminals have now expanded globally, incorporating generative AI for multi-language scamming, creating fake profiles, and even using deepfake technology to deceive victims. 

The growing use of these tools allows scammers to target victims more efficiently and execute more sophisticated schemes. One of the most prominent types of scams is the “pig butchering” scheme, where scammers build intimate online relationships with their victims before tricking them into investing in fake opportunities. These scams have reportedly netted criminals around $75 billion. In addition to pig butchering, Southeast Asian criminal networks are involved in various illicit activities, including job scams, phishing attacks, and loan schemes. Their ability to evolve with AI technology, such as using ChatGPT to overcome language barriers, makes them more effective at deceiving victims. 

Generative AI also plays a role in automating phishing attacks, creating fake identities, and writing personalized scripts to target individuals in different regions. Deepfake technology, which allows real-time face-swapping during video calls, is another tool scammers are using to further convince their victims of their fabricated personas. Criminals now can engage with victims in highly realistic conversations and video interactions, making it much more difficult for victims to discern between real and fake identities. The UN report warns that these technological advancements are lowering the barrier to entry for criminal organizations that may lack advanced technical skills but are now able to participate in lucrative cyber-enabled fraud. 

As scamming compounds continue to operate globally, there has also been an uptick in law enforcement seizing Starlink satellite devices used by scammers to maintain stable internet connections for their operations. The introduction of “crypto drainers,” a type of malware designed to steal funds from cryptocurrency wallets, has also become a growing concern. These drainers mimic legitimate services to trick victims into connecting their wallets, allowing attackers to gain access to their funds.  

As global law enforcement struggles to keep pace with the rapid technological advances used by these networks, the UN has stressed the urgency of addressing this growing issue. Failure to contain these ecosystems could have far-reaching consequences, not only for Southeast Asia but for regions worldwide. AI tools and the expanding infrastructure of scamming operations are creating a perfect storm for criminals, making it increasingly difficult for authorities to combat these crimes effectively. The future of digital scamming will undoubtedly see more AI-powered innovations, raising the stakes for law enforcement globally.

How Southeast Asian Cyber Syndicates Stole Billions


In 2023, cybercrime syndicates in Southeast Asia managed to steal up to $37 billion, according to a report by the United Nations Office on Drugs and Crime (UNODC).

Inside the World of Cybercrime Syndicates in Southeast Asia

This staggering figure highlights the rapid evolution of the transnational organized crime threat landscape in the region, which has become a hotbed for illegal cyber activities. The UNODC report points out that countries like Myanmar, Cambodia, and Laos have become prime locations for these crime syndicates.

These groups are involved in a range of fraudulent activities, including romance-investment schemes, cryptocurrency scams, money laundering, and unauthorized gambling operations.

Unveiling the Secrets of a $37 Billion Cybercrime Industry

The report also notes that these syndicates are increasingly adopting new service-based business models and advanced technologies, such as malware, deepfakes, and generative AI, to carry out their operations. One of the most alarming aspects of this rise in cybercrime is the professionalization and innovation of these criminal groups.

The UNODC report highlights that these syndicates are not just using traditional methods of fraud but are also integrating cutting-edge technologies to create more sophisticated and harder-to-detect schemes. For example, generative AI is being used to create phishing messages in multiple languages, chatbots that manipulate victims, and fake documents to bypass know-your-customer (KYC) checks.

How Advanced Tech Powers Southeast Asia's Cybercrime Surge

Deepfakes are also being used to create convincing fake videos and images to deceive victims. The report also sheds light on the role of messaging platforms like Telegram in facilitating these illegal activities.

Criminal syndicates are using Telegram to connect with each other, conduct business, and even run underground cryptocurrency exchanges and online gambling rings. This has led to the emergence of a "criminal service economy" in Southeast Asia, where organized crime groups are leveraging technological advances to expand their operations and diversify their activities.

Southeast Asia: The New Epicenter of Transnational Cybercrime

The impact of this rise in cybercrime is not just financial. It also has significant social and political implications. The report notes that the sheer scale of proceeds from the illicit economy reflects the growing professionalization of these criminal groups, which has made Southeast Asia a testing ground for transnational networks eager to expand their reach.

This has put immense pressure on law enforcement agencies in the region, which are struggling to keep up with the rapidly evolving threat landscape.

In response to this growing threat, the UNODC has called for increased international cooperation and stronger law enforcement efforts to combat cybercrime in Southeast Asia. The report emphasizes the need for a coordinated approach to tackle these transnational criminal networks and disrupt their operations.

It also highlights the importance of raising public awareness about the risks of cybercrime and promoting cybersecurity measures to protect individuals and businesses from falling victim to these schemes.

Ethics and Tech: Data Privacy Concerns Around Generative AI


The tech industry is embracing Generative AI, but the conversation around data privacy has become increasingly important. The recent “State of Ethics and Trust in Technology” report by Deloitte highlights the pressing ethical considerations that accompany the rapid adoption of these technologies. The report notes that 30% of organizations have adjusted new AI projects and 25% have modified existing ones in response to the EU AI Act.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse.

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.
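
As a hedged illustration of the data collection and usage concern, the sketch below redacts obvious personal identifiers from text before it is shared with an external model. Real deployments need far more robust anonymization; the regular expressions here are simplified assumptions.

```python
import re

REDACTION_RULES = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\+?\d[\d\s().-]{7,}\d",
    "ssn":   r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact_pii(text: str) -> str:
    """Replace common personal identifiers with placeholder tags before sharing text."""
    for label, pattern in REDACTION_RULES.items():
        text = re.sub(pattern, f"[{label.upper()}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact_pii(sample))  # Contact Jane at [EMAIL] or [PHONE].
```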

Ethical Considerations and Trust

80% of respondents are required to complete mandatory technology ethics training, marking a 7% increase since 2022. Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI. To address these concerns, the report points to several priorities:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.

AI-Generated Malware Discovered in the Wild

 

Researchers have found malicious code that they suspect was developed with the aid of generative artificial intelligence services and used to deploy the AsyncRAT malware in an email campaign directed at French users. 

While threat actors have employed generative AI technology to design convincing emails, government agencies have cautioned about the potential abuse of AI tools to create malicious software, despite the precautions and restrictions that vendors have implemented. 

Suspected cases of AI-created malware have been spotted in real attacks. The malicious PowerShell script that was uncovered earlier this year by cybersecurity firm Proofpoint was most likely generated by an AI system. 

As less technical threat actors depend more on AI to develop malware, HP security experts discovered a malicious campaign in early June that employed code commented in the same manner a generative AI system would. 

The VBScript established persistence on the compromised PC by generating scheduled activities and writing new keys to the Windows Registry. The researchers add that some of the indicators pointing to AI-generated malicious code include the framework of the scripts, the comments that explain each line, and the use of native language for function names and variables. 
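
Purely as an illustration of one such indicator, the sketch below computes a script's comment-to-code ratio. It is a crude triage heuristic invented for this example, not the method HP's researchers used.

```python
def comment_ratio(script_text: str, comment_prefixes: tuple[str, ...] = ("'", "#", "//", "REM ")) -> float:
    """Rough heuristic: fraction of non-empty lines that are comments."""
    lines = [line.strip() for line in script_text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    prefixes = tuple(p.upper() for p in comment_prefixes)
    comments = sum(1 for line in lines if line.upper().startswith(prefixes))
    return comments / len(lines)

suspicious_vbscript = """' Create a scheduled task for persistence
Set shell = CreateObject("WScript.Shell")
' Write a new key to the Windows Registry
shell.RegWrite "HKCU\\Software\\Example\\Run", "payload.exe"
"""

# A comment for nearly every line (ratio near 0.5 here) is unusual for hand-written malware.
print(round(comment_ratio(suspicious_vbscript), 2))
```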

AsyncRAT, an open-source, publicly available malware that can record keystrokes on the victim device and establish an encrypted connection for remote monitoring and control, is later downloaded and executed by the attacker. The malware can also deliver additional payloads. 

The HP Wolf Security research also states that, in terms of visibility, archives were the most popular delivery option in the first half of the year. Lower-level threat actors can use generative AI to create malware in minutes and customise it for assaults targeting different areas and platforms (Linux, macOS). 

Even if they do not use AI to create fully functional malware, hackers rely on it to accelerate their labour while developing sophisticated threats.

IT Leaders Raise Security Concerns Regarding Generative AI

 

According to a new Venafi survey, developers in almost all (83%) organisations utilise AI to generate code, raising concerns among security leaders that it might lead to a major security incident. 

In a report published earlier this month, the machine identity management company shared results indicating that AI-generated code is widening the gap between programming and security teams. 

The report, Organisations Struggle to Secure AI-Generated and Open Source Code, highlighted that while 72% of security leaders believe they have little choice but to allow developers to utilise AI in order to remain competitive, virtually all (92%) are concerned regarding its use. 

Because AI, particularly generative AI technology, is advancing so quickly, 66% of security leaders believe they will be unable to keep up. An even more significant number (78%) believe that AI-generated code will lead to a security reckoning for their organisation, and 59% are concerned about the security implications of AI. 

The top three issues most frequently mentioned by survey respondents are the following: 

  • Over-reliance on AI by developers, resulting in a drop in standards
  • Ineffective quality checking of AI-written code 
  • AI employing dated open-source libraries that have not been well maintained

“Developers are already supercharged by AI and won’t give up their superpowers. And attackers are infiltrating our ranks – recent examples of long-term meddling in open source projects and North Korean infiltration of IT are just the tip of the iceberg,” Kevin Bocek, Chief Innovation Officer at Venafi, stated. 

Furthermore, the Venafi poll reveals that AI-generated code raises not only technology issues, but also tech governance challenges. For example, nearly two-thirds (63%) of security leaders believe it is impossible to oversee the safe use of AI in their organisation because they lack visibility into where AI is being deployed. Despite concerns, fewer than half of firms (47%) have procedures in place to ensure the safe use of AI in development settings. 

“Anyone today with an LLM can write code, opening an entirely new front. It’s the code that matters, whether it is your developers hyper-coding with AI, infiltrating foreign agents or someone in finance getting code from an LLM trained on who knows what. We have to authenticate code from wherever it comes,” Bocek concluded. 

The Venafi report is the outcome of a poll of 800 security decision-makers from the United States, the United Kingdom, Germany, and France.

Project Strawberry: Advancing AI with Q-learning, A* Algorithms, and Dual-Process Theory

Project Strawberry, initially known as Q*, has quickly become a focal point of excitement and discussion within the AI community. The project aims to revolutionize artificial intelligence by enhancing its self-learning and reasoning capabilities, crucial steps toward achieving Artificial General Intelligence (AGI). By incorporating advanced algorithms and theories, Project Strawberry pushes the boundaries of what AI can accomplish, making it a topic of intense interest and speculation. 

At the core of Project Strawberry are several foundational algorithms that enable AI systems to learn and make decisions more effectively. The project utilizes Q-learning, a reinforcement learning technique that allows AI to determine optimal actions through trial and error, helping it navigate complex environments. Alongside this, the A* search algorithm provides efficient pathfinding capabilities, ensuring AI can find the best solutions to problems quickly and accurately. 
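
To make the "trial and error" idea concrete, here is a minimal Q-learning sketch on a toy five-state world. Nothing here reflects OpenAI's actual implementation, which has not been published; the reward setup and hyperparameters are assumptions for illustration.

```python
import random

# Toy line-world: states 0..4, reach state 4 for a reward of +1.
N_STATES, ACTIONS = 5, [-1, +1]          # Actions: move left or right.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # Learning rate, discount factor, exploration rate.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Core Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})  # Learned policy: move right.
```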

Additionally, the dual-process theory, inspired by human cognitive processes, is used to balance quick, intuitive judgments with thorough, deliberate analysis, enhancing decision-making abilities. Despite the project’s promising advancements, it also raises several concerns. One of the most significant risks involves encryption cracking, where advanced AI could potentially break encryption codes, posing a severe security threat. 

Furthermore, the issue of “AI hallucinations”—errors in AI outputs—remains a critical challenge that needs to be addressed to ensure accurate and trustworthy AI responses. Another concern is the high computational demands of Project Strawberry, which may lead to increased costs and energy consumption. Efficient resource management and optimization will be crucial to maintaining the project’s scalability and sustainability. The ultimate goal of Project Strawberry is to pave the way for AGI, where AI systems can perform any intellectual task a human can. 

Achieving AGI would revolutionize problem-solving across various fields, enabling AI to tackle long-term and complex challenges with advanced reasoning capabilities. OpenAI envisions developing “reasoners” that exhibit human-like intelligence, pushing the frontiers of AI research even further. While Project Strawberry represents a significant step forward in AI development, it also presents complex challenges that must be carefully navigated. 

The project’s potential has fueled widespread excitement and anticipation within the AI community, with many eagerly awaiting further updates and breakthroughs. As OpenAI continues to refine and develop Project Strawberry, it could set the stage for a new era in AI, bringing both remarkable possibilities and significant responsibilities.

World's First AI Law: A Tough Blow for Tech Giants


In May, EU member states, lawmakers, and the European Commission — the EU's executive body — finalized the AI Act, a landmark set of rules intended to govern how corporations develop, deploy, and use AI. 

The European Union's major AI law goes into effect on Thursday, bringing significant implications for American technology companies.

About the AI Act

The AI Act is a piece of EU legislation that regulates AI. The law, first proposed by the European Commission in 2020, seeks to address the harmful effects of artificial intelligence.

The legislation establishes a comprehensive and standardized regulatory framework for AI within the EU.

It will largely target huge U.S. tech businesses, which are currently the main architects and developers of the most advanced AI systems.

However, the laws will apply to a wide range of enterprises, including non-technology firms.

Tanguy Van Overstraeten, head of law firm Linklaters' technology, media and telecommunications practice in Brussels, described the EU AI Act as "the first of its kind in the world." It is expected to affect many enterprises, particularly those building AI systems, as well as those implementing or simply using them in certain scenarios, he said.

High-risk and low-risk AI systems

High-risk AI systems include self-driving cars, medical devices, loan decisioning systems, educational scoring systems, and remote biometric identification systems.

The regulation also prohibits AI uses whose level of risk is judged "unacceptable." Unacceptable-risk artificial intelligence applications include "social scoring" systems that evaluate citizens based on data gathering and analysis, predictive policing, and the use of emotion detection technology in the workplace or schools.

Implication for US tech firms

Amid a global craze over artificial intelligence, US behemoths such as Microsoft, Google, Amazon, Apple, and Meta have been aggressively working with and investing billions of dollars in firms they believe can lead the field.

Given the massive computer infrastructure required to train and run AI models, cloud platforms such as Microsoft Azure, Amazon Web Services, and Google Cloud are critical to supporting AI development.

In this regard, Big Tech companies will likely be among the most aggressively targeted names under the new regulations.

Generative AI and EU

The EU AI Act defines generative AI as "general-purpose" artificial intelligence. This label refers to tools designed to perform a wide range of tasks at a level on a par with, if not better than, a human.

General-purpose AI models include but are not limited to OpenAI's GPT, Google's Gemini, and Anthropic's Claude.

The AI Act imposes stringent standards on these systems, including compliance with EU copyright law, disclosure of how models are trained, routine testing, and proper cybersecurity measures.

Generative AI is Closing The Tech Gap Between Security Teams And Threat Actors

 

With over 17 billion records breached in 2023, data breaches have reached an all-time high. Businesses are more vulnerable than ever before due to increased ransomware attacks, third-party hacks, and the increasing sophistication of threat actors. 

Still, many security teams are ill-equipped, particularly given that new data from our team shows 55% of IT security leaders believe modern cybercriminals are more advanced than their internal teams. The perpetrators are raising their game as they adopt and weaponize the new generation of emerging artificial intelligence (AI) technology, while companies continue to slip behind. Security teams require the right technology and tools to overcome common obstacles and avoid falling victim to these malicious actors. 

It takes minutes, not days, for an attacker to exploit a vulnerability. Cybersecurity Ventures predicts that by 2031, a ransomware assault will occur every two seconds. The most powerful new instrument for fuelling attacks is generative AI (GenAI). 

It enables hackers to find gaps, automate attacks, and even mimic company employees to steal credentials and system access. According to the findings, the most concerning use cases for security teams include GenAI model prompt hacking (46%), LLM data poisoning (38%), ransomware-as-a-service (37%), API breaches (24%), and GenAI phishing. 

Ultimately, GenAI and other smart technologies are catching security personnel off guard. Researchers discovered that 35% feel the technology used in attacks is more sophisticated than what their team has access to. In fact, 53% of organisations fear that new AI tactics utilised by criminals are opening up new attack surfaces for which they are unprepared. Better technology will always win.

As attack methods evolve, it is logical to expect additional breaches, ransomware installations, and stolen data. According to 49% of security leaders, the frequency of cyberattacks has increased over the last year, while 43% think the severity of cyberattacks has increased. It's time for security teams to enhance their technology in order to catch up and move ahead, especially while other well-known industry pain points linger. 

While the digital divide may be growing due to new criminal usage of AI, long-standing industry issues are making matters worse. Despite the steady growth of the cybersecurity industry, an estimated 4 million security experts are still needed to fill open positions globally. One analyst now performs the duties of several. A lack of technology leads to manual work, mistakes, and exhaustion for understaffed security teams. Surprisingly, despite the ongoing cybersecurity talent need, our team found that only 10% of businesses have boosted cyber hiring in the last 12 months.

Why Every Business is Scrambling to Hire Cybersecurity Experts


 

The cybersecurity arena is developing at a breakneck pace, creating a significant talent shortage across the industry. This challenge was highlighted by Saugat Sindhu, Senior Partner and Global Head of Advisory Services at Wipro Ltd. He emphasised the pressing need for skilled cybersecurity professionals, noting that the rapid advancements in technology make it difficult for the industry to keep up.


Cybersecurity: A Business Enabler

Over the past decade, cybersecurity has transformed from a corporate function to a crucial business enabler. Sindhu pointed out that cybersecurity is now essential for all companies, not just as a compliance measure but as a strategic asset. Businesses, clients, and industries understand that neglecting cybersecurity can give competitors an advantage, making robust cybersecurity practices indispensable.

The role of the Chief Information Security Officer (CISO) has also evolved. Today, CISOs are responsible for ensuring that businesses have the necessary tools and technologies to grow securely. This includes minimising outages and reputational damage from cyber incidents. According to Sindhu, modern CISOs are more about enabling business operations rather than restricting them.

Generative AI is one of the latest disruptors in the cybersecurity field, much like the cloud was a decade ago. Sindhu explained that different sectors face varying levels of risk with AI adoption. For instance, healthcare, manufacturing, and financial services are particularly vulnerable to attacks like data poisoning, model inversions, and supply chain vulnerabilities. Ensuring the security of AI models is crucial, as vulnerabilities can lead to severe backdoor attacks.

At Wipro, cybersecurity is a top priority, involving multiple departments including the audit office, risk office, core security office, and IT office. Sindhu stated that cybersecurity considerations are now integrated into the onset of any technology transformation project, rather than being an afterthought. This proactive approach ensures that adequate controls are in place from the beginning.

Wipro is heavily investing in cybersecurity training for its employees and practitioners. The company collaborates with major universities in India to support training courses, making it easier to attract new talent. Sindhu emphasised the importance of continuous education and certification to keep up with the fast-paced changes in the field.

Wipro's commitment to cybersecurity is evident in its robust infrastructure. The company boasts over 9,000 cybersecurity specialists and operates 12 global cyber defence centres across more than 60 countries. This extensive network underscores Wipro's dedication to maintaining high security standards and addressing cyber risks proactively.

The rapid evolution of cybersecurity presents pivotal challenges, but also underscores the importance of viewing it as a business enabler. With the right training, proactive measures, and integrated approaches, companies like Wipro are striving to stay ahead of threats and ensure robust protection for their clients. As the demand for cybersecurity talent continues to grow, ongoing education and collaboration will be key to bridging the skills gap.



AI Brings A New Era of Cyber Threats – Are We Ready?

 



Cyberattacks are becoming alarmingly frequent, with a new attack occurring approximately every 39 seconds. These attacks, ranging from phishing schemes to ransomware, have devastating impacts on businesses worldwide. The cost of cybercrime is projected to hit $9.5 trillion in 2024, and with AI being leveraged by cybercriminals, this figure is likely to rise.

According to a recent RiverSafe report surveying Chief Information Security Officers (CISOs) in the UK, one in five CISOs identifies AI as the biggest cyber threat. The increasing availability and sophistication of AI tools are empowering cybercriminals to launch more complex and large-scale attacks. The National Cyber Security Centre (NCSC) warns that AI will significantly increase the volume and impact of cyberattacks, including ransomware, in the near future.

AI is enhancing traditional cyberattacks, making them more difficult to detect. For example, AI can modify malware to evade antivirus software. Once detected, AI can generate new variants of the malware, allowing it to persist undetected, steal data, and spread within networks. Additionally, AI can bypass firewalls by creating legitimate-looking traffic and generating convincing phishing emails and deepfakes to deceive victims into revealing sensitive information.

Policies to Mitigate AI Misuse

AI misuse is not only a threat from external cybercriminals but also from employees unknowingly putting company data at risk. One in five security leaders reported experiencing data breaches due to employees sharing company data with AI tools like ChatGPT. These tools are popular for their efficiency, but employees often do not consider the security risks when inputting sensitive information.

In 2023, ChatGPT suffered a widely reported data exposure incident, highlighting the risks associated with generative AI tools. Some companies have responded by banning such tools outright, but that is a short-term fix. The longer-term approach should combine education with carefully managed policies that balance the benefits of AI against its security risks.
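
One practical way to operationalise such a policy is to screen prompts for obvious sensitive patterns before they ever reach an external AI service. The following is a minimal illustrative sketch, not a complete data loss prevention solution; the pattern names, regexes, and blocking behaviour are assumptions chosen for the example.

```python
import re

# Illustrative patterns only; a real policy would cover many more data types
# (customer identifiers, credentials, internal project names, and so on).
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b\d{13,16}\b"),  # simplistic: long digit runs
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if any pattern matches."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

allowed, findings = screen_prompt(
    "Summarise the complaint we received from jane.doe@example.com yesterday."
)
if not allowed:
    print("Prompt blocked; sensitive data detected:", ", ".join(findings))
```

A filter like this would normally sit in a gateway or browser plugin in front of approved AI tools, so that policy is enforced consistently rather than left to each employee's judgement.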

The Growing Threat of Insider Risks

Insider threats are a significant concern, with 75% of respondents believing they pose a greater risk than external threats. Human error, whether through lack of awareness or unintentional mistakes, is a leading cause of data breaches. These threats are difficult to defend against because they can originate from employees, contractors, third parties, and anyone else with legitimate access to systems.

Despite the known risks, 64% of CISOs stated their organizations lack sufficient technology to protect against insider threats. The rise in digital transformation and cloud infrastructure has expanded the attack surface, making it difficult to maintain appropriate security measures. Additionally, the complexity of digital supply chains introduces new vulnerabilities, with trusted business partners responsible for up to 25% of insider threat incidents.

Preparing for AI-Driven Cyber Threats

The evolution of AI in cyber threats necessitates a revamp of cybersecurity strategies. Businesses must update their policies, best practices, and employee training to mitigate the potential damages of AI-powered attacks. With both internal and external threats on the rise, organisations need to adapt to the new age of cyber threats to protect their valuable digital assets effectively.




From Text to Action: Chatbots in Their Stone Age

The stone age of AI

Despite all the talk of generative AI disrupting the world, the technology has yet to significantly transform white-collar work. Workers are experimenting with chatbots for tasks like drafting emails, and businesses are running numerous pilots, but office work has not yet seen a major AI overhaul.

Chatbots and their limitations

That could be because we haven't given chatbots like Google's Gemini and OpenAI's ChatGPT the proper capabilities yet; they're typically limited to taking in and spitting out text via a chat interface.

Things could get more interesting in commercial settings as AI companies begin to deploy so-called "AI agents," which can take actions by running other software on a computer or over the internet.

Tool use for AI

Anthropic, a rival of OpenAI, has unveiled a major new capability built around the idea that tool use is the key to AI's next jump in usefulness. The company now lets developers instruct its chatbot Claude to call on external services and software to complete more useful tasks.

Claude can, for example, use a calculator to solve the maths problems that trip up large language models, query a database storing customer information, or be directed to use other programs on a user's computer when that would be helpful.
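
To make the mechanics concrete, here is a rough sketch of the general tool-use loop: the application offers the model a tool description, the model replies with a structured tool call instead of plain text, the application runs the tool, and the result is fed back so the model can answer in natural language. Everything below (the `call_model` stand-in, the message roles, the response format, the `CALCULATOR_TOOL` schema) is a simplified hypothetical, not Anthropic's actual SDK.

```python
# Hypothetical tool definition: a calculator the model may ask the app to run.
CALCULATOR_TOOL = {
    "name": "calculator",
    "description": "Evaluate a basic arithmetic expression and return the result.",
    "input_schema": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}

def run_calculator(expression: str) -> str:
    # Whitelist arithmetic characters; a real implementation would use a proper parser.
    if not set(expression) <= set("0123456789+-*/(). "):
        return "error: unsupported characters"
    return str(eval(expression))  # acceptable here only because of the whitelist above

def call_model(messages, tools):
    """Stand-in for a real chat-completions API call. It pretends the model asks
    for the calculator once, then answers in plain text after seeing the result."""
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_use", "name": "calculator",
                "input": {"expression": "1234 * 5678"}}
    result = next(m["content"] for m in messages if m["role"] == "tool")
    return {"type": "text", "content": f"The answer is {result}."}

def chat_with_tools(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    response = call_model(messages, tools=[CALCULATOR_TOOL])
    while response["type"] == "tool_use":            # the model asked to use a tool
        tool_result = run_calculator(**response["input"])
        messages.append({"role": "tool", "name": response["name"], "content": tool_result})
        response = call_model(messages, tools=[CALCULATOR_TOOL])
    return response["content"]                       # final natural-language answer

print(chat_with_tools("What is 1234 times 5678?"))
```

The important design point is that the model never executes anything itself: it only requests a tool call, and the application decides whether and how to run it, which is where safety checks belong.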

Anthropic has been assisting various companies in developing Claude-based aides for their employees. For example, the online tutoring business Study Fetch has created a means for Claude to leverage various platform tools to customize the user interface and syllabus content displayed to students.

Other businesses are also joining the AI Stone Age. At its I/O developer conference earlier this month, Google showed off a few prototype AI agents, among other new AI features. One of the agents was created to handle online shopping returns by searching for the receipt in the customer's Gmail account, completing the return form, and scheduling a package pickup.

Challenges and caution

  • While tool use is exciting, it comes with challenges. Language models, including large ones, don’t always understand context perfectly.
  • Ensuring that AI agents behave correctly and interpret user requests accurately remains a hurdle.
  • Companies are cautiously exploring these capabilities, aware of the potential pitfalls.

The Next Leap

Tool use marks the chatbot equivalent of the first stone tools: primitive, but a significant leap forward. Here's what we can expect:

Action-oriented chatbots

  • Chatbots that can interact with external services will be more useful. Imagine a chatbot that books flights, schedules meetings, or orders groceries—all through seamless interactions.
  • These chatbots won’t be limited to answering questions; they’ll take action based on user requests.

Enhanced Productivity

  • As chatbots gain tool-using abilities, productivity will soar. Imagine a virtual assistant that not only schedules your day but also handles routine tasks.
  • Businesses can benefit from AI agents that automate repetitive processes, freeing up human resources for more strategic work.

Risks of Generative AI for Organisations and How to Manage Them

Employers should be aware of the potential data protection issues before experimenting with generative AI tools like ChatGPT. You can't simply feed human resources data into a generative AI tool, given the wave of privacy and data protection laws passed in the US, Europe, and other countries in recent years. After all, employee data, including performance, financial, and even health data, is often highly sensitive.

Obviously, this is an area where companies should seek legal advice. It's also a good idea to consult with an AI expert regarding the ethics of utilising generative AI (to ensure that you're acting not only legally, but also ethically and transparently). But, as a starting point, here are two major factors that employers should be aware of. 

Feeding personal data

As I previously stated, employee data is often highly sensitive. It is precisely the type of data that, depending on your jurisdiction, is usually subject to the most stringent forms of legal protection.

This makes it risky to feed such data into a generative AI tool. Why? Because many generative AI services use the information provided to fine-tune the underlying language model. In other words, the tool may use the data you provide for training purposes and may eventually expose that information to other users. Suppose, for example, you use a generative AI tool to produce a report on employee salaries based on internal employee information. The tool could later draw on that data when generating responses for other users outside your organisation. Personal information can easily be absorbed by a generative AI tool and reused.

This isn't as shady as it sounds. Many generative AI programmes' terms and conditions explicitly state that data submitted to the AI may be used for training and fine-tuning, or disclosed in response to other users' queries. So when you agree to the terms of service, make sure you understand exactly what you're signing up to. Experts advise that any data given to a generative AI service be anonymised and free of personally identifiable information, a process frequently referred to as "de-identifying" the data.
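
As a rough illustration of what de-identification can look like in practice, the sketch below strips direct identifiers from a record and scrubs obvious identifiers out of free text before anything is sent to an external service. The field names, record structure, and patterns are hypothetical; real de-identification normally relies on dedicated tooling and a documented policy rather than a handful of string replacements.

```python
import re
from copy import deepcopy

# Hypothetical employee record; field names are illustrative only.
record = {
    "employee_id": "E-10482",
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "salary_band": "C2",
    "performance_note": "Jane exceeded targets in Q3; contact jane.doe@example.com.",
}

DIRECT_IDENTIFIERS = {"employee_id", "name", "email"}
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def de_identify(rec: dict) -> dict:
    """Remove direct identifiers and scrub obvious identifiers from free text."""
    clean = deepcopy(rec)
    for field in DIRECT_IDENTIFIERS:
        clean[field] = "[REDACTED]"
    # Scrub email addresses and the employee's first name from free-text fields.
    clean["performance_note"] = EMAIL_RE.sub("[EMAIL]", clean["performance_note"])
    clean["performance_note"] = clean["performance_note"].replace(
        rec["name"].split()[0], "[NAME]"
    )
    return clean

print(de_identify(record))
```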

Risks of generative AI outputs 

There are risks associated with the output or content developed by generative AIs, in addition to the data fed into them. In particular, there is a possibility that the output from generative AI technologies will be based on personal data acquired and handled in violation of data privacy laws. 

For example, suppose you ask a generative AI tool to produce a report on average IT salaries in your area. There is a possibility that the tool has scraped personal data from the internet without the individuals' consent, in violation of data protection rules, and then serves it to you. Employers who use personal data supplied by a generative AI tool could be held liable for data protection violations. For now this is a legal grey area, with the generative AI provider likely bearing most or all of the responsibility, but the risk remains.

Cases like this are already appearing. Indeed, one lawsuit claims that ChatGPT was trained on "massive amounts of personal data," including medical records and information about children, accessed without consent. You do not want your organisation to become unwittingly entangled in litigation like this. Essentially, we're talking about an "inherited" risk of violating data protection regulations, but it is a risk nonetheless.

The way forward

Employers must carefully evaluate the data protection and privacy implications of using generative AI and seek expert assistance. However, don't let this put you off adopting generative AI altogether. Used properly and within the bounds of the law, generative AI can be an extremely helpful tool for organisations.