
Hong Kong Launches Its First Generative AI Model

 

Last week, Hong Kong launched its first generative artificial intelligence (AI) model, HKGAI V1, ushering in a new era in the city's AI development. The tool was designed by the Hong Kong Generative AI Research and Development Centre (HKGAI) for the Hong Kong Special Administrative Region (HKSAR) government's InnoHK innovation program. 

The locally designed AI tool, which is built on DeepSeek's model, has so far been tested by about 70 HKSAR government departments. According to a press statement from HKGAI, this accomplishment marks the successful localisation of DeepSeek in Hong Kong, injecting new vitality into the city's AI ecosystem and demonstrating the strong collaborative innovation capabilities of Hong Kong and the Chinese mainland in AI. 

Sun Dong, the HKSAR government's Secretary for Innovation, Technology and Industry, highlighted during the launch ceremony that AI is at the vanguard of a new round of technological and industrial revolution, and that Hong Kong is actively participating in this wave. 

Sun also emphasised the HKSAR government's broader efforts to encourage AI research, including the building of an AI supercomputing centre, a HK$3 billion (US$386 million) AI funding scheme, and the clustering of over 800 AI enterprises at Science Park and Cyberport. He expressed confidence that the locally produced large language model will soon be available not only to enterprises and individuals, but also to overseas Chinese communities. 

DeepSeek, founded by Liang Wenfeng, previously stunned the world with its low-cost AI model, which was created with substantially fewer computing resources than those used by larger US tech companies such as OpenAI and Meta. The HKGAI V1 system is the first in the world to use DeepSeek's full-parameter fine-tuning methodology. 

The financial secretary allocated HK$1 billion (US$128.6 million) in the budget to establish the Hong Kong AI Research and Development Institute. The government intends to launch the institute by the 2026-27 fiscal year, with funding set aside for the first five years to cover operating costs, including staffing. 

“Our goal is to ensure Hong Kong’s leading role in the development of AI … So the Institute will focus on facilitating upstream research and development [R&D], midstream and downstream transformation of R&D outcomes, and expanding application scenarios,” Sun noted.

Generative AI in Cybersecurity: A Double-Edged Sword

Generative AI (GenAI) is transforming the cybersecurity landscape, with 52% of CISOs prioritizing innovation using emerging technologies. However, a significant disconnect exists, as only 33% of board members view these technologies as a top priority. This gap underscores the challenge of aligning strategic priorities between cybersecurity leaders and company boards.

The Role of AI in Cybersecurity

According to the latest Splunk CISO Report, cyberattacks are becoming more frequent and sophisticated. Yet, 41% of security leaders believe that the requirements for protection are becoming easier to manage, thanks to advancements in AI. Many CISOs are increasingly relying on AI to:

  • Identify risks (39%)
  • Analyze threat intelligence (39%)
  • Detect and prioritize threats (35%)

However, GenAI is a double-edged sword. While it enhances threat detection and protection, attackers are also leveraging AI to boost their efforts. For instance:

  • 32% of attackers use AI to make attacks more effective.
  • 28% use AI to increase the volume of attacks.
  • 23% use AI to develop entirely new types of threats.

This has led to growing concerns among security professionals, with 36% of CISOs citing AI-powered attacks as their biggest worry, followed by cyber extortion (24%) and data breaches (23%).

Challenges and Opportunities in Cybersecurity

One of the major challenges is the gap in budget expectations. Only 29% of CISOs feel they have sufficient funding to secure their organizations, compared to 41% of board members who believe their budgets are adequate. Additionally, 64% of CISOs attribute the cyberattacks their firms experience to a lack of support.

Despite these challenges, there is hope. A vast majority of cybersecurity experts (86%) believe that AI can help attract entry-level talent to address the skills shortage, while 65% say AI enables seasoned professionals to work more productively. Collaboration between security teams and other departments is also improving:

  • 91% of organizations are increasing security training for legal and compliance staff.
  • 90% are enhancing training for security teams.

To strengthen cyber defenses, experts emphasize the importance of foundational practices:

  1. Strong Passwords and MFA: Poor password security is linked to 80% of data breaches. Companies are encouraged to use password managers and enforce robust password policies; a minimal policy check is sketched after this list.
  2. Regular Cybersecurity Training: Educating employees on risk management and security practices, such as using antivirus software and maintaining firewalls, can significantly reduce vulnerabilities.
  3. Third-Party Vendor Assessments: Organizations must evaluate third-party vendors for security risks, as breaches through these channels can expose even the most secure systems.
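
As a concrete illustration of the first practice above, here is a minimal sketch of a password-policy check in Python. The thresholds are illustrative assumptions, not a standard; align them with your organization's actual policy, and remember that a policy check complements MFA rather than replacing it.

```python
# Minimal sketch of enforcing a password policy.
# Thresholds are illustrative assumptions; adapt them to your
# organization's policy and pair the check with MFA.
import re


def meets_policy(password: str) -> bool:
    checks = [
        len(password) >= 12,                   # minimum length
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"\d", password),            # at least one digit
        re.search(r"[^A-Za-z0-9]", password),  # at least one symbol
    ]
    return all(checks)


print(meets_policy("correct-Horse-9-battery"))  # True
print(meets_policy("password123"))              # False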

Generative AI is reshaping the cybersecurity landscape, offering both opportunities and challenges. While it enhances threat detection and operational efficiency, it also empowers attackers to launch more sophisticated and frequent attacks. To navigate this evolving landscape, organizations must align strategic priorities, invest in AI-driven solutions, and reinforce foundational cybersecurity practices. By doing so, they can better protect their systems and data in an increasingly complex threat environment.

ChatGPT Outage in the UK: OpenAI Faces Reliability Concerns Amid Growing AI Dependence

 


ChatGPT Outage: OpenAI Faces Service Disruption in the UK

On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, saw users encountering a “bad gateway error” message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.

OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.

User Reactions: From Frustration to Humor

As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.

ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.

The outage comes at a time of rapid advancements in AI technology. OpenAI and other leading tech firms have pledged significant investments into AI infrastructure, with a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks associated with their widespread adoption.

The disruption highlights the importance of robust technical systems to ensure uninterrupted service, particularly for users who rely heavily on AI for their daily tasks. Although OpenAI restored service relatively quickly, its ability to maintain user trust and satisfaction may hinge on its efforts to improve its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.

As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.

Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.
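
What might such backup systems look like from the client side? Below is a hedged sketch of one common pattern: retry with exponential backoff, then fall back to an alternative provider. The ask_primary and ask_backup functions are hypothetical placeholders standing in for real API calls, not any actual OpenAI interface.

```python
# Hedged sketch of client-side resilience against a chatbot API outage:
# retry with exponential backoff, then fall back to a secondary provider.
# ask_primary/ask_backup are hypothetical stand-ins for real API calls.
import time


def ask_primary(prompt: str) -> str:
    raise ConnectionError("bad gateway")  # simulate the outage


def ask_backup(prompt: str) -> str:
    return f"[backup model] answer to: {prompt}"


def ask_with_fallback(prompt: str, retries: int = 3) -> str:
    for attempt in range(retries):
        try:
            return ask_primary(prompt)
        except ConnectionError:
            time.sleep(2 ** attempt)  # backoff: 1s, 2s, 4s between attempts
    return ask_backup(prompt)         # last resort: alternative provider


print(ask_with_fallback("Summarise today's meeting notes"))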

The Rise of Agentic AI: How Autonomous Intelligence Is Redefining the Future

 


The Evolution of AI: From Generative Models to Agentic Intelligence

Artificial intelligence is rapidly advancing beyond its current capabilities, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as Agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.

Generative AI vs. Agentic AI: A Fundamental Shift

Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.

The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.

Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals. These capabilities allow them to:

  • Break down complex objectives into manageable tasks
  • Monitor progress and maintain context over time
  • Adjust strategies dynamically based on changing circumstances

By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.
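
To make that loop concrete, here is a minimal sketch of how an agentic control loop might be structured. The Agent class and its plan/execute methods are illustrative stand-ins (in a real system the planning step would be an LLM call), not any particular vendor's framework.

```python
# Minimal sketch of an agentic control loop (illustrative only):
# plan -> execute -> remember, repeated until the goal is done.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # retains context across steps

    def plan(self) -> list[str]:
        """Break the high-level goal into smaller, ordered tasks."""
        # In a real system, an LLM call would produce this decomposition.
        return [f"step {i} toward: {self.goal}" for i in range(1, 4)]

    def execute(self, task: str) -> str:
        """Carry out one task and return an observation."""
        return f"completed {task}"

    def run(self) -> None:
        for task in self.plan():
            observation = self.execute(task)
            self.memory.append(observation)  # maintain context over time
            # A fuller agent would re-plan here if the observation
            # showed that circumstances had changed.


Agent(goal="compile a weekly sales report").run()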

Applications of Agentic AI

The potential impact of agentic AI spans multiple industries and applications. For example:

  • Business: Automating routine tasks, identifying inefficiencies, and optimizing workflows without human intervention.
  • Manufacturing: Overseeing production processes, responding to disruptions, and optimizing resource allocation autonomously.
  • Healthcare: Managing patient care plans, identifying early warning signs, and recommending proactive interventions.

Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.

Challenges and Ethical Considerations

Despite its transformative potential, agentic AI raises several challenges that must be addressed:

  • Transparency: Ensuring users understand how decisions are made.
  • Ethical Boundaries: Defining the level of autonomy granted to these systems.
  • Alignment: Maintaining alignment with human values and objectives to foster trust and widespread adoption.

Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.

The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.

Common AI Prompt Mistakes and How to Avoid Them

 

If you are running a business in 2025, you're probably already using generative AI in some capacity. GenAI tools and chatbots, such as ChatGPT and Google Gemini, have become indispensable in a variety of use cases, from content production to business planning. 

It's no surprise that more than 60% of businesses regard GenAI as one of their top priorities over the next two years. Furthermore, 87% of businesses are piloting or have already implemented generative AI tools in some way. 

But there is a catch: the quality of your inputs determines how well generative AI tools perform. Effective prompting can deliver accurate outputs that meet your requirements, whereas ineffective prompting can take you down the wrong path. 

If you've been struggling to maximise the potential of AI tools, it's time to rethink the prompts you're using. In this article, we'll look at the most common mistakes people make when prompting AI tools, as well as how to avoid them. 

What are AI prompts? 

Prompts are queries or commands you give to generative AI tools such as ChatGPT or Claude. They are the inputs you use to communicate with AI models and instruct them on what to do (or generate). AI models produce content based on the prompts you give them. 

The more context and specificity you provide, the more accurate the AI's response. For example, if you're looking for strategies to increase client loyalty, you could use the following generative AI prompt: “What are some cost-effective strategies to improve customer loyalty for a small business?” 
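
For readers using these tools programmatically, the same prompt can be sent through an API. Below is a minimal sketch using the OpenAI Python SDK; the model name is an assumption, so substitute whichever model your account uses.

```python
# Sending the example prompt via the OpenAI Python SDK (pip install openai).
# The model name below is an assumption; swap in the model you actually use.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "What are some cost-effective strategies to improve "
                "customer loyalty for a small business?"
            ),
        }
    ],
)
print(response.choices[0].message.content)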

Common AI prompt mistakes 

Being too vague: Neither artificial intelligence nor humans can read minds. You may have a clear picture of the problem you're attempting to solve, including constraints, approaches you've already tried, and potential objections. But unless you ask a very specific question, neither your human friends nor your AI assistant will be able to pull those images from your thoughts. When asking for assistance, be specific and complete. 

Not being clear about the format: Would you prefer a list, a discussion, or a table? Do you want a comparison of factors or a detailed dive into the issues? The mistake happens when you ask a question but do not instruct the AI on how you want the response to be presented. This isn't just about style and punctuation; it's about how the information is organised and presented for your final use. As with the first item on this list, be specific: tell the AI what you're looking for and what form the answer should take. 

Not knowing when to take a step back: Sometimes AI cannot solve the problem or deliver the level of quality required. Fundamentally, AI is a tool, and one tool cannot accomplish everything. Know when to hold 'em and when to fold 'em: know when it's time to go back to a search engine, check forums, or work out the answer yourself. There is a point of diminishing returns, and identifying it will save you time and frustration. 

How to write prompts successfully 

  • Use prompts that are specific, clear, and thorough. 
  • Remember that the AI is simply a program, not a magical oracle. 
  • Iterate and refine your queries by asking increasingly better questions.
  • Keep the prompt on topic. Specify details that provide context for your enquiries.

Navigating 2025: Emerging Security Trends and AI Challenges for CISOs

 

Security teams have always needed to adapt to change, but 2025 is poised to bring unique challenges, driven by advancements in artificial intelligence (AI), sophisticated cyber threats, and evolving regulatory mandates. Chief Information Security Officers (CISOs) face a rapidly shifting landscape that requires innovative strategies to mitigate risks and ensure compliance.

The integration of AI-enabled features into products is accelerating, with large language models (LLMs) introducing new vulnerabilities that attackers may exploit. As vendors increasingly rely on these foundational models, CISOs must evaluate their organization’s exposure and implement measures to counter potential threats. 

"The dynamic landscape of cybersecurity regulations, particularly in regions like the European Union and California, demands enhanced collaboration between security and legal teams to ensure compliance and mitigate risks," experts note. Balancing these regulatory requirements with emerging security challenges will be crucial for protecting enterprises.

Generative AI (GenAI), while presenting security risks, also offers opportunities to strengthen software development processes. By automating vulnerability detection and bridging the gap between developers and security teams, AI can improve efficiency and bolster security frameworks.

Trends to Watch in 2025

1. Vulnerabilities in Proprietary LLMs Could Lead to Major Security Incidents

Software vendors are rapidly adopting AI-enabled features, often leveraging proprietary LLMs. However, these models introduce a new attack vector. Proprietary models reveal little about their internal guardrails or origins, making them challenging for security professionals to manage. Vulnerabilities in these models could have cascading effects, potentially disrupting the software ecosystem at scale.

2. Cloud-Native Workloads and AI Demand Adaptive Identity Management

The rise of cloud-native applications and AI-driven systems is reshaping identity management. Traditional, static access control systems must evolve to handle the surge in service-based identities. Adaptive frameworks are essential for ensuring secure and efficient access in dynamic digital environments.

3. AI Enhances Security in DevOps

A growing number of developers—58% according to recent surveys—recognize their role in application security. However, the demand for skilled security professionals in DevOps remains unmet.

AI is bridging this gap by automating repetitive tasks, offering smart coding recommendations, and integrating security into development pipelines. Authentication processes are also being streamlined, with AI dynamically assigning roles and permissions as services deploy across cloud environments. This integration enhances collaboration between developers and security teams while reducing risks.

CISOs must acknowledge the dual-edged nature of AI: while it introduces new risks, it also offers powerful tools to counter cyber threats. By leveraging AI to automate tasks, detect vulnerabilities, and respond to threats in real-time, organizations can strengthen their defenses and adapt to an evolving threat landscape.

The convergence of technology and security in 2025 calls for strategic innovation, enabling enterprises to not only meet compliance requirements but also proactively address emerging risks.


Databricks Secures $10 Billion in Funding, Valued at $62 Billion

 


San Francisco-based data analytics leader Databricks has achieved a record-breaking milestone, raising $10 billion in its latest funding round. This has elevated the company's valuation to an impressive $62 billion, paving the way for a potential initial public offering (IPO).

Series J Funding and Key Investors

  • The Series J funding round featured prominent investors such as Thrive Capital and Andreessen Horowitz, both of whom are also investors in OpenAI.
  • This funding round matches Microsoft’s $10 billion investment in OpenAI in 2023, ranking among the largest venture investments ever made.
  • Such substantial investments underscore growing confidence in companies poised to lead the evolving tech landscape, which now requires significantly higher capital than in previous eras.

Enhancing Enterprise AI Capabilities

Databricks has long been recognized for providing enterprises with a secure platform for hosting and analyzing their data. In 2023, the company further bolstered its offerings by acquiring MosaicML, a generative AI startup. This acquisition allows Databricks to enable its clients to build tailored AI models within a secure cloud environment.

Introducing DBRX: Advanced AI for Enterprises

In March, Databricks unveiled DBRX, an advanced large language model (LLM) developed through the MosaicML acquisition. DBRX offers the company's 12,000 clients a secure AI solution, minimizing the risks associated with exposing proprietary data to external AI models.

Unlike massive models such as Google’s Gemini or OpenAI’s GPT-4, DBRX prioritizes efficiency and practicality. It addresses specific enterprise needs, such as:

  • Fraud detection in numerical data for financial firms
  • Analyzing patient records to identify disease patterns in healthcare

Efficiency Through "Mixture-of-Experts" Design

DBRX employs a unique “mixture-of-experts” design, dividing its functionality into 16 specialized areas. A built-in "router" directs tasks to the appropriate expert, reducing computational demands. Although the full model has 132 billion parameters, only 36 billion are used at any given time, making it energy-efficient and cost-effective.

This efficiency lowers barriers for businesses aiming to integrate AI into daily operations, improving the economics of AI deployment.
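
The routing idea is easier to see in code. Below is a toy sketch of mixture-of-experts routing, not Databricks' actual implementation: a router scores the 16 experts for each input and only the top few (four in this sketch, consistent with the 36-billion-of-132-billion active-parameter figure above) do any computation.

```python
# Toy illustration of "mixture-of-experts" routing (not DBRX's real code).
# A router scores 16 experts per input; only the top 4 are activated,
# so only a fraction of the total parameters does work at any one time.
import numpy as np

NUM_EXPERTS, TOP_K, DIM = 16, 4, 8
rng = np.random.default_rng(0)

router = rng.normal(size=(DIM, NUM_EXPERTS))        # router weights
experts = rng.normal(size=(NUM_EXPERTS, DIM, DIM))  # one weight matrix per expert


def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ router                # one score per expert
    top = np.argsort(scores)[-TOP_K:]      # pick the TOP_K highest-scoring experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over winners
    # Only the selected experts perform any computation.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))


out = moe_layer(rng.normal(size=DIM))
print(out.shape)  # (8,)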

Positioning for the Future

Databricks CEO Ali Ghodsi highlighted the company's vision during a press event in March: “These are still the early days of AI. We are positioning the Databricks Data Intelligence Platform to deliver long-term value … and our team is committed to helping companies across every industry build data intelligence.”

With this landmark funding round, Databricks continues to solidify its role as a trailblazer in data analytics and enterprise AI. By focusing on secure, efficient, and accessible AI solutions, the company is poised to shape the future of technology across industries.

AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative artificial intelligence (AI) highlights the urgency of addressing privacy and ethical issues related to the use of these technologies across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains, as well as the pressing need for Data Protection Officers (DPOs) to handle the issues it presents for their businesses. 

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten specific individuals as well as the public at large; the leakage of sensitive personal information in model outputs; the inherent bias in generative algorithms; and the overestimation of AI capabilities, which results in inaccurate outputs (also known as AI hallucinations) that often refer to real individuals. 

So, what are AI-driven dark patterns? These are deceptive UI strategies that use AI to manipulate application users into making decisions that favour the company rather than the user. These designs exploit user psychology and behaviour in more sophisticated ways than typical dark patterns. 

Imagine getting a video call from your bank manager (created by a deepfake) informing you of suspicious activity on your account. The AI customises the call to your bank branch, your bank manager's vocal patterns, and even their appearance, making it quite convincing. Such a deepfake call could tempt you to disclose sensitive data or click on suspicious links. 

Another alarming example of AI-driven dark patterns is hostile actors creating highly targeted social media profiles that exploit a child's vulnerabilities. The AI can analyse your child's online behaviour and fabricate friendships or relationships that trick the child into disclosing personal information, or even their location, to these people. Thus, the question arises: what can we do now to minimise these ills? How do we prevent future scenarios in which cybercriminals, and even ill-intentioned organisations, contact us and our loved ones via technologies we have come to rely on for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns necessitates a multifaceted approach that includes consumers, developers, and regulatory organisations. The globally recognised privacy principles of data quality, data collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation are universally applicable to all systems that handle personal data, including training algorithms and generative AI. We must now test these principles to discover if they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people about AI-driven dark patterns and fraudulent techniques. This can be accomplished through public awareness campaigns, educational tools at all levels of the education system, and the incorporation of warnings into user interfaces, particularly on social media platforms popular with young people. Just as cigarette firms must disclose the risks of their products, so should the AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable ones, to be critical consumers of the information they come across online, especially when dealing with AI systems. In the twenty-first century, our educational systems should train members of society to question, far more than they do today, the source and intent of AI-generated content. 

Give the younger generation, and even the older ones, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users or parents of young users to opt out of AI-powered suggestions or data collection. Governments and regulatory agencies play an important role to establish clear rules and regulations for AI development and use. The European Union plans to propose its first such law this summer. The long-awaited EU AI Act puts many of these data protection and ethical concerns into action. This is a positive start.