Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label ChatGPT. Show all posts

ChatGPT Outage in the UK: OpenAI Faces Reliability Concerns Amid Growing AI Dependence

 


ChatGPT Outage: OpenAI Faces Service Disruption in the UK

On Thursday, OpenAI’s ChatGPT experienced a significant outage in the UK, leaving thousands of users unable to access the popular AI chatbot. The disruption, which began around 11:00 GMT, saw users encountering a “bad gateway error” message when attempting to use the platform. According to Downdetector, a website that tracks service interruptions, over 10,000 users reported issues during the outage, which persisted for several hours and caused widespread frustration.

OpenAI acknowledged the issue on its official status page, confirming that a fix was implemented by 15:09 GMT. The company assured users that it was monitoring the situation closely, but no official explanation for the cause of the outage has been provided so far. This lack of transparency has fueled speculation among users, with theories ranging from server overload to unexpected technical failures.

User Reactions: From Frustration to Humor

As the outage unfolded, affected users turned to social media to voice their concerns and frustrations. On X (formerly Twitter), one user humorously remarked, “ChatGPT is down again? During the workday? So you’re telling me I have to… THINK?!” While some users managed to find humor in the situation, others raised serious concerns about the reliability of AI services, particularly those who depend on ChatGPT for professional tasks such as content creation, coding assistance, and research.

ChatGPT has become an indispensable tool for millions since its launch in November 2022. OpenAI CEO Sam Altman recently revealed that by December 2024, the platform had reached over 300 million weekly users, highlighting its rapid adoption as one of the most widely used AI tools globally. However, the incident has raised questions about service reliability, especially among paying customers. OpenAI’s premium plans, which offer enhanced features, cost up to $200 per month, prompting some users to question whether they are getting adequate value for their investment.

The outage comes at a time of rapid advancements in AI technology. OpenAI and other leading tech firms have pledged significant investments into AI infrastructure, with a commitment of $500 billion toward AI development in the United States. While these investments aim to bolster the technology’s capabilities, incidents like this serve as a reminder of the growing dependence on AI tools and the potential risks associated with their widespread adoption.

The disruption highlights the importance of robust technical systems to ensure uninterrupted service, particularly for users who rely heavily on AI for their daily tasks. Despite restoring services relatively quickly, OpenAI’s ability to maintain user trust and satisfaction may hinge on its efforts to improve its communication strategy and technical resilience. Paying customers, in particular, expect transparency and proactive measures to prevent such incidents in the future.

As artificial intelligence becomes more deeply integrated into everyday life, service disruptions like the ChatGPT outage underline both the potential and limitations of the technology. Users are encouraged to stay informed through OpenAI’s official channels for updates on any future service interruptions or maintenance activities.

Moving forward, OpenAI may need to implement backup systems and alternative solutions to minimize the impact of outages on its user base. Clearer communication during disruptions and ongoing efforts to enhance technical infrastructure will be key to ensuring the platform’s reliability and maintaining its position as a leader in the AI industry.

The Rise of Agentic AI: How Autonomous Intelligence Is Redefining the Future

 


The Evolution of AI: From Generative Models to Agentic Intelligence

Artificial intelligence is rapidly advancing beyond its current capabilities, transitioning from tools that generate content to systems capable of making autonomous decisions and pursuing long-term objectives. This next frontier, known as Agentic AI, has the potential to revolutionize how machines interact with the world by functioning independently and adapting to complex environments.

Generative AI vs. Agentic AI: A Fundamental Shift

Generative AI models, such as ChatGPT and Google Gemini, analyze patterns in vast datasets to generate responses based on user prompts. These systems are highly versatile and assist with a wide range of tasks but remain fundamentally reactive, requiring human input to function. In contrast, agentic AI introduces autonomy, allowing machines to take initiative, set objectives, and perform tasks without continuous human oversight.

The key distinction lies in their problem-solving approaches. Generative AI acts as a responsive assistant, while agentic AI serves as an independent collaborator, capable of analyzing its environment, recognizing priorities, and making proactive decisions. By enabling machines to work autonomously, agentic AI offers the potential to optimize workflows, adapt to dynamic situations, and manage complex objectives over time.

Agentic AI systems leverage advanced planning modules, memory retention, and sophisticated decision-making frameworks to achieve their goals. These capabilities allow them to:

  • Break down complex objectives into manageable tasks
  • Monitor progress and maintain context over time
  • Adjust strategies dynamically based on changing circumstances

By incorporating these features, agentic AI ensures continuity and efficiency in executing long-term projects, distinguishing it from its generative counterparts.

Applications of Agentic AI

The potential impact of agentic AI spans multiple industries and applications. For example:

  • Business: Automating routine tasks, identifying inefficiencies, and optimizing workflows without human intervention.
  • Manufacturing: Overseeing production processes, responding to disruptions, and optimizing resource allocation autonomously.
  • Healthcare: Managing patient care plans, identifying early warning signs, and recommending proactive interventions.

Major AI companies are already exploring agentic capabilities. Reports suggest that OpenAI is working on projects aimed at enhancing AI autonomy, potentially enabling systems to control digital environments with minimal human input. These advancements highlight the growing importance of autonomous systems in shaping the future of technology.

Challenges and Ethical Considerations

Despite its transformative potential, agentic AI raises several challenges that must be addressed:

  • Transparency: Ensuring users understand how decisions are made.
  • Ethical Boundaries: Defining the level of autonomy granted to these systems.
  • Alignment: Maintaining alignment with human values and objectives to foster trust and widespread adoption.

Thoughtful development and robust regulation will be essential to ensure that agentic AI operates ethically and responsibly, mitigating potential risks while unlocking its full benefits.

The transition from generative to agentic AI represents a significant leap in artificial intelligence. By integrating autonomous capabilities, these systems can transform industries, enhance productivity, and redefine human-machine relationships. However, achieving this vision requires a careful balance between innovation and regulation. As AI continues to evolve, agentic intelligence stands poised to usher in a new era of technological progress, fundamentally reshaping how we interact with the world.

Ensuring Governance and Control Over Shadow AI

 


AI has become almost ubiquitous in software development, as a GitHub survey shows, 92 per cent of developers in the United States use artificial intelligence as part of their everyday coding. This has led many individuals to participate in what is termed “shadow AI,” which involves leveraging the technology without the knowledge or approval of their organization’s Information Technology department and/or Chief Information Security Officer (CISO). 

This has increased their productivity. In light of this, it should not come as a surprise to learn that motivated employees will seek out the technology that can maximize their value potential as well as minimize repetitive tasks that interfere with more creative, challenging endeavours. It is not uncommon for companies to be curious about new technologies, especially those that can be used to make work easier and more efficient, such as artificial intelligence (AI) and automation tools. 

Despite the increasing amount of ingenuity, some companies remain reluctant to adopt technology at their first, or even second, glances. Nevertheless, resisting change does not necessarily mean employees will stop secretly using AI in a non-technical way, especially since tools such as Microsoft Copilot, ChatGPT, and Claude make these technologies more accessible to non-technical employees.

Known as shadow AI, shadow AI is a growing phenomenon that has gained popularity across many different sectors. There is a concept known as shadow AI, which is the use of artificial intelligence tools or systems without the official approval or oversight of the organization's information technology or security department. These tools are often adopted to solve immediate problems or boost efficiency within an organization. 

If these tools are not properly governed, they can lead to data breaches, legal violations, or regulatory non-compliance, which could pose significant risks to businesses. When Shadow AI is not properly managed, it can introduce vulnerabilities into users' infrastructure that can lead to unauthorized access to sensitive data. In a world where artificial intelligence is becoming increasingly ubiquitous, organizations should take proactive measures to make sure their operations are protected. 

Shadow generative AI poses specific and substantial risks to an organization's integrity and security, and poses significant threats to both of them. A non-regulated use of artificial intelligence can lead to decisions and actions that could undermine regulatory and corporate compliance. Particularly in industries with very strict data handling protocols, such as finance and healthcare, where strict data handling protocols are essential. 

As a result of the bias inherent in the training data, generative AI models can perpetuate these biases, generate outputs that breach copyrights, or generate code that violates licensing agreements. The untested code may cause the software to become unstable or error-prone, which can increase maintenance costs and cause operational disruptions. In addition, such code may contain undetected malicious elements, which increases the risk of data breach and system downtime, as well.

It is important to recognize that the mismanagement of Artificial Intelligence interactions in customer-facing applications can result in regulatory non-compliance, reputational damage, as well as ethical concerns, particularly when the outputs adversely impact the customer experience. Consequently, organization leaders must ensure that their organizations are protected from unintended and adverse consequences when utilizing generative AI by implementing robust governance measures to mitigate these risks. 

In recent years, AI technology, including generative and conversational AI, has seen incredible growth in popularity, leading to widespread grassroots adoption of these technologies. The accessibility of consumer-facing AI tools, which require little to no technical expertise, combined with a lack of formal AI governance, has enabled employees to utilize unvetted AI solutions, The 2025 CX Trends Report highlights a 250% year-over-year increase in shadow AI usage in some industries, exposing organizations to heightened risks related to data security, compliance, and business ethics. 

There are many reasons why employees turn to shadow AI for personal or team productivity enhancement because they are dissatisfied with their existing tools, because of the ease of access, and because they want to enhance the ability to accomplish specific tasks. In the future, this gap will grow as CX Traditionalists delay the development of AI solutions due to limitations in budget, a lack of knowledge, or an inability to get internal support from their teams. 

As a result, CX Trendsetters are taking steps to address this challenge by adopting approved artificial intelligence solutions like AI agents and customer experience automation, as well as ensuring the appropriate oversight and governance are in place. Identifying AI Implementations: CISOs and security teams, must determine who will be introducing AI throughout the software development lifecycle (SDLC), assess their security expertise, and evaluate the steps taken to minimize risks associated with AI deployment. 

In training programs, it is important to raise awareness among developers of the importance and potential of AI-assisted code as well as develop their skills to address these vulnerabilities. To identify vulnerable phases of the software development life cycle, the security team needs to analyze each phase of the SDLC and identify if any are vulnerable to unauthorized uses of AI. 

Fostering a Security-First Culture: By promoting a proactive protection mindset, organizations can reduce the need for reactive fixes by emphasizing the importance of securing their systems from the onset, thereby saving time and money. In addition to encouraging developers to prioritize safety and transparency over convenience, a robust security-first culture, backed by regular training, encourages a commitment to security. 

CISOs are responsible for identifying and managing risks associated with new tools and respecting decisions made based on thorough evaluations. This approach builds trust, ensures tools are properly vetted before deployment, and safeguards the company's reputation. Incentivizing Success: There is great value in having developers who contribute to bringing AI usage into compliance with their organizations. 

For this reason, these individuals should be promoted, challenged, and given measurable benchmarks to demonstrate their security skills and practices. As organizations reward these efforts, they create a culture in which AI deployment is considered a critical, marketable skill that can be acquired and maintained. If these strategies are implemented effectively, a CISO and development teams can collaborate to manage AI risks the right way, ensuring faster, safer, and more effective software production while avoiding the pitfalls caused by shadow AI. 

As an alternative to setting up sensitive alerts to make sure that confidential data isn't accidentally leaked, it is also possible to set up tools using artificial intelligence, for example, to help detect when a model of artificial intelligence incorrectly inputs or processes personal data, financial information, or other proprietary information. 

It is possible to identify and mitigate security breaches in real-time by providing real-time alerts in real-time, and by enabling management to reduce these breaches before they escalate into a full-blown security incident, adding a layer of security protection, in this way. 

When an API strategy is executed well, it is possible to give employees the freedom to use GenAI tools productively while safeguarding the company's data, ensuring that AI usage is aligned with internal policies, and protecting the company from fraud. To increase innovation and productivity, one must strike a balance between securing control and ensuring that security is not compromised.

Dutch Authority Flags Concerns Over AI Standardization Delays

 


As the Dutch privacy watchdog DPA announced on Wednesday, it was concerned that software developers developing artificial intelligence (AI) might use personal data. To get more information about this, DPA sent a letter to Microsoft-backed OpenAI. The Dutch Data Protection Authority (Dutch DPA) imposed a fine of 30.5 million euros on Clearview AI and ordered that they be subject to a penalty of up to 5 million euros if they fail to comply. 

As a result of the company's illegal database of billions of photographs of faces, including Dutch people, Clearview is an American company that offers facial recognition services. They have built an illegal database. According to their website, the Dutch DPA warns that Clearview's services are also prohibited. In light of the rapid growth of OpenAI's ChatGPT consumer app, governments, including those of the European Union, are considering how to regulate the technology. 

There is a senior official from the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP), who told Euronews that the process of developing artificial intelligence standards will need to take place faster, in light of the AI Act. Introducing the EU AI Act, which is the first comprehensive AI law in the world. The regulation aims to address health and safety risks, as well as fundamental human rights issues, as well as democracy, the rule of law, and environmental protection. 

By adopting artificial intelligence systems, there is a strong possibility to benefit society, contribute to economic growth, enhance EU innovation and competitiveness as well as enhance EU innovation and global leadership. However, in some cases, the specific characteristics of certain AI systems may pose new risks relating to user safety, including physical safety and fundamental rights. 

There have even been instances where some of these powerful AI models could pose systemic risks if they are widely used. Since there is a lack of trust, this creates legal uncertainty and may result in a slower adoption of AI technologies by businesses, citizens, and public authorities due to legal uncertainties. Regulatory responses by national governments that are disparate could fragment the internal market. 

To address these challenges, legislative action was required to ensure that both the benefits and risks of AI systems were adequately addressed to ensure that the internal market functioned well. As for the standards, they are a way for companies to be reassured, and to demonstrate that they are complying with the regulations, but there is still a great deal of work to be done before they are available, and of course, time is running out,” said Sven Stevenson, who is the agency's director of coordination and supervision for algorithms. 

CEN-CELENEC and ETSI were tasked by the European Commission in May last year to compile the underlying standards for the industry, which are still being developed and this process continues to be carried out. This data protection authority, which also oversees the General Data Protection Regulation (GDPR), is likely to have the shared responsibility of checking the compliance of companies with the AI Act with other authorities, such as the Dutch regulator for digital infrastructure, the RDI, with which they will likely share this responsibility. 

By August next year, all EU member states will have to select their AI regulatory agency, and it appears that in most EU countries, national data protection authorities will be an excellent choice. The AP has already dealt with cases in which companies' artificial intelligence tools were found to be in breach of GDPR in its capacity as a data regulator. 

A US facial recognition company known as Clearview AI was fined €30.5 million in September for building an illegal database of photos and unique biometric codes linked to Europeans in September, which included photos, unique biometric codes, and other information. The AI Act will be complementary to GDPR, since it focuses primarily on data processing, and would have an impact in the sense that it pertains to product safety in future cases. Increasingly, the Dutch government is promoting the development of new technologies, including artificial intelligence, to promote the adoption of these technologies. 

The deployment of such technologies could have a major impact on public values like privacy, equality in the law, and autonomy. This became painfully evident when the scandal over childcare benefits in the Netherlands was brought to public attention in September 2018. The scandal in question concerns thousands of parents who were falsely accused of fraud by the Dutch tax authorities because of discriminatory self-learning algorithms that were applied while attempting to regulate the distribution of childcare benefits while being faced with discriminatory self-learning algorithms. 

It has been over a year since the Amsterdam scandal raised a great deal of controversy in the Netherlands, and there has been an increased emphasis on the supervision of new technologies, and in particular artificial intelligence, as a result, the Netherlands intentionally emphasizes and supports a "human-centred approach" to artificial intelligence. Taking this approach means that AI should be designed and used in a manner that respects human rights as the basis of its purpose, design, and use. AI should not weaken or undermine public values and human rights but rather reinforce them rather than weaken them. 

During the last few months, the Commission has established the so-called AI Pact, which provides workshops and joint commitments to assist businesses in getting ready for the upcoming AI Act. On a national level, the AP has also been organizing pilot projects and sandboxes with the Ministry of RDI and Economic Affairs so that companies can become familiar with the rules as they become more aware of them. 

Further, the Dutch government has also published an algorithm register as of December 2022, which is a public record of algorithms used by the government, which is intended to ensure transparency and explain the results of algorithms, and the administration wants these algorithms to be legally checked for discrimination and arbitrariness.

The Privacy Risks of ChatGPT and AI Chatbots

 


AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt-out through privacy settings, all shared information—from casual remarks to sensitive details like financial data—can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI

Generative AI Fuels Financial Fraud

 


According to the FBI, criminals are increasingly using generative artificial intelligence (AI) to make their fraudulent schemes more convincing. This technology enables fraudsters to produce large amounts of realistic content with minimal time and effort, increasing the scale and sophistication of their operations.

Generative AI systems work by synthesizing new content based on patterns learned from existing data. While creating or distributing synthetic content is not inherently illegal, such tools can be misused for activities like fraud, extortion, and misinformation. The accessibility of generative AI raises concerns about its potential for exploitation.

AI offers significant benefits across industries, including enhanced operational efficiency, regulatory compliance, and advanced analytics. In the financial sector, it has been instrumental in improving product customization and streamlining processes. However, alongside these benefits, vulnerabilities have emerged, including third-party dependencies, market correlations, cyber risks, and concerns about data quality and governance.

The misuse of generative AI poses additional risks to financial markets, such as facilitating financial fraud and spreading false information. Misaligned or poorly calibrated AI models may result in unintended consequences, potentially impacting financial stability. Long-term implications, including shifts in market structures, macroeconomic conditions, and energy consumption, further underscore the importance of responsible AI deployment.

Fraudsters have increasingly turned to generative AI to enhance their schemes, using AI-generated text and media to craft convincing narratives. These include social engineering tactics, spear-phishing, romance scams, and investment frauds. Additionally, AI can generate large volumes of fake social media profiles or deepfake videos, which are used to manipulate victims into divulging sensitive information or transferring funds. Criminals have even employed AI-generated audio to mimic voices, misleading individuals into believing they are interacting with trusted contacts.

In one notable incident reported by the FBI, a North Korean cybercriminal used a deepfake video to secure employment with an AI-focused company, exploiting the position to access sensitive information. Similarly, Russian threat actors have been linked to fake videos aimed at influencing elections. These cases highlight the broad potential for misuse of generative AI across various domains.

To address these challenges, the FBI advises individuals to take several precautions. These include establishing secret codes with trusted contacts to verify identities, minimizing the sharing of personal images or voice data online, and scrutinizing suspicious content. The agency also cautions against transferring funds, purchasing gift cards, or sending cryptocurrency to unknown parties, as these are common tactics employed in scams.

Generative AI tools have been used to improve the quality of phishing messages by reducing grammatical errors and refining language, making scams more convincing. Fraudulent websites have also employed AI-powered chatbots to lure victims into clicking harmful links. To reduce exposure to such threats, individuals are advised to avoid sharing sensitive personal information online or over the phone with unverified sources.

By remaining vigilant and adopting these protective measures, individuals can mitigate their risk of falling victim to fraud schemes enabled by emerging AI technologies.

Malicious Python Packages Target Developers Using AI Tools





The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering a malware known as "JarkaStealer" to their victims.


Exploiting Developers’ Interest in AI

Free and free-ish generative AI platforms are gaining popularity, but the benefits of most of their advanced features cost money. This led certain developers to look for free alternatives, many of whom didn't really check the source to be sure. Cybercrime follows trends and the trend is that malicious code is being inserted into open-source software packages that at least initially may appear legitimate.

As George Apostopoulos, a founding engineer at Endor Labs, describes, attackers target less cautious developers, lured by free access to popular AI tools. "Many people don't know better and fall for these offers," he says.


The Harmful Python Packages

Two evil Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index, PyPI, one of the official repositories of open-source Python projects. The GPT-4 Turbo model by OpenAI and Claude chatbot by Anthropic were promised by API integrations from the user "Xeroline.".

While the packages seemed to work by connecting users to a demo version of ChatGPT, their true functionality was much nastier. The code also contained an ability to drop a Java archive (JAR) file which delivered the JarkaStealer malware to unsuspecting victims' systems.


What Is JarkaStealer?

The JarkaStealer is an infostealer malware that can extract sensitive information from infected systems. It has been sold on the Dark Web for as little as $20, but its more elaborate features can be bought for a few dollars more, which is designed to steal browser data and session tokens along with credentials for apps like Telegram, Discord, and Steam. It can also take screenshots of the victim's system, often revealing sensitive information.

Though the malware's effectiveness is highly uncertain, it is cheap enough and readily available to many attackers as an attractive tool. Its source code is even freely accessible on platforms like GitHub for an even wider reach.


Lessons for Developers

This incident points to risks in downloading unverified packages of open source, more so when handling emerging technologies such as AI. Development firms should screen all software sources to avoid shortcuts that seek free premium tools. Taking precautionary measures can save individuals and organizations from becoming victims of such attacks.

With regard to caution and best practices, developers are protected from malicious actors taking advantage of the GenAI boom.

PyPI Attack: Hackers Use AI Models to Deliver JarkaStealer via Python Libraries

PyPI Attack: Hackers Use AI Models to Deliver JarkaStealer via Python Libraries

Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. 

The supply chain campaign shows the advancement of cyber threats attacking developers and the urgent need for caution in open-source activities. 

Experts have found two malicious packages uploaded to the Python Index (PyPI) repository pretending to be popular artificial intelligence (AI) models like OpenAI Chatgpt and Anthropic Claude to distribute an information stealer known as JarkaStealer. 

About attack vector

Called gptplus and claudeai-eng, the packages were uploaded by a user called "Xeroline" last year, resulting in 1,748 and 1,826 downloads. The two libraries can't be downloaded from PyPI. According to Kaspersky, the malicious packages were uploaded to the repository by one author and differed only in name and description. 

Experts believe the package offered a way to access GPT-4 Turbo and Claude AI API but contained malicious code that, upon installation, started the installation of malware. 

Particularly, the "__init__.py" file in these packages included Base64-encoded data that included code to download a Java archive file ("JavaUpdater.jar") from a GitHub repository, also downloading the Java Runtime Environment (JRE) from a Dropbox URL in case Java isn't already deployed on the host, before running the JAR file.

The impact

Based on information stealer JarkaStealer, the JAR file can steal a variety of sensitive data like web browser data, system data, session tokens, and screenshots from a wide range of applications like Steam, Telegram, and Discord. 

In the last step, the stolen data is archived, sent to the attacker's server, and then removed from the target's machine.JarkaStealer is known to offer under a malware-as-a-service (MaaS) model through a Telegram channel for a cost between $20 and $50, however, the source code has been leaked on GitHub. 

ClickPy stats suggest packages were downloaded over 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of an all-year supply chain attack campaign. 

How JarkaStealer steals

  • Steals web browser data- cookies, browsing history, and saved passwords. 
  • Compromises system data and setals OS details and user login details.
  • Steals session tokens from apps like Discord, Telegram, and Steam.
  • Captures real-time desktop activity through screenshots.

The stolen information is compressed and transmitted to a remote server controlled by the hacker, where it is removed from the target’s device.

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is yielding diminishing results while managing the demands of recent investments. 

The Information claims that OpenAI's upcoming AI model, codenamed Orion, is outperforming its predecessors in terms of performance gains. In staff testing, Orion reportedly achieved the GPT-4 performance level after only 20% of its training. 

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” noted employees in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.

With this financial backing, investors' expectations rise, as do technical hurdles that confound typical AI scaling approaches. If these early versions do not live up to expectations, OpenAI's future fundraising chances may not be as attractive. The report's limitations underscore a major difficulty for the entire AI industry: the decreasing availability of high-quality training data and the need to remain relevant in an increasingly competitive environment.

A June research (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. Developers have "largely squeezed as much out of" the data that has been utilised to enable the tremendous gains in AI that we have witnessed in recent years, according to The Information. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges. 

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

Want to Make the Most of ChatGPT? Here Are Some Go-To Tips

 







Within a year and a half, ChatGPT has grown from an AI prototype to a broad productivity assistant, even sporting its text and code editor called Canvas. Soon, OpenAI will add direct web search capability to ChatGPT, putting the platform at the same table as Google's iconic search. With these fast updates, ChatGPT is now sporting quite a few features that may not be noticed at first glance but are deepening the user experience if one knows where to look.

This is the article that will teach you how to tap into ChatGPT, features from customization settings to unique prompting techniques, and not only five must-know tips will be useful in unlocking the full range of abilities of ChatGPT to any kind of task, small or big.


1. Rename Chats for Better Organisation

A new conversation with ChatGPT begins as a new thread, meaning that it will remember all details concerning that specific exchange but "forget" all the previous ones. This way, you can track the activities of current projects or specific topics because you can name your chats. The chat name that it might try to suggest is related to the flow of the conversation, and these are mostly overlooked contexts that users need to recall again. Renaming your conversations is one simple yet powerful means of staying organised if you rely on ChatGPT for various tasks.

To give a name to a conversation, tap the three dots next to the name in the sidebar. You can also archive older chats to remove them from the list without deleting them entirely, so you don't lose access to the conversations that are active.


2. Customise ChatGPT through Custom Instructions

Custom Instructions in ChatGPT is a chance to make your answers more specific to your needs because you will get to share your information and preferences with the AI. This is a two-stage personalization where you are explaining to ChatGPT what you want to know about yourself and, in addition, how you would like it to be returned. For instance, if you ask ChatGPT for coding advice several times a week, you can let the AI know what programming languages you are known in or would like to be instructed in so it can fine-tune the responses better. Or, you should be able to ask for ChatGPT to provide more verbose descriptions or to skip steps in order to make more intuitive knowledge of a topic.

To set up personal preferences, tap the profile icon on the upper right, and then from the menu, "Customise ChatGPT," and then fill out your preferences. Doing this will enable you to get responses tailored to your interests and requirements.


3. Choose the Right Model for Your Use

If you are a subscriber to ChatGPT Plus, you have access to one of several AI models each tailored to different tasks. The default model for most purposes is GPT-4-turbo (GPT-4o), which tends to strike the best balance between speed and functionality and even supports other additional features, including file uploads, web browsing, and dataset analysis.

However, other models are useful when one needs to describe a rather complex project with substantial planning. You may initiate a project using o1-preview that requires deep research and then shift the discussion to GPT-4-turbo to get quick responses. To switch models, you can click on the model dropdown at the top of your screen or type in a forward slash (/) in the chat box to get access to more available options including web browsing and image creation.


4. Look at what the GPT Store has available in the form of Mini-Apps

Custom GPTs, and the GPT Store enable "mini-applications" that are able to extend the functionality of the platform. The Custom GPTs all have some inbuilt prompts and workflows and sometimes even APIs to extend the AI capability of GPT. For instance, with Canva's GPT, you are able to create logos, social media posts, or presentations straight within the ChatGPT portal by linking up the Canva tool. That means you can co-create visual content with ChatGPT without having to leave the portal.

And if there are some prompts you often need to apply, or some dataset you upload most frequently, you can easily create your Custom GPT. This would be really helpful to handle recipes, keeping track of personal projects, create workflow shortcuts and much more. Go to the GPT Store by the "Explore GPTs" button in the sidebar. Your recent and custom GPTs will appear in the top tab, so find them easily and use them as necessary.


5. Manage Conversations with a Fresh Approach

For the best benefit of using ChatGPT, it is key to understand that every new conversation is an independent document with its "memory." It does recall enough from previous conversations, though generally speaking, its answers depend on what is being discussed in the immediate chat. This made chats on unrelated projects or topics best started anew for clarity.

For long-term projects, it might even be logical to go on with a single thread so that all relevant information is kept together. For unrelated topics, it might make more sense to start fresh each time to avoid confusion. Another way in which archiving or deleting conversations you no longer need can help free up your interface and make access to active threads easier is


What Makes AI Unique Compared to Other Software?

AI performs very differently from other software in that it responds dynamically, at times providing responses or "backtalk" and does not simply do what it is told to do. Such a property leads to some trial and error to obtain the desired output. For instance, one might prompt ChatGPT to review its own output as demonstrated by replacing single quote characters by double quote characters to generate more accurate results. This is similar to how a developer optimises an AI model, guiding ChatGPT to "think" through something in several steps.

ChatGPT Canvas and other features like Custom GPTs make the AI behave more like software in the classical sense—although, of course, with personality and learning. If ChatGPT continues to grow in this manner, features such as these may make most use cases easier and more delightful.

Following these five tips should help you make the most of ChatGPT as a productivity tool and keep pace with the latest developments. From renaming chats to playing around with Custom GPTs, all of them add to a richer and more customizable user experience.


OpenAI’s Disruption of Foreign Influence Campaigns Using AI

 

Over the past year, OpenAI has successfully disrupted over 20 operations by foreign actors attempting to misuse its AI technologies, such as ChatGPT, to influence global political sentiments and interfere with elections, including in the U.S. These actors utilized AI for tasks like generating fake social media content, articles, and malware scripts. Despite the rise in malicious attempts, OpenAI’s tools have not yet led to any significant breakthroughs in these efforts, according to Ben Nimmo, a principal investigator at OpenAI. 

The company emphasizes that while foreign actors continue to experiment, AI has not substantially altered the landscape of online influence operations or the creation of malware. OpenAI’s latest report highlights the involvement of countries like China, Russia, Iran, and others in these activities, with some not directly tied to government actors. Past findings from OpenAI include reports of Russia and Iran trying to leverage generative AI to influence American voters. More recently, Iranian actors in August 2024 attempted to use OpenAI tools to generate social media comments and articles about divisive topics such as the Gaza conflict and Venezuelan politics. 

A particularly bold attack involved a Chinese-linked network using OpenAI tools to generate spearphishing emails, targeting OpenAI employees. The attack aimed to plant malware through a malicious file disguised as a support request. Another group of actors, using similar infrastructure, utilized ChatGPT to answer scripting queries, search for software vulnerabilities, and identify ways to exploit government and corporate systems. The report also documents efforts by Iran-linked groups like CyberAveng3rs, who used ChatGPT to refine malicious scripts targeting critical infrastructure. These activities align with statements from U.S. intelligence officials regarding AI’s use by foreign actors ahead of the 2024 U.S. elections. 

However, these nations are still facing challenges in developing sophisticated AI models, as many commercial AI tools now include safeguards against malicious use. While AI has enhanced the speed and credibility of synthetic content generation, it has not yet revolutionized global disinformation efforts. OpenAI has invested in improving its threat detection capabilities, developing AI-powered tools that have significantly reduced the time needed for threat analysis. The company’s position at the intersection of various stages in influence operations allows it to gain unique insights and complement the work of other service providers, helping to counter the spread of online threats.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

ChatGPT Vulnerability Exposes Users to Long-Term Data Theft— Researcher Proves It

 



Independent security researcher Johann Rehberger found a flaw in the memory feature of ChatGPT. Hackers can manipulate the stored information that gets extracted to steal user data by exploiting the long-term memory setting of ChatGPT. This is actually an "issue related to safety, rather than security" as OpenAI termed the problem, showing how this feature allows storing of false information and captures user data over time.

Rehberger had initially reported the incident to OpenAI. The point was that the attackers could fill the AI's memory settings with false information and malicious commands. OpenAI's memory feature, in fact, allows the user's information from previous conversations to be put in that memory so during a future conversation, the AI can recall the age, preferences, or any other relevant details of that particular user without having been fed the same data repeatedly.

But what Rehberger had highlighted was the vulnerability that hackers capitalised on to permanently store false memories through a technique known as prompt injection. Essentially, it occurs when an attacker manipulates the AI by malicious content attached to emails, documents, or images. For example, he demonstrated how he could get ChatGPT to believe he was 102 and living in a virtual reality of sorts. Once these false memories were implanted, they could haunt and influence all subsequent interaction with the AI.


How Hackers Can Use ChatGPT's Memory to Steal Data

In proof of concept, Rehberger demonstrated how this vulnerability can be exploited in real-time for the theft of user inputs. In chat, hackers can send a link or even open an image that hooks ChatGPT into a malicious link and redirects all conversations along with the user data to a server owned by the hacker. Such attacks would not have to be stopped because the memory of the AI holds the instructions planted even after starting a new conversation.

Although OpenAI has issued partial fixes to prevent memory feature exploitation, the underlying mechanism of prompt injection remains. Attackers can still compromise ChatGPT's memory by embedding knowledge in their long-term memory that may have been seeded through unauthorised channels.


What Users Can Do

There are also concerns for users who care about what ChatGPT is going to remember about them in terms of data. Users need to monitor the chat session for any unsolicited shift in memory updates and screen regularly what is saved into and deleted from the memory of ChatGPT. OpenAI has put out guidance on how to manage the memory feature of the tool and how users may intervene in determining what is kept or deleted.

Though OpenAI did its best to address the issue, such an incident brings out a fact that continues to show how vulnerable AI systems remain when it comes to safety issues concerning user data and memory. Regarding AI development, safety regarding the protected sensitive information will always continue to raise concerns from developers to the users themselves.

Therefore, the weakness revealed by Rehberger shows how risky the introduction of AI memory features might be. The users need to be always alert about what information is stored and avoid any contacts with any content they do not trust. OpenAI is certainly able to work out security problems as part of its user safety commitment, but in this case, it also turns out that even the best solutions without active management on the side of a user will lead to breaches of data.




Slack Fixes AI Security Flaw After Expert Warning


 

Slack, the popular communication platform used by businesses worldwide, has recently taken action to address a potential security flaw related to its AI features. The company has rolled out an update to fix the issue and reassured users that there is no evidence of unverified access to their data. This move follows reports from cybersecurity experts who identified a possible weakness in Slack's AI capabilities that could be exploited by malicious actors.

The security concern was first brought to attention by PromptArmor, a cybersecurity firm that specialises in identifying vulnerabilities in AI systems. The firm raised alarms over the potential misuse of Slack’s AI functions, particularly those involving ChatGPT. These AI tools were intended to improve user experience by summarising discussions and assisting with quick replies. However, PromptArmor warned that these features could also be manipulated to access private conversations through a method known as "prompt injection."

Prompt injection is a technique where an attacker tricks the AI into executing harmful commands that are hidden within seemingly harmless instructions. According to PromptArmor, this could allow unauthorised individuals to gain access to private messages and even conduct phishing attacks. The firm also noted that Slack's AI could potentially be coerced into revealing sensitive information, such as API keys, which could then be sent to external locations without the knowledge of the user.

PromptArmor outlined a scenario in which an attacker could create a public Slack channel and embed a malicious prompt within it. This prompt could instruct the AI to replace specific words with sensitive data, such as an API key, and send that information to an external site. Alarmingly, this type of attack could be executed without the attacker needing to be a part of the private channel where the sensitive data is stored.

Further complicating the issue, Slack’s AI has the ability to pull data from both file uploads and direct messages. This means that even private files could be at risk if the AI is manipulated using prompt injection techniques.

Upon receiving the report, Slack immediately began investigating the issue. The company confirmed that, under specific and rare circumstances, an attacker could use the AI to gather certain data from other users in the same workspace. To address this, Slack quickly deployed a patch designed to fix the vulnerability. The company also assured its users that, at this time, there is no evidence indicating any customer data has been compromised.

In its official communication, Slack emphasised the limited nature of the threat and the quick action taken to resolve it. The update is now in place, and the company continues to monitor the situation to prevent any future incidents.

There are potential risks that come with integrating AI into workplace tools that need to be construed well. While AI has many upsides, including improved efficiency and streamlined communication, it also opens up new opportunities for cyber threats. It is crucial for organisations using AI to remain vigilant and address any security concerns that arise promptly.

Slack’s quick response to this issue stresses upon how imperative it is to stay proactive in a rapidly changing digital world.


AI Minefield: Risks of Gen AI in Your Personal Sphere

AI Minefield: Risks of Gen AI in Your Personal Sphere

Many customers are captivated by Gen AI, employing new technologies for a variety of personal and corporate purposes. 

However, many people ignore the serious privacy implications.

Is Generative AI all sunshine and rainbows?

Consumer AI products, such as OpenAI's ChatGPT, Google's Gemini, Microsoft Copilot software, and the new Apple Intelligence, are widely available and growing. However, the programs have various privacy practices in terms of how they use and retain user data. In many circumstances, users are unaware of how their data is or may be utilized.

This is where being an informed consumer becomes critical. According to Jodi Daniels, chief executive and privacy expert of Red Clover Advisors, which advises businesses on privacy issues, the granularity of what you can regulate varies depending on the technology. Daniels explained that there is no uniform opt-out for all technologies.

Privacy concerns

The rise of AI technologies, and their incorporation into so much of what customers do on their personal computers and cellphones, makes these problems much more pressing. A few months ago, for example, Microsoft introduced its first Surface PCs with a dedicated Copilot button on the keyboard for rapid access to the chatbot, fulfilling a promise made several months previously. 

Apple, for its part, presented its AI vision last month, which centered around numerous smaller models that operate on the company's devices and chips. Company officials have spoken publicly about the significance of privacy, which can be an issue with AI models.

Here are many approaches for consumers to secure their privacy in the new era of generative AI.

1. Use opt-outs provided by OpenAI and Google

Each generation AI tool has its own privacy policy, which may include opt-out choices. Gemini, for example, lets customers choose a retention time and erase certain data, among other activity limits.

ChatGPT allows users to opt out of having their data used for model training. To do so, click the profile symbol in the bottom-left corner of the page and then pick Data Controls from the Settings header. They must then disable the feature labeled "Improve the model for everyone." According to a FAQ on OpenAI's website, if this is disabled, fresh talks will not be utilized to train ChatGPT's models.

2. Opt-in, but for good reasons

Companies are incorporating modern AI into personal and professional solutions, like as Microsoft Copilot. Opt-in only for valid reasons. Copilot for Microsoft 365, for example, integrates with Word, Excel, and PowerPoint to assist users with activities such as analytics, idea development, and organization.

Microsoft claims that it does not share consumer data with third parties without permission, nor does it utilize customer data to train Copilot or other AI features without consent. 

Users can, however, opt in if they like by logging into the Power Platform admin portal, selecting settings, and tenant settings, and enabling data sharing for Dynamics 365 Copilot and Power Platform Copilot AI Features. They facilitate data sharing and saving.

3. Gen AI search: Setting retention period

Consumers may not think much before seeking information using AI, treating it like a search engine to create information and ideas. However, looking for specific types of information utilizing gen AI might be intrusive to a person's privacy, hence there are best practices for using such tools. Hoffman-Andrews recommends setting a short retention period for the generation AI tool. 

And, if possible, erase chats once you've gathered the desired information. Companies still keep server logs, but they can assist lessen the chance of a third party gaining access to your account, he explained. It may also limit the likelihood of sensitive information becoming part of the model training. "It really depends on the privacy settings of the particular site."

Investing in AI? Don’t Forget the Cyber Locks! VCs Advice.


The OpenAI Data Breach: A Wake-Up Call for Seed VCs

Security breaches are common in the current industry of artificial intelligence (AI) and machine learning (ML). However, when a prominent player like OpenAI falls victim to such an incident, it sends shockwaves through the tech community. This blog post delves into the recent OpenAI data breach and explores its impact on seed venture capitalists (VCs).

The Incident

OpenAI, known for its cutting-edge research in AI and its development of powerful language models, recently disclosed a security breach. Hackers gained unauthorized access to some of OpenAI’s internal systems, raising concerns about data privacy and security. While OpenAI assured users that no sensitive information was compromised, the incident highlights the vulnerability of AI companies to cyber threats.

Seed VCs on High Alert

Seed VCs, who invest in early-stage startups, should pay close attention to this breach. Here’s why:

Dependency on AI Companies

Seed VCs often collaborate with AI companies, providing funding and mentorship. As AI technologies become integral to various industries, VCs increasingly invest in startups leveraging AI/ML. The OpenAI breach underscores the need for due diligence when partnering with such firms.

Data Privacy Risks

Startups working with AI models generate and handle vast amounts of data. Seed VCs must assess the data security practices of their portfolio companies. A breach could harm the startup and impact the VC’s reputation and relationships with other investors.

Intellectual Property Concerns

Seed VCs invest in innovative ideas and technologies. If a startup’s IP is compromised due to lax security practices, it affects the VC’s investment. VCs should encourage startups to prioritize security and protect their intellectual assets.

Mitigating Risks: Seed VCs can take proactive steps

1. Due Diligence: Before investing, thoroughly evaluate a startup’s security protocols. Understand how they handle data, who has access, and their response plan in case of a breach.

2. Collaboration with AI Firms: Engage in open conversations with AI companies about security measures. VCs can influence best practices by advocating for robust security standards.

3. Education: Educate portfolio companies about security hygiene. Regular audits and training sessions can help prevent breaches.

OpenAI Hack Exposes Hidden Risks in AI's Data Goldmine


A recent security incident at OpenAI serves as a reminder that AI companies have become prime targets for hackers. Although the breach, which came to light following comments by former OpenAI employee Leopold Aschenbrenner, appears to have been limited to an employee discussion forum, it underlines the steep value of data these companies hold and the growing threats they face.

The New York Times detailed the hack after Aschenbrenner labelled it a “major security incident” on a podcast. However, anonymous sources within OpenAI clarified that the breach did not extend beyond an employee forum. While this might seem minor compared to a full-scale data leak, even superficial breaches should not be dismissed lightly. Unverified access to internal discussions can provide valuable insights and potentially lead to more severe vulnerabilities being exploited.

AI companies like OpenAI are custodians of incredibly valuable data. This includes high-quality training data, bulk user interactions, and customer-specific information. These datasets are crucial for developing advanced models and maintaining competitive edges in the AI ecosystem.

Training data is the cornerstone of AI model development. Companies like OpenAI invest vast amounts of resources to curate and refine these datasets. Contrary to the belief that these are just massive collections of web-scraped data, significant human effort is involved in making this data suitable for training advanced models. The quality of these datasets can impact the performance of AI models, making them highly coveted by competitors and adversaries.

OpenAI has amassed billions of user interactions through its ChatGPT platform. This data provides deep insights into user behaviour and preferences, much more detailed than traditional search engine data. For instance, a conversation about purchasing an air conditioner can reveal preferences, budget considerations, and brand biases, offering invaluable information to marketers and analysts. This treasure trove of data highlights the potential for AI companies to become targets for those seeking to exploit this information for commercial or malicious purposes.

Many organisations use AI tools for various applications, often integrating them with their internal databases. This can range from simple tasks like searching old budget sheets to more sensitive applications involving proprietary software code. The AI providers thus have access to critical business information, making them attractive targets for cyberattacks. Ensuring the security of this data is paramount, but the evolving nature of AI technology means that standard practices are still being established and refined.

AI companies, like other SaaS providers, are capable of implementing robust security measures to protect their data. However, the inherent value of the data they hold means they are under constant threat from hackers. The recent breach at OpenAI, despite being limited, should serve as a warning to all businesses interacting with AI firms. Security in the AI industry is a continuous, evolving challenge, compounded by the very AI technologies these companies develop, which can be used both for defence and attack.

The OpenAI breach, although seemingly minor, highlights the critical need for heightened security in the AI industry. As AI companies continue to amass and utilise vast amounts of valuable data, they will inevitably become more attractive targets for cyberattacks. Businesses must remain vigilant and ensure robust security practices when dealing with AI providers, recognising the gravity of the risks and responsibilities involved.


Breaking the Silence: The OpenAI Security Breach Unveiled

Breaking the Silence: The OpenAI Security Breach Unveiled

In April 2023, OpenAI, a leading artificial intelligence research organization, faced a significant security breach. A hacker gained unauthorized access to the company’s internal messaging system, raising concerns about data security, transparency, and the protection of intellectual property. 

In this blog, we delve into the incident, its implications, and the steps taken by OpenAI to prevent such breaches in the future.

The OpenAI Breach

The breach targeted an online forum where OpenAI employees discussed upcoming technologies, including features for the popular chatbot. While the actual GPT code and user data remained secure, the hacker obtained sensitive information related to AI designs and research. 

While Open AI shared the information with its staff and board members last year, it did not tell the public or the FBI about the breach, stating that doing so was unnecessary because no user data was stolen. 

OpenAI does not regard the attack as a national security issue and believes the attacker was a single individual with no links to foreign powers. OpenAI’s decision not to disclose the breach publicly sparked debate within the tech community.

Breach Impact

Leopold Aschenbrenner, a former OpenAI employee, had expressed worries about the company's security infrastructure and warned that its systems could be accessible to hostile intelligence services such as China. The company abruptly fired Aschenbrenner, although OpenAI spokesperson Liz Bourgeois told the New York Times that his dismissal had nothing to do with the document.

Similar Attacks and Open AI’s Response

This is not the first time OpenAI has had a security lapse. Since its launch in November 2022, ChatGPT has been continuously attacked by malicious actors, frequently resulting in data leaks. A separate attack exposed user names and passwords in February of this year. 

In March of last year, OpenAI had to take ChatGPT completely down to fix a fault that exposed customers' payment information to other active users, including their first and last names, email IDs, payment addresses, credit card info, and the last four digits of their card number. 

Last December, security experts found that they could convince ChatGPT to release pieces of its training data by prompting the system to endlessly repeat the word "poem."

OpenAI has taken steps to enhance security since then, including additional safety measures and a Safety and Security Committee.

AI-Generated Exam Answers Outperform Real Students, Study Finds

 

In a recent study, university exams taken by fictitious students using artificial intelligence (AI) outperformed those by real students and often went undetected by examiners. Researchers at the University of Reading created 33 fake students and employed the AI tool ChatGPT to generate answers for undergraduate psychology degree module exams.

The AI-generated responses scored, on average, half a grade higher than those of actual students. Remarkably, 94% of the AI essays did not raise any suspicion among markers, with only a 6% detection rate, which the study suggests is likely an overestimate. These findings, published in the journal Plos One, highlight a significant concern: "AI submissions robustly gained higher grades than real student submissions," indicating that students could use AI to cheat undetected and achieve better grades than their honest peers.

Associate Professor Peter Scarfe and Professor Etienne Roesch, who led the study, emphasized the need for educators globally to take note of these findings. Dr. Scarfe noted, "Many institutions have moved away from traditional exams to make assessment more inclusive. Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments. We won’t necessarily go back fully to handwritten exams - but the global education sector will need to evolve in the face of AI."

In the study, the AI-generated answers and essays were submitted for first-, second-, and third-year modules without the knowledge of the markers. The AI students outperformed real undergraduates in the first two years, but in the third-year exams, human students scored better. This result aligns with the idea that current AI struggles with more abstract reasoning. The study is noted as the largest and most robust blind study of its kind to date.

Academics have expressed concerns about the impact of AI on education. For instance, Glasgow University recently reinstated in-person exams for one course. Additionally, a study reported by the Guardian earlier this year found that most undergraduates used AI programs to assist with their essays, but only 5% admitted to submitting unedited AI-generated text in their assessments.