Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label OpenAI. Show all posts

OpenAI's O3 Achieves Breakthrough in Artificial General Intelligence

 



 
In recent times, the rapid development of artificial intelligence took a significant turn when OpenAI introduced its O3 model, a system demonstrating human-level performance on tests designed to measure “general intelligence.” This achievement has reignited discussions on artificial intelligence, with a focus on understanding what makes O3 unique and how it could shape the future of AI.

Performance on the ARC-AGI Test 
 
OpenAI's O3 model showcased its exceptional capabilities by matching the average human score on the ARC-AGI test. This test evaluates an AI system's ability to solve abstract grid problems with minimal examples, measuring how effectively it can generalize information and adapt to new scenarios. Key highlights include:
  • Test Outcomes: O3 not only matched human performance but set a new benchmark in Artificial General Intelligence (AGI) development.
  • Adaptability: The model demonstrated the ability to draw generalized rules from limited examples, a critical capability for AGI progress.
Breakthrough in Science Problem-Solving 
 
Beyond the ARC-AGI test, the O3 model excelled in solving complex scientific questions. It achieved an impressive score of 87.7% compared to the 70% score of PhD-level experts, underscoring its advanced reasoning abilities. 
 
While OpenAI has not disclosed the specifics of O3’s development, its performance suggests the use of simple yet effective heuristics similar to AlphaGo’s training process. By evaluating patterns and applying generalized thought processes, O3 efficiently solves complex problems, redefining AI capabilities. An example rule demonstrates its approach.

“Any shape containing a salient line will be moved to the end of that line and will cover all the overlapping shapes in its new position.”
 
O3 and O3 Mini models represent a significant leap in AI, combining unmatched performance with general learning capabilities. However, their potential brings challenges related to cost, security, and ethical adoption that must be addressed for responsible use. As technology advances into this new frontier, the focus must remain on harnessing AI advancements to facilitate progress and drive positive change. With O3, OpenAI has ushered in a new era of opportunity, redefining the boundaries of what is possible in artificial intelligence.

Dutch Authority Flags Concerns Over AI Standardization Delays

 


As the Dutch privacy watchdog DPA announced on Wednesday, it was concerned that software developers developing artificial intelligence (AI) might use personal data. To get more information about this, DPA sent a letter to Microsoft-backed OpenAI. The Dutch Data Protection Authority (Dutch DPA) imposed a fine of 30.5 million euros on Clearview AI and ordered that they be subject to a penalty of up to 5 million euros if they fail to comply. 

As a result of the company's illegal database of billions of photographs of faces, including Dutch people, Clearview is an American company that offers facial recognition services. They have built an illegal database. According to their website, the Dutch DPA warns that Clearview's services are also prohibited. In light of the rapid growth of OpenAI's ChatGPT consumer app, governments, including those of the European Union, are considering how to regulate the technology. 

There is a senior official from the Dutch privacy watchdog Autoriteit Persoonsgegevens (AP), who told Euronews that the process of developing artificial intelligence standards will need to take place faster, in light of the AI Act. Introducing the EU AI Act, which is the first comprehensive AI law in the world. The regulation aims to address health and safety risks, as well as fundamental human rights issues, as well as democracy, the rule of law, and environmental protection. 

By adopting artificial intelligence systems, there is a strong possibility to benefit society, contribute to economic growth, enhance EU innovation and competitiveness as well as enhance EU innovation and global leadership. However, in some cases, the specific characteristics of certain AI systems may pose new risks relating to user safety, including physical safety and fundamental rights. 

There have even been instances where some of these powerful AI models could pose systemic risks if they are widely used. Since there is a lack of trust, this creates legal uncertainty and may result in a slower adoption of AI technologies by businesses, citizens, and public authorities due to legal uncertainties. Regulatory responses by national governments that are disparate could fragment the internal market. 

To address these challenges, legislative action was required to ensure that both the benefits and risks of AI systems were adequately addressed to ensure that the internal market functioned well. As for the standards, they are a way for companies to be reassured, and to demonstrate that they are complying with the regulations, but there is still a great deal of work to be done before they are available, and of course, time is running out,” said Sven Stevenson, who is the agency's director of coordination and supervision for algorithms. 

CEN-CELENEC and ETSI were tasked by the European Commission in May last year to compile the underlying standards for the industry, which are still being developed and this process continues to be carried out. This data protection authority, which also oversees the General Data Protection Regulation (GDPR), is likely to have the shared responsibility of checking the compliance of companies with the AI Act with other authorities, such as the Dutch regulator for digital infrastructure, the RDI, with which they will likely share this responsibility. 

By August next year, all EU member states will have to select their AI regulatory agency, and it appears that in most EU countries, national data protection authorities will be an excellent choice. The AP has already dealt with cases in which companies' artificial intelligence tools were found to be in breach of GDPR in its capacity as a data regulator. 

A US facial recognition company known as Clearview AI was fined €30.5 million in September for building an illegal database of photos and unique biometric codes linked to Europeans in September, which included photos, unique biometric codes, and other information. The AI Act will be complementary to GDPR, since it focuses primarily on data processing, and would have an impact in the sense that it pertains to product safety in future cases. Increasingly, the Dutch government is promoting the development of new technologies, including artificial intelligence, to promote the adoption of these technologies. 

The deployment of such technologies could have a major impact on public values like privacy, equality in the law, and autonomy. This became painfully evident when the scandal over childcare benefits in the Netherlands was brought to public attention in September 2018. The scandal in question concerns thousands of parents who were falsely accused of fraud by the Dutch tax authorities because of discriminatory self-learning algorithms that were applied while attempting to regulate the distribution of childcare benefits while being faced with discriminatory self-learning algorithms. 

It has been over a year since the Amsterdam scandal raised a great deal of controversy in the Netherlands, and there has been an increased emphasis on the supervision of new technologies, and in particular artificial intelligence, as a result, the Netherlands intentionally emphasizes and supports a "human-centred approach" to artificial intelligence. Taking this approach means that AI should be designed and used in a manner that respects human rights as the basis of its purpose, design, and use. AI should not weaken or undermine public values and human rights but rather reinforce them rather than weaken them. 

During the last few months, the Commission has established the so-called AI Pact, which provides workshops and joint commitments to assist businesses in getting ready for the upcoming AI Act. On a national level, the AP has also been organizing pilot projects and sandboxes with the Ministry of RDI and Economic Affairs so that companies can become familiar with the rules as they become more aware of them. 

Further, the Dutch government has also published an algorithm register as of December 2022, which is a public record of algorithms used by the government, which is intended to ensure transparency and explain the results of algorithms, and the administration wants these algorithms to be legally checked for discrimination and arbitrariness.

Big Tech's Interest in LLM Could Be Overkill

 

AI models are like babies: continuous growth spurts make them more fussy and needy. As the AI race heats up, frontrunners such as OpenAI, Google, and Microsoft are throwing billions at massive foundational AI models comprising hundreds of billions of parameters. However, they may be losing the plot. 

Size matters 

Big tech firms are constantly striving to make AI models bigger. OpenAI recently introduced GPT-4o, a huge multimodal model that "can reason across audio, vision, and text in real time." Meanwhile, Meta and Google both developed new and enhanced LLMs, while Microsoft built its own, known as MAI-1.

And these companies aren't cutting corners. Microsoft's capital investment increased to $14 billion in the most recent quarter, and the company expects that figure to rise further. Meta cautioned that its spending could exceed $40 billion. Google's concepts may be even more costly.

Demis Hassabis, CEO of Google DeepMind, has stated that the company plans to invest more than $100 billion in AI development over time. Many people are chasing the elusive dream of artificial generative intelligence (AGI), which allows an AI model to self-teach and perform jobs it wasn't prepared for. 

However, Nick Frosst, co-founder of AI firm Cohere, believes that such an achievement may not be attainable with a single high-powered chatbot.

“We don’t think AGI is achievable through (large language models) alone, and as importantly, we think it’s a distraction. The industry has lost sight of the end-user experience with the current trajectory of model development with some suggesting the next generation of models will cost billions to train,” Frosst stated. 

Aside from the cost, huge AI models pose security issues and require a significant amount of energy. Furthermore, after a given amount of growth, studies have shown that AI models might reach a point of diminishing returns.

However, Bob Rogers, PhD, co-founder of BeeKeeperAI and CEO of Oii.ai, told The Daily Upside that creating large, all-encompassing AI models is sometimes easier than creating smaller ones. Focussing on capability rather than efficiency is "the path of least resistance," he claims. 

Some tech businesses are already investigating the advantages of going small: Google and Microsoft both announced their own small language models earlier this year; however, they do not seem to be at the top of earnings call transcripts.

The Future of Artificial Intelligence: Progress and Challenges



Artificial intelligence (AI) is rapidly transforming the world, and by 2025, its growth is set to reach new heights. While the advancements in AI promise to reshape industries and improve daily lives, they also bring a series of challenges that need careful navigation. From enhancing workplace productivity to revolutionizing robotics, AI's journey forward is as complex as it is exciting.

In recent years, AI has evolved from basic applications like chatbots to sophisticated systems capable of assisting with diverse tasks such as drafting emails or powering robots for household chores. Companies like OpenAI and Google’s DeepMind are at the forefront of creating AI systems with the potential to match human intelligence. Despite these achievements, the path forward isn’t without obstacles.

One major challenge in AI development lies in the diminishing returns from scaling up AI models. Previously, increasing the size of AI models drove progress, but developers are now focusing on maximizing computing power to tackle complex problems. While this approach enhances AI's capabilities, it also raises costs, limiting accessibility for many users. Additionally, training data has become a bottleneck. Many of the most valuable datasets have already been utilized, leading companies to rely on AI-generated data. This practice risks introducing biases into systems, potentially resulting in inaccurate or unfair outcomes. Addressing these issues is critical to ensuring that AI remains effective and equitable.

The integration of AI into robotics is another area of rapid advancement. Robots like Tesla’s Optimus, which can perform household chores, and Amazon’s warehouse automation systems showcase the potential of AI-powered robotics. However, making such technologies affordable and adaptable remains a significant hurdle. AI is also transforming workplaces by automating repetitive tasks like email management and scheduling. While these tools promise increased efficiency, businesses must invest in training employees to use them effectively.

Regulation plays a crucial role in guiding AI’s development. Countries like those in Europe and Australia are already implementing laws to ensure the safe and ethical use of AI, particularly to mitigate its risks. Establishing global standards for AI regulation is essential to prevent misuse and steer its growth responsibly.

Looking ahead, AI is poised to continue its evolution, offering immense potential to enhance productivity, drive innovation, and create opportunities across industries. While challenges such as rising costs, data limitations, and the need for ethical oversight persist, addressing these issues thoughtfully will pave the way for AI to benefit society responsibly and sustainably.

The Privacy Risks of ChatGPT and AI Chatbots

 


AI chatbots like ChatGPT have captured widespread attention for their remarkable conversational abilities, allowing users to engage on diverse topics with ease. However, while these tools offer convenience and creativity, they also pose significant privacy risks. The very technology that powers lifelike interactions can also store, analyze, and potentially resurface user data, raising critical concerns about data security and ethical use.

The Data Behind AI's Conversational Skills

Chatbots like ChatGPT rely on Large Language Models (LLMs) trained on vast datasets to generate human-like responses. This training often includes learning from user interactions. Much like how John Connor taught the Terminator quirky catchphrases in Terminator 2: Judgment Day, these systems refine their capabilities through real-world inputs. However, this improvement process comes at a cost: personal data shared during conversations may be stored and analyzed, often without users fully understanding the implications.

For instance, OpenAI’s terms and conditions explicitly state that data shared with ChatGPT may be used to improve its models. Unless users actively opt-out through privacy settings, all shared information—from casual remarks to sensitive details like financial data—can be logged and analyzed. Although OpenAI claims to anonymize and aggregate user data for further study, the risk of unintended exposure remains.

Real-World Privacy Breaches

Despite assurances of data security, breaches have occurred. In May 2023, hackers exploited a vulnerability in ChatGPT’s Redis library, compromising the personal data of around 101,000 users. This breach underscored the risks associated with storing chat histories, even when companies emphasize their commitment to privacy. Similarly, companies like Samsung faced internal crises when employees inadvertently uploaded confidential information to chatbots, prompting some organizations to ban generative AI tools altogether.

Governments and industries are starting to address these risks. For instance, in October 2023, President Joe Biden signed an executive order focusing on privacy and data protection in AI systems. While this marks a step in the right direction, legal frameworks remain unclear, particularly around the use of user data for training AI models without explicit consent. Current practices are often classified as “fair use,” leaving consumers exposed to potential misuse.

Protecting Yourself in the Absence of Clear Regulations

Until stricter regulations are implemented, users must take proactive steps to safeguard their privacy while interacting with AI chatbots. Here are some key practices to consider:

  1. Avoid Sharing Sensitive Information
    Treat chatbots as advanced algorithms, not confidants. Avoid disclosing personal, financial, or proprietary information, no matter how personable the AI seems.
  2. Review Privacy Settings
    Many platforms offer options to opt out of data collection. Regularly review and adjust these settings to limit the data shared with AI

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is yielding diminishing results while managing the demands of recent investments. 

The Information claims that OpenAI's upcoming AI model, codenamed Orion, is outperforming its predecessors in terms of performance gains. In staff testing, Orion reportedly achieved the GPT-4 performance level after only 20% of its training. 

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” noted employees in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.

With this financial backing, investors' expectations rise, as do technical hurdles that confound typical AI scaling approaches. If these early versions do not live up to expectations, OpenAI's future fundraising chances may not be as attractive. The report's limitations underscore a major difficulty for the entire AI industry: the decreasing availability of high-quality training data and the need to remain relevant in an increasingly competitive environment.

A June research (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. Developers have "largely squeezed as much out of" the data that has been utilised to enable the tremendous gains in AI that we have witnessed in recent years, according to The Information. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges. 

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

How OpenAI’s New AI Agents Are Shaping the Future of Coding

 


OpenAI is taking the challenge of bringing into existence the very first powerful AI agents designed specifically to revolutionise the future of software development. It became so advanced that it could interpret in plain language instructions and generate complex code, hoping to make it achievable to complete tasks that would take hours in only minutes. This is the biggest leap forward AI has had up to date, promising a future in which developers can have a more creative and less repetitive target while coding.

Transforming Software Development

These AI agents represent a major change in the type of programming that's created and implemented. Beyond typical coding assistants, which may use suggestions to complete lines, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. It is theoretically possible that developers could do their work more efficiently, automating repetitive coding and focusing more on innovation and problem solving on more complicated issues. The agents are, in effect, advanced assistants capable of doing more helpful things than the typical human assistant with anything from far more complex programming requirements.


Competition from OpenAI with Anthropic

As OpenAI makes its moves, it faces stiff competition from Anthropic-an AI company whose growth rate is rapidly taking over. Having developed the first released AI models focused on advancing coding, Anthropic continues to push OpenAI to even further refinement in their agents. This rivalry is more than a race between firms; it is infusing quick growth that works for the whole industry because both companies are setting new standards by working on AI-powered coding tools. As both compete, developers and users alike stand to benefit from the high-quality, innovative tools that will be implied from the given race.


Privacy and Security Issues

The AI agents also raise privacy issues. Concerns over the issue of data privacy and personal privacy arise if these agents can gain access to user devices. Secure integration of the agents will require utmost care because developers rely on the unassailability of their systems. Balancing AI's powerful benefits with needed security measures will be a key determinant of their success in adoption. Also, planning will be required for the integration of these agents into the current workflows without causing undue disruptions to the established standards and best practices in security coding.


Changing Market and Skills Environment

OpenAI and Anthropic are among the leaders in many of the changes that will remake both markets and skills in software engineering. As AI becomes more central to coding, this will change the industry and create new sorts of jobs as it requires the developer to adapt toward new tools and technologies. The extensive reliance on AI in code creation would also invite fresh investments in the tech sector and accelerate broadening the AI market.


The Future of AI in Coding

Rapidly evolving AI agents by OpenAI mark the opening of a new chapter for the intersection of AI and software development, promising to accelerate coding, making it faster, more efficient, and accessible to a wider audience of developers who will enjoy assisted coding towards self-writing complex instructions. The further development by OpenAI will most definitely continue to shape the future of this field, representing exciting opportunities and serious challenges capable of changing the face of software engineering in the foreseeable future.




UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

Microsoft Introduces AI Solution for Erasing Ex from Memories

 


It reveals the story of a woman who is emotionally disturbed and seeks the help of artificial intelligence as she tries to erase her past in director Vikramaditya Motwane's new Hindi film, CTRL. There is no doubt that the movie focuses on data and privacy, but humans are social animals and they need someone to listen to them, guide them, or be there as they go through life.  The CEO of Microsoft AI, Mustafa Suleyman, spoke about this recently in a CNBC interview. 

During an interview with CNN, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing." This will create a close relationship between AI and humans. As a result of the announcement of AI assistants for the workplace, many companies like Microsoft, OpenAI, and Google have come up with such solutions.  

It has been announced by Microsoft CEO Satya Nadella that Windows will be launching a new feature called Recall. A semantic search is more than just a keyword search; it digs deep into users' digital history to recreate moments from the past, tracking them back to the time they happened. It was announced today by Microsoft's AI CEO, Mustafa Suleyman, that Copilot, the company's artificial intelligence assistant, has been redesigned. 

Copilot, a newly revamped version of Microsoft's most popular AI companion, shares the same vision of a companion for AI that will revolutionize the way users interact with technology daily in their day-to-day lives with the AI head. After joining Microsoft earlier this year, after the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift." 

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time, similar to the Pi product. The announcement of AI assistants for the workplace has been made by a number of companies, including Microsoft, OpenAI, and Google.  The Wall Street Journal reported that Microsoft CEO Satya Nadella explained that "Recall is not just about documents." in an interview. 

A sophisticated AI model embedded directly inside the device begins to take screenshots of users' activity and then feeds the data collected into an on-board database that analyzes these activities. By using neural processing technology, all images and interactions can be made searchable, even going as far as searching images by themselves. There are some concerns regarding the events, with Elon Musk warning in a characteristic post that this is akin to an episode of Black Mirror. Going to turn this 'feature' off in the future." 

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

ChatGPT Vulnerability Exploited: Hacker Demonstrates Data Theft via ‘SpAIware

 

A recent cyber vulnerability in ChatGPT’s long-term memory feature was exposed, showing how hackers could use this AI tool to steal user data. Security researcher Johann Rehberger demonstrated this issue through a concept he named “SpAIware,” which exploited a weakness in ChatGPT’s macOS app, allowing it to act as spyware. ChatGPT initially only stored memory within an active conversation session, resetting once the chat ended. This limited the potential for hackers to exploit data, as the information wasn’t saved long-term. 

However, earlier this year, OpenAI introduced a new feature allowing ChatGPT to retain memory between different conversations. This update, meant to personalize the user experience, also created an unexpected opportunity for cybercriminals to manipulate the chatbot’s memory retention. Rehberger identified that through prompt injection, hackers could insert malicious commands into ChatGPT’s memory. This allowed the chatbot to continuously send a user’s conversation history to a remote server, even across different sessions. 

Once a hacker successfully inserted this prompt into ChatGPT’s long-term memory, the user’s data would be collected each time they interacted with the AI tool. This makes the attack particularly dangerous, as most users wouldn’t notice anything suspicious while their information is being stolen in the background. What makes this attack even more alarming is that the hacker doesn’t require direct access to a user’s device to initiate the injection. The payload could be embedded within a website or image, and all it would take is for the user to interact with this media and prompt ChatGPT to engage with it. 

For instance, if a user asked ChatGPT to scan a malicious website, the hidden command would be stored in ChatGPT’s memory, enabling the hacker to exfiltrate data whenever the AI was used in the future. Interestingly, this exploit appears to be limited to the macOS app, and it doesn’t work on ChatGPT’s web version. When Rehberger first reported his discovery, OpenAI dismissed the issue as a “safety” concern rather than a security threat. However, once he built a proof-of-concept demonstrating the vulnerability, OpenAI took action, issuing a partial fix. This update prevents ChatGPT from sending data to remote servers, which mitigates some of the risks. 

However, the bot still accepts prompts from untrusted sources, meaning hackers can still manipulate the AI’s long-term memory. The implications of this exploit are significant, especially for users who rely on ChatGPT for handling sensitive data or important business tasks. It’s crucial that users remain vigilant and cautious, as these prompt injections could lead to severe privacy breaches. For example, any saved conversations containing confidential information could be accessed by cybercriminals, potentially resulting in financial loss, identity theft, or data leaks. To protect against such vulnerabilities, users should regularly review ChatGPT’s memory settings, checking for any unfamiliar entries or prompts. 

As demonstrated in Rehberger’s video, users can manually delete suspicious entries, ensuring that the AI’s long-term memory doesn’t retain harmful data. Additionally, it’s essential to be cautious about the sources from which they ask ChatGPT to retrieve information, avoiding untrusted websites or files that could contain hidden commands. While OpenAI is expected to continue addressing these security issues, this incident serves as a reminder that even advanced AI tools like ChatGPT are not immune to cyber threats. As AI technology continues to evolve, so do the tactics used by hackers to exploit these systems. Staying informed, vigilant, and cautious while using AI tools is key to minimizing potential risks.

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 


Recent reports published by Deloitte and Deloitte Consulting, highlighting the significance of data privacy as it pertains to Generative Artificial Intelligence (GenAI), have been widely cited. As the survey found, there has been a significant increase in professionals' concerns about data privacy; only 22% ranked it as their top concern at the beginning of 2023, and the number will rise to 72% by the end of 2024. 

Technology is advancing at an exponential rate, and as a result, there is a growing awareness of its potential risks. There has been a surge in concerns over data privacy caused by generative AI across several industries, according to a new report by Deloitte. Only 22% of professionals ranked it as among their top three concerns last year, these numbers have risen to 72% this year, according to a recent study. 

There was also strong concern regarding data provenance and transparency among professionals, with 47% and 40% informing us that they considered them to be among their top three ethical GenAI concerns for this year, respectively. The proportion of respondents concerned about job displacement, however, was only 16%. It is becoming increasingly common for staff to be curious about how AI technology operates, especially when it comes to sensitive data. 

Almost half of security professionals surveyed by HackerOne in September believe AI is risky, with many of them believing leaks of training data threaten their networks' security. It is noteworthy that 78% of business leaders ranked "safe and secure" as one of their top three ethical technology principles. This represents a 37% increase from the year 2023, which shows the importance of security to businesses today.

As a result of Deloitte's 2024 "State of Ethics and Trust in Technology " report, the results of the survey were reported in a report which surveyed over 1,800 business and technical professionals worldwide, asking them to rate the ethical principles that they apply to technological processes and, specifically, to their use of GenAI. It is becoming increasingly important for technological leaders to carefully examine the talent needs of their organizations, as they assist in guiding the adoption of generative AI. There are also ethical considerations that should be included on this checklist as well. 

A Deloitte report highlights the effectiveness of GenAI in eliminating the "expertise barrier": more people will be able to make more use of their data more happily and cost-effectively, according to Sachin Kulkarni, managing director, of risk and brand protection, at Deloitte. There may be a benefit to this, though as a result there may be an increased risk of data leaks as a result of this action." 

Furthermore, there has been concern expressed about the effects of generative AI on transparency, data provenance, intellectual property ownership, and hallucinations among professionals. Even though job displacement is often listed as a top concern by respondents, only 16% of those asked are reporting job displacement to be true. As a result of the assessment of emerging technology categories, business and IT professionals have concluded that cognitive technologies, which include large language models, artificial intelligence, neural networks, and generative AI, among others, pose the greatest ethical challenges.  

This category had a significant achievement over other technology verticals, including virtual reality, autonomous vehicles, and robotics. However, respondents stated that they considered cognitive technologies to be the most likely to bring about social good in the future. Flexential's survey published earlier this month found that several executives, in light of the huge reliance on data, are concerned about how generative AI tools can increase cybersecurity risks by extending their attack surface as a result, according to the report. 

In Deloitte's annual report, however, the percentage of professionals reporting that they use GenAI internally grew by 20% between last year and this year, reflecting an increase in the use of GenAI by their employees over the previous year. 94% of the respondents said they had incorporated it into their organization's processes in some way or another. Nevertheless, most respondents indicated that these technologies are either still in the pilot phase or are limited in their usage, with only 12% saying that they are used extensively. 

Gartner research published last year also found that about 80% of GenAI projects fail to make it past proof-of-concept as a result of a lack of resources. Europe has been impacted by the recent EU Artificial Intelligence Act and 34% of European respondents have reported that their organizations have taken action over the past year to change their use of AI to adapt to the Act's requirements. 

According to the survey results, however, the impact of the Act is more widespread, with 26% of respondents from the South Asian region changing their lifestyles because of it, and 16% of those from the North and South American regions did the same. The survey also revealed that 20 per cent of respondents based in the U.S. had altered the way their organization was operating as a result of the executive order. According to the survey, 25% of South Asians, 21% of South Americans, and 12% of Europeans surveyed had the same perspective. 

The report explains that "Cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse," according to its authors. The accelerated adoption of GenAI technology is overtaking the capacity of organizations to effectively govern it at a rapid pace. GenAI tools can provide a great deal of help to businesses in a range of areas, from choosing which use cases to apply them to quality assurance, to implementing ethical standards. 

Companies should prioritize both of these areas." Despite artificial intelligence being widely used, policymakers want to make sure that they won't get themselves into trouble with its use, especially when it comes to legislation because any use of it can lead to a lot of problems. 34% of respondents reported that regulatory compliance was their most important reason for implementing ethics policies and guidelines to comply with regulations, while regulatory penalties topped the list of concerns about not complying with such policies and guidelines. 

A new piece of legislation in the EU, known as the Artificial Intelligence Act, entered into force on August 1. The Act, which takes effect today, is intended to ensure that artificial intelligence systems that are used in high-risk environments are safe, transparent, and ethical. If a company does not comply with the regulations, it could face financial penalties ranging from €35 million ($38 million), which is equivalent to 7% of global turnover, to €7.5 million ($8.1 million), which is equivalent to 1.5% of global turnover. 

Over a hundred companies have already signed the EU Artificial Intelligence Pact, with Amazon, Google, Microsoft, and OpenAI among them; they have also volunteered to begin implementing the requirements of the bill before any deadlines established by law. Both of these actions demonstrate that they are committed to the responsible implementation of artificial intelligence in society, and also help them to avoid future legal challenges in the future. 

The United States released a similar executive order in October 2023 with broad guidelines regarding the protection and enhancement of military, civil, and personal privacy as well as protecting the security of government agencies while fostering AI innovation and competition across the entire country. Even though this is not a law, many companies operating in the U.S. have made policy changes to ensure compliance with regulatory changes and comply with public expectations regarding the privacy and security of AI.

Project Strawberry: Advancing AI with Q-learning, A* Algorithms, and Dual-Process Theory

Project Strawberry, initially known as Q*, has quickly become a focal point of excitement and discussion within the AI community. The project aims to revolutionize artificial intelligence by enhancing its self-learning and reasoning capabilities, crucial steps toward achieving Artificial General Intelligence (AGI). By incorporating advanced algorithms and theories, Project Strawberry pushes the boundaries of what AI can accomplish, making it a topic of intense interest and speculation. 

At the core of Project Strawberry are several foundational algorithms that enable AI systems to learn and make decisions more effectively. The project utilizes Q-learning, a reinforcement learning technique that allows AI to determine optimal actions through trial and error, helping it navigate complex environments. Alongside this, the A* search algorithm provides efficient pathfinding capabilities, ensuring AI can find the best solutions to problems quickly and accurately. 

Additionally, the dual-process theory, inspired by human cognitive processes, is used to balance quick, intuitive judgments with thorough, deliberate analysis, enhancing decision-making abilities. Despite the project’s promising advancements, it also raises several concerns. One of the most significant risks involves encryption cracking, where advanced AI could potentially break encryption codes, posing a severe security threat. 

Furthermore, the issue of “AI hallucinations”—errors in AI outputs—remains a critical challenge that needs to be addressed to ensure accurate and trustworthy AI responses. Another concern is the high computational demands of Project Strawberry, which may lead to increased costs and energy consumption. Efficient resource management and optimization will be crucial to maintaining the project’s scalability and sustainability. The ultimate goal of Project Strawberry is to pave the way for AGI, where AI systems can perform any intellectual task a human can. 

Achieving AGI would revolutionize problem-solving across various fields, enabling AI to tackle long-term and complex challenges with advanced reasoning capabilities. OpenAI envisions developing “reasoners” that exhibit human-like intelligence, pushing the frontiers of AI research even further. While Project Strawberry represents a significant step forward in AI development, it also presents complex challenges that must be carefully navigated. 

The project’s potential has fueled widespread excitement and anticipation within the AI community, with many eagerly awaiting further updates and breakthroughs. As OpenAI continues to refine and develop Project Strawberry, it could set the stage for a new era in AI, bringing both remarkable possibilities and significant responsibilities.

OpenAI's Tool Can Detect Chat-GPT Written Texts

OpenAI's Tool Can Detect Chat-GPT Written Texts

OpenAI to Release AI Detection Tool

OpenAI has been at the forefront of the evolving AI landscape, doing wonders with its machine learning and natural language processing capabilities. One of its best creations, ChatGPT, is known for creating human-like text. But as they say, with great power comes great responsibility. OpenAI is aware of the potential misuse and has built a tool that can catch students who use ChatGPT to cheat on their assignments. According to experts, however, there has yet to be a final release date.

As per OpenAI’s spokesperson, the company is in the research phase of the watermarking method explained in the Journal’s story, stressing that it doesn’t want to take any chances. 

Is there a need for AI detection tools?

The abundance of AI-generated information has raised questions about originality and authority. AI chatbots like ChatGPT have become advanced. Today, it is a challenge to differentiate between human and AI-generated texts, such as the resemblance. This can impact various sectors like education, cybersecurity, and journalism. It can be helpful in instances where we can detect AI-generated texts and check academic honesty, address misinformation, and improve digital communications security.

About the Tech

OpenAI uses a watermarking technique to detect AI-generated texts, by altering the way ChatGPT uses words, attaching an invisible watermark with the AI-generated information. The watermark can be detected later, letting users know if the texts are Chat-GPT written. The tech is said to be advanced, making it difficult for the cheaters to escape the detection process.

Ethical Concerns and Downsides

OpenAI proceeds with caution, even with the potential benefits. The main reason is potential misuse if the tool gets into the wrong hands, bad actors can use it to target users based on the content they write. Another problem is the tool’s ability to be effective for different dialects and languages. OpenAI has accepted that non-English speakers can be impacted differently, because the watermarking nuances may not be seamlessly translated across languages.

Another problem can be the chances of false positives. If the AI detection tool mistakes in detecting human-written text as AI-generated, it can cause unwanted consequences for the involved users.

OpenAI Hack Exposes Hidden Risks in AI's Data Goldmine


A recent security incident at OpenAI serves as a reminder that AI companies have become prime targets for hackers. Although the breach, which came to light following comments by former OpenAI employee Leopold Aschenbrenner, appears to have been limited to an employee discussion forum, it underlines the steep value of data these companies hold and the growing threats they face.

The New York Times detailed the hack after Aschenbrenner labelled it a “major security incident” on a podcast. However, anonymous sources within OpenAI clarified that the breach did not extend beyond an employee forum. While this might seem minor compared to a full-scale data leak, even superficial breaches should not be dismissed lightly. Unverified access to internal discussions can provide valuable insights and potentially lead to more severe vulnerabilities being exploited.

AI companies like OpenAI are custodians of incredibly valuable data. This includes high-quality training data, bulk user interactions, and customer-specific information. These datasets are crucial for developing advanced models and maintaining competitive edges in the AI ecosystem.

Training data is the cornerstone of AI model development. Companies like OpenAI invest vast amounts of resources to curate and refine these datasets. Contrary to the belief that these are just massive collections of web-scraped data, significant human effort is involved in making this data suitable for training advanced models. The quality of these datasets can impact the performance of AI models, making them highly coveted by competitors and adversaries.

OpenAI has amassed billions of user interactions through its ChatGPT platform. This data provides deep insights into user behaviour and preferences, much more detailed than traditional search engine data. For instance, a conversation about purchasing an air conditioner can reveal preferences, budget considerations, and brand biases, offering invaluable information to marketers and analysts. This treasure trove of data highlights the potential for AI companies to become targets for those seeking to exploit this information for commercial or malicious purposes.

Many organisations use AI tools for various applications, often integrating them with their internal databases. This can range from simple tasks like searching old budget sheets to more sensitive applications involving proprietary software code. The AI providers thus have access to critical business information, making them attractive targets for cyberattacks. Ensuring the security of this data is paramount, but the evolving nature of AI technology means that standard practices are still being established and refined.

AI companies, like other SaaS providers, are capable of implementing robust security measures to protect their data. However, the inherent value of the data they hold means they are under constant threat from hackers. The recent breach at OpenAI, despite being limited, should serve as a warning to all businesses interacting with AI firms. Security in the AI industry is a continuous, evolving challenge, compounded by the very AI technologies these companies develop, which can be used both for defence and attack.

The OpenAI breach, although seemingly minor, highlights the critical need for heightened security in the AI industry. As AI companies continue to amass and utilise vast amounts of valuable data, they will inevitably become more attractive targets for cyberattacks. Businesses must remain vigilant and ensure robust security practices when dealing with AI providers, recognising the gravity of the risks and responsibilities involved.


Hacker Breaches OpenAI, Steals Sensitive AI Tech Details


 

Earlier this year, a hacker successfully breached OpenAI's internal messaging systems, obtaining sensitive details about the company's AI technologies. The incident, initially kept under wraps by OpenAI, was not reported to authorities as it was not considered a threat to national security. The breach was revealed through sources cited by The New York Times, which highlighted that the hacker accessed discussions in an online forum used by OpenAI employees to discuss their latest technologies.

The breach was disclosed to OpenAI employees during an April 2023 meeting at their San Francisco office, and the board of directors was also informed. According to sources, the hacker did not penetrate the systems where OpenAI develops and stores its artificial intelligence. Consequently, OpenAI executives decided against making the breach public, as no customer or partner information was compromised.

Despite the decision to withhold the information from the public and authorities, the breach sparked concerns among some employees about the potential risks posed by foreign adversaries, particularly China, gaining access to AI technology that could threaten U.S. national security. The incident also brought to light internal disagreements over OpenAI's security measures and the broader implications of their AI technology.

In the aftermath of the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the company's board of directors. In his memo, Aschenbrenner criticised OpenAI's security measures, arguing that the company was not doing enough to protect its secrets from foreign adversaries. He emphasised the need for stronger security to prevent the theft of crucial AI technologies.

Aschenbrenner later claimed that he was dismissed from OpenAI in the spring for leaking information outside the company, which he argued was a politically motivated decision. He hinted at the breach during a recent podcast, but the specific details had not been previously reported.

In response to Aschenbrenner's allegations, OpenAI spokeswoman Liz Bourgeois acknowledged his contributions and concerns but refuted his claims regarding the company's security practices. Bourgeois stated that OpenAI addressed the incident and shared the details with the board before Aschenbrenner joined the company. She emphasised that Aschenbrenner's separation from the company was unrelated to the concerns he raised about security.

While the company deemed the incident not to be a national security threat, the internal debate it sparked highlights the ongoing challenges in safeguarding advanced technological developments from potential threats.


Breaking the Silence: The OpenAI Security Breach Unveiled

Breaking the Silence: The OpenAI Security Breach Unveiled

In April 2023, OpenAI, a leading artificial intelligence research organization, faced a significant security breach. A hacker gained unauthorized access to the company’s internal messaging system, raising concerns about data security, transparency, and the protection of intellectual property. 

In this blog, we delve into the incident, its implications, and the steps taken by OpenAI to prevent such breaches in the future.

The OpenAI Breach

The breach targeted an online forum where OpenAI employees discussed upcoming technologies, including features for the popular chatbot. While the actual GPT code and user data remained secure, the hacker obtained sensitive information related to AI designs and research. 

While Open AI shared the information with its staff and board members last year, it did not tell the public or the FBI about the breach, stating that doing so was unnecessary because no user data was stolen. 

OpenAI does not regard the attack as a national security issue and believes the attacker was a single individual with no links to foreign powers. OpenAI’s decision not to disclose the breach publicly sparked debate within the tech community.

Breach Impact

Leopold Aschenbrenner, a former OpenAI employee, had expressed worries about the company's security infrastructure and warned that its systems could be accessible to hostile intelligence services such as China. The company abruptly fired Aschenbrenner, although OpenAI spokesperson Liz Bourgeois told the New York Times that his dismissal had nothing to do with the document.

Similar Attacks and Open AI’s Response

This is not the first time OpenAI has had a security lapse. Since its launch in November 2022, ChatGPT has been continuously attacked by malicious actors, frequently resulting in data leaks. A separate attack exposed user names and passwords in February of this year. 

In March of last year, OpenAI had to take ChatGPT completely down to fix a fault that exposed customers' payment information to other active users, including their first and last names, email IDs, payment addresses, credit card info, and the last four digits of their card number. 

Last December, security experts found that they could convince ChatGPT to release pieces of its training data by prompting the system to endlessly repeat the word "poem."

OpenAI has taken steps to enhance security since then, including additional safety measures and a Safety and Security Committee.

Researchers Find ChatGPT’s Latest Bot Behaves Like Humans

 

A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT's popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

This study found that the most recent version of the chatbot, version 4, was indistinguishable from its human counterparts. When the bot picked less common human behaviours, it behaved more cooperatively and altruistic.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” stated Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research. 

In the study, the research team presented a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviours. The games included pre-determined exercises in which players had to select whether to inform on a partner in crime or how to share money with changing incentives. The bots' responses were compared to those of over 100,000 people from 50 nations. 

The study is one of the first in which an artificial intelligence source has passed a rigorous Turing test. A Turing test, named after British computing pioneer Alan Turing, can consist of any job assigned to a machine to determine whether it performs like a person. If the machine seems to be human, it passes the test. 

Chatbot personality quirks

The researchers assessed the bots' personality qualities using the OCEAN Big-5, a popular personality exam that evaluates respondents on five fundamental characteristics that influence behaviour. In the study, ChatGPT's version 4 performed within normal ranges for the five qualities but was only as agreeable as the lowest third of human respondents. The bot passed the Turing test, but it wouldn't have made many friends. 

Version 4 outperformed version 3 in terms of chip and motherboard performance. The previous version, with which many internet users may have interacted for free, was only as appealing to the bottom fifth of human responders. Version 3 was likewise less open to new ideas and experiences than all but a handful of the most stubborn people. 

Human-AI interactions 

Much of the public's concern about AI stems from their failure to understand how bots make decisions. It can be difficult to trust a bot's advice if you don't know what it's designed to accomplish. Jackson's research shows that even when researchers cannot scrutinise AI's inputs and algorithms, they can discover potential biases by meticulously examining outcomes. 

As a behavioural economist who has made significant contributions to our knowledge of how human social structures and interactions influence economic decision-making, Jackson is concerned about how human behaviour may evolve in response to AI.

“It’s important for us to understand how interactions with AI are going to change our behaviors and how that will change our welfare and our society,” Jackson concluded. “The more we understand early on—the more we can understand where to expect great things from AI and where to expect bad things—the better we can do to steer things in a better direction.”

OpenAI and Stack Overflow Partnership: A Controversial Collaboration

OpenAI and Stack Overflow Partnership: A Controversial Collaboration

The Partnership Details

OpenAI and Stack Overflow are collaborating through OverflowAPI access to provide OpenAI users and customers with the correct and validated data foundation that AI technologies require to swiftly solve an issue, allowing engineers to focus on critical tasks. 

OpenAI will additionally share validated technical knowledge from Stack Overflow directly in ChatGPT, allowing users to quickly access trustworthy, credited, correct, and highly technical expertise and code backed by millions of developers who have contributed to the Stack Overflow platform over the last 15 years.

User Protests and Concerns

However, several Stack Overflow users were concerned about this partnership since they felt it was unethical for OpenAI to profit from their content without authorization.

Following the news, some users wished to erase their responses, including those with the most votes. However, StackCommerce does not often enable the deletion of posts if the question has any answers.

Ben, Epic Games' user interface designer, stated that he attempted to change his highest-rated responses and replace them with a message criticizing the relationship with OpenAI.

Stack Overflow won't let you erase questions with acceptable answers and high upvotes because this would remove knowledge from the community. Ben posted on Mastodon.

Instead, he changed his top-rated responses to a protest message. Within an hour, the moderators had changed the questions and banned Ben's account for seven days.

Ben, however, uploaded a screenshot showing Stack Overflow suspending his account after rolling back the modified messages to the original response. 

Stack Overflow’s Stance

Moderators on Stack Overflow clarified in an email that Ben shared that users are not able to remove posts because they negatively impact the community as a whole.

It is not appropriate to remove posts that could be helpful to others unless there are particular circumstances. The basic principle of Stack Exchange is that knowledge is helpful to others who might encounter similar issues in the future, even if the post's original author can no longer use it, replied Stack Exchange moderators to users on mail.

GDPR Considerations:

Article 17 of the GDPR rules grants users in the EU the “right to be forgotten,” allowing them to request the removal of personal data.

However, Article 17(3) states that websites have the right not to delete data necessary for “exercising the right of freedom of expression and information.”

Stack Overflow cited this provision when explaining why they do not allow users to remove posts

The partnership between OpenAI and Stack Overflow has sparked controversy, with users expressing concerns about data usage and freedom of expression. Stack Overflow’s decision to suspend users who altered their answers in protest highlights the challenges of balancing privacy rights and community knowledge

OpenAI Bolsters Data Security with Multi-Factor Authentication for ChatGPT

 

OpenAI has recently rolled out a new security feature aimed at addressing one of the primary concerns surrounding the use of generative AI models such as ChatGPT: data security. In light of the growing importance of safeguarding sensitive information, OpenAI's latest update introduces an additional layer of protection for ChatGPT and API accounts.

The announcement, made through an official post by OpenAI, introduces users to the option of enabling multi-factor authentication (MFA), commonly referred to as 2FA. This feature is designed to fortify security measures and thwart unauthorized access attempts.

For those unfamiliar with multi-factor authentication, it's essentially a security protocol that requires users to provide two or more forms of verification before gaining access to their accounts. By incorporating this additional step into the authentication process, OpenAI aims to bolster the security posture of its platforms. Users are guided through the process via a user-friendly video tutorial, which demonstrates the steps in a clear and concise manner.

To initiate the setup process, users simply need to navigate to their profile settings by clicking on their name, typically located in the bottom left-hand corner of the screen. From there, it's just a matter of selecting the "Settings" option and toggling on the "Multi-factor authentication" feature.

Upon activation, users may be prompted to re-authenticate their account to confirm the changes or redirected to a dedicated page titled "Secure your Account." Here, they'll find step-by-step instructions on how to proceed with setting up multi-factor authentication.

The next step involves utilizing a smartphone to scan a QR code using a preferred authenticator app, such as Google Authenticator or Microsoft Authenticator. Once the QR code is scanned, users will receive a one-time code that they'll need to input into the designated text box to complete the setup process.

It's worth noting that multi-factor authentication adds an extra layer of security without introducing unnecessary complexity. In fact, many experts argue that it's a highly effective deterrent against unauthorized access attempts. As ZDNet's Ed Bott aptly puts it, "Two-factor authentication will stop most casual attacks dead in their tracks."

Given the simplicity and effectiveness of multi-factor authentication, there's little reason to hesitate in enabling this feature. Moreover, when it comes to safeguarding sensitive data, a proactive approach is always preferable. 

Generative AI Worms: Threat of the Future?

Generative AI worms

The generative AI systems of the present are becoming more advanced due to the rise in their use, such as Google's Gemini and OpenAI's ChatGPT. Tech firms and startups are making AI bits and ecosystems that can do mundane tasks on your behalf, think about blocking a calendar or product shopping. But giving more freedom to these things tools comes at the cost of risking security. 

Generative AI worms: Threat in the future

In the latest study, researchers have made the first "generative AI worms" that can spread from one device to another, deploying malware or stealing data in the process.  

Nassi, in collaboration with fellow academics Stav Cohen and Ron Bitton, developed the worm, which they named Morris II in homage to the 1988 internet debacle caused by the first Morris computer worm. The researchers demonstrate how the AI worm may attack a generative AI email helper to steal email data and send spam messages, circumventing several security measures in ChatGPT and Gemini in the process, in a research paper and website.

Generative AI worms in the lab

The study, conducted in test environments rather than on a publicly accessible email assistant, coincides with the growing multimodal nature of large language models (LLMs), which can produce images and videos in addition to text.

Prompts are language instructions that direct the tools to answer a question or produce an image. This is how most generative AI systems operate. These prompts, nevertheless, can also be used as a weapon against the system. 

Prompt injection attacks can provide a chatbot with secret instructions, while jailbreaks can cause a system to ignore its security measures and spew offensive or harmful content. For instance, a hacker might conceal text on a website instructing an LLM to pose as a con artist and request your bank account information.

The researchers used a so-called "adversarial self-replicating prompt" to develop the generative AI worm. According to the researchers, this prompt causes the generative AI model to output a different prompt in response. 

The email system to spread worms

The researchers connected ChatGPT, Gemini, and open-source LLM, LLaVA, to develop an email system that could send and receive messages using generative AI to demonstrate how the worm may function. They then discovered two ways to make use of the system: one was to use a self-replicating prompt that was text-based, and the other was to embed the question within an image file.

A video showcasing the findings shows the email system repeatedly forwarding a message. Also, according to the experts, data extraction from emails is possible. According to Nassi, "It can be names, phone numbers, credit card numbers, SSNs, or anything else that is deemed confidential."

Generative AI worms to be a major threat soon

Nassi and the other researchers report that they expect to see generative AI worms in the wild within the next two to three years in a publication that summarizes their findings. According to the research paper, "many companies in the industry are massively developing GenAI ecosystems that integrate GenAI capabilities into their cars, smartphones, and operating systems."