The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering malware known as "JarkaStealer" to their victims.
Exploiting Developers’ Interest in AI
Free and freemium generative AI platforms are gaining popularity, but most of their advanced features cost money. That has led some developers to look for free alternatives, often without checking the source they were downloading from. Cybercrime follows trends, and the current trend is malicious code inserted into open-source software packages that, at least initially, appear legitimate.
As George Apostolopoulos, a founding engineer at Endor Labs, describes it, attackers target less cautious developers who are lured by free access to popular AI tools. "Many people don't know better and fall for these offers," he says.
The Harmful Python Packages
Two malicious Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index (PyPI), the official repository for open-source Python projects. Both came from the user "Xeroline" and promised API integrations with OpenAI's GPT-4 Turbo model and Anthropic's Claude chatbot.
While the packages seemed to work by connecting users to a demo version of ChatGPT, their true functionality was far more sinister: the code also dropped a Java archive (JAR) file that delivered the JarkaStealer malware onto unsuspecting victims' systems.
What Is JarkaStealer?
JarkaStealer is an infostealer that extracts sensitive information from infected systems. Sold on the dark web for as little as $20, with more elaborate features available for a few dollars more, it is designed to steal browser data and session tokens along with credentials for apps like Telegram, Discord, and Steam. It can also take screenshots of the victim's system, often capturing sensitive information.
Though the malware's effectiveness is highly uncertain, its low price and ready availability make it an attractive tool for many attackers. Its source code is also freely accessible on platforms like GitHub, giving it even wider reach.
Lessons for Developers
This incident highlights the risks of downloading unverified open-source packages, especially when working with emerging technologies such as AI. Development teams should vet all software sources rather than taking shortcuts in search of free premium tools. Such precautions can save individuals and organizations from becoming victims of these attacks.
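As a starting point, even a quick look at a package's publishing history can surface red flags before installation. Below is a minimal sketch using PyPI's public JSON API; the heuristics (release count, missing homepage) are illustrative assumptions, not a definitive vetting process.

```python
# A minimal pre-install vetting sketch using PyPI's public JSON API.
# The red-flag heuristics below are assumptions for demonstration only.
import json
import urllib.request

def vet_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    info = data["info"]
    releases = data["releases"]

    print(f"Package:  {info['name']} {info['version']}")
    print(f"Author:   {info.get('author') or 'unknown'}")
    print(f"Homepage: {info.get('home_page') or 'none listed'}")
    print(f"Releases: {len(releases)}")

    # Red flags worth a manual look: a thin release history, no homepage or
    # repository link, and a name that shadows a popular brand.
    if len(releases) <= 2 or not info.get("home_page"):
        print("WARNING: thin publishing history -- inspect the source first.")

vet_package("requests")  # substitute the package you are about to install
```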
By exercising caution and following best practices, developers can protect themselves from malicious actors looking to take advantage of the GenAI boom.
This supply chain campaign illustrates how cyber threats against developers are evolving, and underscores the urgent need for caution in open-source activities.
Experts have found two malicious packages uploaded to the Python Package Index (PyPI) repository impersonating popular artificial intelligence (AI) models like OpenAI's ChatGPT and Anthropic's Claude to distribute an information stealer known as JarkaStealer.
Called gptplus and claudeai-eng, the packages were uploaded by a user called "Xeroline" last year, drawing 1,748 and 1,826 downloads, respectively. Both libraries have since been removed from PyPI. According to Kaspersky, the malicious packages were uploaded by a single author and differed only in name and description.
The packages purported to offer access to the GPT-4 Turbo and Claude AI APIs but contained malicious code that triggered the installation of malware upon package installation.
Specifically, the "__init__.py" file in these packages contained Base64-encoded data with code to download a Java archive file ("JavaUpdater.jar") from a GitHub repository, fetching the Java Runtime Environment (JRE) from a Dropbox URL if Java is not already present on the host, and then running the JAR file.
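To make the hiding technique concrete, here is a small, hypothetical heuristic scan for the pattern Kaspersky describes: long Base64 blobs in a package's "__init__.py" that decode to downloader-style code. The regex, thresholds, and path are assumptions for illustration, not a production detector.

```python
# Illustrative heuristic: flag long Base64-like runs in __init__.py that
# decode to text referencing URLs or process execution. Thresholds and the
# sample path are assumptions, not a real security tool.
import base64
import re
from pathlib import Path

B64_RUN = re.compile(r"[A-Za-z0-9+/=]{120,}")  # suspiciously long Base64-like run

def scan_init(package_dir: str) -> None:
    init_file = Path(package_dir) / "__init__.py"
    text = init_file.read_text(errors="ignore")
    for blob in B64_RUN.findall(text):
        try:
            decoded = base64.b64decode(blob).decode("utf-8", errors="ignore")
        except Exception:
            continue  # not valid Base64 after all
        # Decoded downloader code tends to reference URLs or exec/subprocess.
        if any(marker in decoded for marker in ("http", "exec", "subprocess")):
            print(f"Suspicious blob in {init_file}:\n{decoded[:200]}...")

scan_init("site-packages/suspect_package")  # hypothetical path
```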
The JAR file contained JarkaStealer, an information stealer that can harvest a variety of sensitive data, including web browser data, system data, session tokens, and screenshots, from a wide range of applications such as Steam, Telegram, and Discord.
In the final step, the stolen data is archived, sent to the attacker's server, and then removed from the target's machine. JarkaStealer is offered under a malware-as-a-service (MaaS) model through a Telegram channel for between $20 and $50, although its source code has also been leaked on GitHub.
ClickPy stats suggest the packages were downloaded more than 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of a year-long supply chain campaign.
Artificial intelligence is entering a groundbreaking phase that could drastically change the way we work. For years, AI has been used for prediction and content creation, but the spotlight has now shifted to the most advanced stage yet: agentic AI. These intelligent systems are not merely human tools; they can act, decide, and bring order to complex tasks on their own. This third wave of AI could take workplaces by storm, so it is important to understand what is emerging.
A Quick History of AI Evolution
To grasp the significance of agentic AI, let’s revisit AI’s journey. The first wave, predictive AI, helped businesses forecast trends and make data-based decisions. Then came generative AI, which allowed machines to create content and have human-like conversations. Now, we’re in the third wave: agentic AI. Unlike its predecessors, this AI can perform tasks on its own, interact with other AI systems, and even collaborate without constant human supervision.
What Makes Agentic AI Special
Imagine agentic AI as an upgrade to the norm. Traditional AI systems follow prompts: they respond to questions or generate text. Agentic AI, however, takes initiative. Agents can handle a whole task, say solving problems for customers or organising schedules, within set rules. They can even collaborate with other AI agents to deliver results much more efficiently. In customer service, for instance, an agentic AI can answer questions, process returns, and help users without a human stepping in.
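As a rough sketch of the pattern, the loop below shows an agent choosing its own next action toward a goal while staying inside set rules. Every action name and the decision function are invented for illustration; a real system would put a model call where pick_action stands.

```python
# A minimal, hypothetical agent loop: the agent picks actions toward a goal
# within set rules, rather than answering a single prompt. All names here
# are invented for demonstration.

ALLOWED_ACTIONS = {"look_up_order", "issue_refund", "escalate_to_human"}

def pick_action(goal: str, history: list[str]) -> str:
    # Stand-in for a model call that chooses the next step toward the goal.
    if not history:
        return "look_up_order"
    if history[-1].startswith("look_up_order"):
        return "issue_refund"
    return "escalate_to_human"

def run_agent(goal: str, max_steps: int = 5) -> None:
    history: list[str] = []
    for _ in range(max_steps):
        action = pick_action(goal, history)
        if action not in ALLOWED_ACTIONS:      # the "set rules" guardrail
            raise RuntimeError(f"action {action!r} is out of policy")
        if action == "escalate_to_human":      # critical steps stay with humans
            print("Escalating to a human for approval.")
            return
        result = f"{action}: done"             # stand-in for a real tool call
        history.append(result)
        print(result)

run_agent("process a return for order #1234")
```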
How Will Workplaces Change?
Agentic AI introduces a new way of working. Imagine an office where AI agents manage distinct tasks, like analysing data or communicating with clients, while humans supervise. Such a change is already generating new jobs, like AI trainers and coordinators who coach these systems to improve their performance. Some roles may become fully automated, while others will be transformed into collaborations in which humans and AI deliver results together.
Real-Life Applications
Agentic AI is already proving useful in many areas. It can, for example, help compile a patient summary in healthcare or resolve claims in finance. Imagine an assistant AI negotiating with a company's AI for the best car rental deal, or participating in meetings alongside colleagues, suggesting insights and ideas based on what it knows. The possibilities are endless, and humans working with their AI counterparts could redefine efficiency.
Challenges and Responsibilities
With great power comes great responsibility. If an AI agent makes the wrong decision, the results could be dire. Companies are therefore setting substantial bounds on what these systems can and cannot do. Critical decisions will be approved by a human to ensure safety and trust. Furthermore, transparency will be essential: people must know when they are interacting with an AI rather than a human.
Adapting to the Future
With the rise of agentic AI, the question is not just one of new technology but of how work itself will change. Professionals will need to acquire new competencies, such as managing and cooperating with AI agents, while organisations will need to redesign workflows to include these intelligent systems. This shift promises to benefit early adopters more than laggards.
Agentic AI represents more than just a technological breakthrough; it is an opportunity to make workplaces smarter, more innovative, and more efficient. Are we ready for this future? Only time will tell.
A prominent DNA-testing company, Atlas Biomed, appears to have ceased operations without informing customers about the fate of their sensitive genetic data. The London-based firm previously offered insights into genetic profiles and predispositions to illnesses, but users can no longer access their online reports. Efforts by the BBC to contact the company have gone unanswered.
Signal, the encrypted messaging service, has introduced new features that make it easier to join group calls through personalised links. A recent blog post announced the update, which aims to simplify how group calls are set up and administered on the service.
Group Calls via Custom Link Easily Accessible
Previously, a group call on Signal required first creating a group chat. Signal now automatically creates a direct link for a group call that can be shared, so users no longer have to go through the group chat setup just to make a call. To create a call link, open the app's calls tab and tap to start a new call link. Links can be given a user-friendly name, and hosts can require approval of new invitees before they join, adding another layer of control.
Call links are also reusable, which is handy for recurring meetings such as weekly team calls. Signal group calls now support up to 50 participants, extending their usefulness to larger groups.
More Call Control
This update also introduces better management tools for group calls. Hosts can remove participants when needed and even block them from rejoining. That gives hosts more power over who has access to the call, improving safety and participant management.
New Interactive Features for Group Calls
Besides call links, Signal has added interactive tools for use during group calls. A "raise hand" button lets participants indicate that they want to speak, helping keep group discussions organised, and emoji reactions let users respond during a call without interrupting the speaker.
Signal has also improved the call control interface, making it easier to mute or unmute the microphone and turn the camera on or off, for a smoother, more efficient experience.
Rollout Across Multiple Platforms
The new features are rolling out gradually across Signal's desktop, iOS, and Android versions. The updated app is available free of charge on the App Store for iPhone and iPad users. To get the new group calling functions, users should update to the latest version of Signal.
With these additions, Signal has made group calling easier, more organised, and more intuitive, giving users greater control over calls for both personal and professional use.
Every day, Microsoft's customers endure more than 600 million cyberattacks, targeting individuals, corporations, and critical infrastructure. The rise in cyber threats is driven by the convergence of cybercriminal and nation-state activities, further accelerated by advancements in technologies such as artificial intelligence.
Monitoring over 78 trillion signals daily, Microsoft tracks activity from nearly 1,500 threat actor groups, including 600 nation-state groups. The company's annual Digital Defense Report reveals an expanding threat landscape dominated by multifaceted attack types like phishing, ransomware, DDoS attacks, and identity-based intrusions.
Despite the widespread adoption of multifactor authentication (MFA), password-based attacks remain a dominant threat, making up more than 99% of all identity-related cyber incidents. Attackers use methods like password spraying, breach replays, and brute-force attacks to exploit weak or reused passwords. Microsoft blocks an average of 7,000 password attacks per second, but the rise of adversary-in-the-middle (AiTM) phishing attacks, which bypass MFA, is a growing concern.
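Password spraying is distinctive because one source tries a few common passwords across many accounts, rather than hammering a single account. The toy sketch below shows that signature in log data; the log format and threshold are assumptions for illustration, not Microsoft's detection logic.

```python
# Toy password-spray detection: flag a source IP whose failed logins span
# many distinct accounts. Sample data and threshold are hypothetical.
from collections import defaultdict

failed_logins = [
    # (source_ip, username) pairs from an auth log -- hypothetical sample
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),
]

accounts_per_ip = defaultdict(set)
for ip, user in failed_logins:
    accounts_per_ip[ip].add(user)

SPRAY_THRESHOLD = 3  # assumed: distinct accounts from one IP in the window
for ip, users in accounts_per_ip.items():
    if len(users) >= SPRAY_THRESHOLD:
        print(f"Possible password spray from {ip}: {len(users)} accounts targeted")
```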
One of the most alarming trends is the blurring of lines between nation-state actors and cybercriminals. Nation-state groups are increasingly enlisting cybercriminals to fund operations, carry out espionage, and attack critical infrastructure. This collusion has contributed to a surge in cyberattacks, with global cybercrime costs projected to reach $10.5 trillion annually by 2025.
Microsoft's unique vantage point, serving billions of customers globally, allows it to aggregate security data from a broad spectrum of companies, organizations, and consumers. The company has dedicated the equivalent of 34,000 full-time engineers to security initiatives, focusing on enhancing defenses and developing phishing-resistant MFA. Additionally, Microsoft collaborates with 15,000 partners with specialized security expertise to strengthen the security ecosystem.
For retro gaming fans, playing classic video games from decades past is a dream, but it’s tough to do legally. This is where game emulation comes in — a way to recreate old consoles in software, letting people play vintage games on modern devices. Despite opposition from big game companies, emulation developers put in years of work to make these games playable.
Malvertising refers to the use of online advertisements to spread malware. Unlike traditional phishing attacks, which typically rely on deceiving the user into clicking a malicious link, malvertising can compromise users simply by visiting a site where a malicious ad is displayed. This can lead to a range of cyber threats, including ransomware, data breaches, and financial theft.
The prevalence of malvertising is alarming. Cybercriminals leverage the vast reach of digital ads to target large numbers of victims, often without their knowledge. According to the NCSC, the complexity of the advertising ecosystem, which involves multiple intermediaries, exacerbates the issue, making it challenging to identify and block malicious ads before they reach the end user.
To combat malvertising, the NCSC recommends adopting a defense-in-depth approach. Best practices that organizations can implement include vetting ad networks and partners, deploying ad verification and scanning tools, keeping browsers and plugins patched, and filtering known-malicious domains at the network edge.
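To make one of those layers concrete, here is a toy sketch of domain filtering for ad requests. The denylist and helper function are placeholders standing in for a real threat-intelligence feed; they are not part of the NCSC guidance itself.

```python
# Toy defence-in-depth layer: check ad request URLs against a denylist of
# known-malicious domains. The domains below are hypothetical placeholders.
from urllib.parse import urlparse

MALICIOUS_DOMAINS = {"bad-ads.example", "malvert.example"}  # stand-in feed

def is_allowed(ad_url: str) -> bool:
    host = urlparse(ad_url).hostname or ""
    # Block the listed domain itself and any subdomain of it.
    return not any(host == d or host.endswith("." + d) for d in MALICIOUS_DOMAINS)

for url in ("https://cdn.bad-ads.example/banner.js",
            "https://ads.legit.example/banner.js"):
    print(url, "->", "allow" if is_allowed(url) else "block")
```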
Several organizations have successfully implemented these best practices and seen significant reductions in malvertising incidents. For example, a major online retailer partnered with a top-tier ad network and implemented comprehensive ad verification tools. As a result, they were able to block over 90% of malicious ads before they reached their customers.
OpenAI is taking on the challenge of building powerful AI agents designed to revolutionise the future of software development. These agents have become advanced enough to interpret plain-language instructions and generate complex code, with the aim of completing in minutes tasks that would otherwise take hours. It is one of the biggest leaps forward AI has made to date, promising a future in which developers can spend more time on creative work and less on repetitive coding.
Transforming Software Development
These AI agents represent a major change in how software is created and implemented. Unlike typical coding assistants, which suggest completions for individual lines, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. In theory, developers could work far more efficiently, automating repetitive coding and focusing their attention on innovation and harder problems. The agents act, in effect, as advanced assistants capable of taking on far more complex programming requirements than conventional tools.
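For context, the prompt-driven code generation these agents build on looks roughly like the sketch below, using the OpenAI Python client. The model name and prompt are assumptions for illustration; the autonomous agents described in this article are not a published API.

```python
# A sketch of prompt-driven code generation with the OpenAI Python client.
# Model name and prompt are illustrative assumptions; this shows the current,
# simpler pattern that the autonomous agents build on.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; substitute whatever is available
    messages=[
        {"role": "system", "content": "You write small, correct Python functions."},
        {"role": "user", "content": "Write a function that parses an ISO-8601 "
                                    "date string and returns the weekday name."},
    ],
)

print(response.choices[0].message.content)  # the generated code, as text
```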
Competition Between OpenAI and Anthropic
As OpenAI makes its moves, it faces stiff competition from Anthropic, a rapidly growing AI company. Anthropic's releases of coding-focused AI models continue to push OpenAI to refine its agents further. This rivalry is more than a race between two firms; it is driving rapid progress across the whole industry as both companies set new standards for AI-powered coding tools. As they compete, developers and users alike stand to benefit from the high-quality, innovative tools the race produces.
Privacy and Security Issues
The AI agents also raise privacy and security issues. If these agents can gain access to user devices, concerns over data and personal privacy follow. Integrating the agents securely will require the utmost care, because developers rely on the integrity of their systems. Balancing AI's powerful benefits with the necessary security measures will be a key determinant of adoption. Careful planning will also be required to integrate these agents into current workflows without disrupting established standards and best practices for secure coding.
Changing Market and Skills Environment
OpenAI and Anthropic are among the leaders of changes that will remake both the market and the skills required in software engineering. As AI becomes more central to coding, the industry will shift, creating new kinds of jobs and requiring developers to adapt to new tools and technologies. Extensive reliance on AI for code creation is also likely to invite fresh investment in the tech sector and accelerate the broadening of the AI market.
The Future of AI in Coding
OpenAI's rapidly evolving AI agents mark the opening of a new chapter in the intersection of AI and software development, promising to make coding faster, more efficient, and accessible to a wider audience of developers. OpenAI's continued development will shape the future of this field, presenting exciting opportunities and serious challenges that could change the face of software engineering in the foreseeable future.