
Fastest Supercomputer Advances Manhattan Project Simulations

 


For years, the cryptocurrency industry has feared the day when computers become powerful enough to crack blockchains and take down networks like Bitcoin and Ethereum. That day may be closer than many think, yet even at the speeds of today's fastest supercomputers, only quantum computers are expected to pose a realistic threat.

Scientists from Lawrence Livermore National Laboratory have announced that their latest supercomputer, El Capitan, can complete 2.79 quintillion calculations in one second, making it the fastest supercomputer in the world. That is 2.79 followed by 18 zeroes. To put El Capitan's performance into perspective, more than a million iPhones or iPads would need to work simultaneously on a single calculation to match what El Capitan does in a second, according to Jeremy Thomas of Lawrence Livermore National Laboratory.

"That stack of phones is over five miles high. That is an enormous amount of phones." There was a big announcement made on Monday during the annual SC Conference in Atlanta, Georgia, a conference that focuses on high-performance computing and focuses on the very latest developments related to it. Among the top 500 most powerful supercomputers in the world, El Capitan has been named among the top 100 in the Top 500 Project's bi-annual list of the 500 most powerful supercomputers. 

Lawrence Livermore National Laboratory, located in Livermore, California, developed El Capitan in collaboration with Hewlett Packard Enterprise, AMD, and the Department of Energy. As the name implies, supercomputers are built to run complex workloads such as simulations, artificial intelligence development, and research at far higher speeds than an average computer.

A computer such as El Capitan can perform roughly 2.7 quintillion operations per second, up to 5.4 million times faster than an average home computer. Thomas compared El Capitan's computational power to a staggering human effort, estimating that it would take more than 8 billion people working simultaneously for eight years to achieve what the machine accomplishes in a single second.
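As a rough sanity check (a back-of-the-envelope sketch, assuming each person manages about one calculation per second), the human-effort comparison lines up with a machine operating at the quintillion scale:

# Back-of-the-envelope check of the "8 billion people for 8 years" comparison.
# Assumption: each person performs roughly one calculation per second.
people = 8e9
seconds = 8 * 365 * 24 * 60 * 60            # about 2.5e8 seconds in eight years
human_ops = people * seconds                # roughly 2e18 calculations
el_capitan_ops_per_second = 2.79e18         # peak rate cited in this article

print(f"Combined human effort: {human_ops:.2e} calculations")
print(f"El Capitan in one second: {el_capitan_ops_per_second:.2e} calculations")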

The extraordinary capabilities of El Capitan have sparked discussions about its potential implications for industries reliant on robust cryptographic systems, particularly blockchain technology. The blockchain ecosystem, which depends heavily on secure encryption methods, has raised concerns about whether such a powerful machine could undermine its foundational security principles. 

Despite these apprehensions, experts in blockchain encryption say the fears are largely unfounded. Yannik Schrade, CEO and co-founder of Arcium, explained to Decrypt that breaking the security of blockchain systems would require an overwhelming computational feat. "An attacker would need to brute-force every possible private key," Schrade noted.

To put it into perspective, a 256-bit private key has 2^256 possible values, and an attacker attempting to compromise transactions would need to test them exhaustively. Even with the power of El Capitan, this remains practically unachievable within any reasonable timeframe, reaffirming the resilience of blockchain cryptographic systems against even the most advanced conventional machines.
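To make that scale concrete, here is a minimal sketch of the arithmetic, generously assuming that testing one key costs only a single calculation:

# Rough estimate of brute-forcing a 256-bit private key at El Capitan's speed.
# Assumption: one key test per calculation, which is wildly optimistic.
keyspace = 2 ** 256                         # about 1.16e77 possible private keys
ops_per_second = 2.79e18                    # El Capitan's cited peak rate
seconds_per_year = 365 * 24 * 60 * 60

years = keyspace / ops_per_second / seconds_per_year
print(f"{years:.2e} years to exhaust the keyspace")   # on the order of 1e51 years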

These insights emphasize the sophistication and continued reliability of cryptographic standards in safeguarding blockchain security, even as computational technologies advance to unprecedented levels.

Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI functions for its Ray-Ban Meta augmented reality (AR) glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe.

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant solely through their voice, allowing them to ask general questions and receive answers through the glasses.

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment supports French, Italian, and Spanish in addition to English. The announcement comes more than a year after the Ray-Ban Meta glasses were first released in September 2023.

In a blog post outlining the update, Meta stated, “We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon.” However, not all of the features accessible in other regions will be included in the European rollout.

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

Malicious Python Packages Target Developers Using AI Tools





The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering a malware known as "JarkaStealer" to their victims.


Exploiting Developers’ Interest in AI

Free and freemium generative AI platforms are gaining popularity, but most of their advanced features cost money. That has led some developers to hunt for free alternatives, and many do not check the source carefully before installing them. Cybercrime follows trends, and the current trend is malicious code inserted into open-source software packages that appear legitimate at first glance.

As George Apostopoulos, a founding engineer at Endor Labs, describes, attackers target less cautious developers, lured by free access to popular AI tools. "Many people don't know better and fall for these offers," he says.


The Harmful Python Packages

Two malicious Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index (PyPI), the official repository for open-source Python projects. Both were published by the user "Xeroline" and promised API integrations with OpenAI's GPT-4 Turbo model and Anthropic's Claude chatbot.

While the packages appeared to work by connecting users to a demo version of ChatGPT, their true functionality was far more sinister: the code could drop a Java archive (JAR) file that delivered the JarkaStealer malware onto unsuspecting victims' systems.


What Is JarkaStealer?

JarkaStealer is an infostealer malware that extracts sensitive information from infected systems. It has been sold on the dark web for as little as $20, with more elaborate features available for a few dollars more. It is designed to steal browser data, session tokens, and credentials for apps like Telegram, Discord, and Steam, and it can also take screenshots of the victim's system, often capturing sensitive information.

Though the malware's effectiveness is uncertain, it is cheap and readily available, making it an attractive tool for many attackers. Its source code is also freely accessible on platforms like GitHub, giving it even wider reach.


Lessons for Developers

This incident highlights the risks of downloading unverified open-source packages, especially when working with emerging technologies such as AI. Development teams should vet all software sources rather than reach for shortcuts that promise free premium tools. Taking such precautions can save individuals and organizations from becoming victims of attacks like this one.
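One simple, illustrative precaution is to review a project's public metadata before installing it. The sketch below queries PyPI's JSON API and prints details worth eyeballing, such as the author and release history; the package name at the end is only a placeholder example:

# Minimal sketch: inspect a PyPI project's metadata before installing it.
import json
import urllib.request

def inspect_package(name: str) -> None:
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    info = data["info"]
    print("Name:    ", info["name"])
    print("Author:  ", info.get("author") or "unknown")
    print("Homepage:", info.get("home_page") or "none listed")
    print("Releases:", len(data.get("releases", {})))
    print("Summary: ", info.get("summary"))

# Substitute the package you are about to install; a brand-new project with
# no homepage and a single release deserves extra scrutiny.
inspect_package("requests")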

By exercising caution and following best practices, developers can protect themselves from malicious actors looking to exploit the GenAI boom.

PyPI Attack: Hackers Use AI Models to Deliver JarkaStealer via Python Libraries


Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. 

The supply chain campaign illustrates how cyber threats targeting developers are evolving and underscores the need for caution in open-source ecosystems.


About attack vector

Called gptplus and claudeai-eng, the packages were uploaded last year by a user called "Xeroline" and drew 1,748 and 1,826 downloads, respectively. Both libraries have since been removed from PyPI. According to Kaspersky, the malicious packages were uploaded to the repository by a single author and differed only in name and description.

The packages appeared to offer a way to access the GPT-4 Turbo and Claude AI APIs, but they contained malicious code that, upon installation, kicked off the deployment of malware.

Specifically, the "__init__.py" file in these packages contained Base64-encoded data with code to download a Java archive file ("JavaUpdater.jar") from a GitHub repository. If Java is not already present on the host, the code also fetches the Java Runtime Environment (JRE) from a Dropbox URL before running the JAR file.
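Because the malicious logic hid inside an innocuous-looking "__init__.py", a quick local review of a package's source can help before installation. The sketch below uses rough, illustrative heuristics (the patterns and the directory name are assumptions, not a real scanner) to flag long Base64-looking literals and dynamic-execution calls for manual review:

# Heuristic sketch: flag suspicious patterns in a package's Python files.
# These are rough indicators for manual review, not proof of malice.
import re
from pathlib import Path

BASE64_BLOB = re.compile(r"[A-Za-z0-9+/=]{120,}")               # long encoded literals
DYNAMIC_EXEC = re.compile(r"\b(exec|eval|__import__|b64decode)\s*\(")

def scan_package(source_dir: str) -> None:
    for path in Path(source_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        if BASE64_BLOB.search(text) or DYNAMIC_EXEC.search(text):
            print(f"Review manually: {path}")

# e.g. a folder obtained with `pip download <name> --no-deps`
scan_package("./downloaded_package")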

The impact

The JAR file carries the JarkaStealer information stealer, which can harvest a variety of sensitive data, including web browser data, system data, session tokens, and screenshots, from applications such as Steam, Telegram, and Discord.

In the last step, the stolen data is archived, sent to the attacker's server, and then deleted from the victim's machine. JarkaStealer is offered under a malware-as-a-service (MaaS) model through a Telegram channel for between $20 and $50, although its source code has also been leaked on GitHub.

ClickPy statistics suggest the packages were downloaded over 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of a year-long supply chain campaign.

How JarkaStealer steals

  • Steals web browser data: cookies, browsing history, and saved passwords.
  • Compromises system data, stealing OS details and user login information.
  • Steals session tokens from apps like Discord, Telegram, and Steam.
  • Captures real-time desktop activity through screenshots.

The stolen information is compressed, transmitted to a remote server controlled by the attacker, and then deleted from the target's device.

How Agentic AI Will Change the Way You Work



Artificial intelligence is entering a groundbreaking phase that could drastically change the way we work. For years, AI has been used for prediction and content creation, but the spotlight has now shifted to something more advanced: agentic AI. These intelligent systems are not merely tools; they can act, decide, and manage complex tasks on their own. This third wave of AI could take workplaces by storm, so it is worth understanding what is coming.


A Quick History of AI Evolution

To grasp the significance of agentic AI, let’s revisit AI’s journey. The first wave, predictive AI, helped businesses forecast trends and make data-based decisions. Then came generative AI, which allowed machines to create content and have human-like conversations. Now, we’re in the third wave: agentic AI. Unlike its predecessors, this AI can perform tasks on its own, interact with other AI systems, and even collaborate without constant human supervision.


What Makes Agentic AI Special

Imagine agentic AI as an upgrade to the norm. Traditional AI systems follow prompts: they respond to questions or generate text. Agentic AI, however, takes the initiative. Agents can handle an entire task, such as resolving a customer's problem or organising schedules, within set rules. They can even collaborate with other AI agents to deliver results more efficiently. In customer service, for instance, an agentic AI can answer questions, process returns, and help users without a human stepping in.
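In practice, an "agent" is usually a loop that plans, calls tools, observes the results, and stops when the goal is met or a guardrail is hit. The toy sketch below is purely illustrative: the tool names and fixed plan are stand-ins for decisions a real model would make.

# Toy sketch of an agentic loop for a customer-return task: plan, act, observe.
def look_up_order(order_id: str) -> dict:
    # Stand-in for a real order-system lookup.
    return {"order_id": order_id, "eligible_for_return": True}

def issue_return_label(order_id: str) -> str:
    # Stand-in for a real fulfilment API call.
    return f"return-label-for-{order_id}"

TOOLS = {"look_up_order": look_up_order, "issue_return_label": issue_return_label}

def run_agent(order_id: str) -> str:
    plan = ["look_up_order", "issue_return_label"]   # a real agent would choose its own steps
    last_result = None
    for step in plan:
        last_result = TOOLS[step](order_id)
        if step == "look_up_order" and not last_result["eligible_for_return"]:
            return "Escalate to a human agent"       # guardrail: humans handle exceptions
    return f"Done: {last_result}"

print(run_agent("A1234"))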


How Will Workplaces Change?

Agentic AI introduces a new way of working. Imagine an office where AI agents manage distinct tasks, such as analysing data or communicating with clients, while humans supervise. The change is already generating new jobs, such as AI trainers and coordinators who coach these systems to improve their performance. Some roles may be fully automated, while others will be transformed into collaborations between humans and AI.


Real-Life Applications

Agentic AI is already proving useful in many areas. It can, for example, help compile patient summaries in healthcare or process claims in finance. Imagine a personal AI assistant negotiating with a company's AI for the best car rental deal, or participating in meetings alongside colleagues and suggesting insights and ideas based on what it knows. The possibilities are vast, and humans working with their AI counterparts could redefine efficiency.


Challenges and Responsibilities

With great power comes great responsibility. If an AI agent makes the wrong decision, the results could be dire. Companies are therefore setting firm boundaries on what these systems can and cannot do, and critical decisions will still require human approval to maintain safety and trust. Transparency also matters: people must know when they are interacting with an AI rather than a human.


Adapting to the Future

With the rise of agentic AI, the question is not just about new technology but about how work itself will change. Professionals will need to acquire new competencies, such as managing and cooperating with agents, while organisations will need to redesign workflows to include these intelligent systems. The shift promises to reward early adopters over laggards.

Agentic AI represents more than a technological breakthrough; it is an opportunity to make workplaces smarter, more innovative, and more efficient. Are we ready for this future? Only time will tell.

 

AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative AI (artificial intelligence) highlights how urgent it is to address privacy and ethical issues related to the use of these technologies across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains as well as the pressing necessity for Data Protection Officers (DPOs) to handle the issues it presents for their businesses. 

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten specific individuals as well as the public at large, the leakage of sensitive personal information in model outputs, the inherent bias in generative algorithms, and the overestimation of AI capabilities that results in inaccurate output (also known as AI hallucinations), which often refer to real individuals. 

So, what are the AI-driven dark patterns? These are deceptive UI strategies that use AI to influence application users into making decisions that favour the company rather than the user. These designs employ user psychology and behaviour in more sophisticated ways than typical dark patterns. 

Imagine getting a video call from your bank manager (created by a deepfake) informing you of some suspicious activity on your account. The AI customises the call for your individual bank branch, your bank manager's vocal patterns, and even their look, making it quite convincing. This deepfake call could tempt you to disclose sensitive data or click on suspicious links. 

Another alarming example of AI-driven dark patterns may be hostile actors creating highly targeted social media profiles that exploit your child's flaws. The AI can analyse your child's online conduct and create fake friendships or relationships that could trick the child into disclosing personal information or even their location to these people. Thus, the question arises: what can we do now to minimise these ills? How do we prevent future scenarios in which cyber criminals and even ill-intentioned organisations contact us and our loved ones via technologies on which we have come to rely for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns necessitates a multifaceted approach that includes consumers, developers, and regulatory organisations. The globally recognised privacy principles of data quality, data collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation are universally applicable to all systems that handle personal data, including training algorithms and generative AI. We must now test these principles to discover if they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people about AI-driven dark patterns and fraudulent techniques. This can be accomplished through public awareness campaigns, educational tools at all levels of the education system, and warnings built into user interfaces, particularly on social media platforms popular with young people. Just as cigarette firms must disclose the risks of their products, so should the AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable users, to be critical consumers of information they come across online, especially when dealing with AI systems. In the twenty-first century, our educational systems should train members of society to question (far more) the source and intent of AI-generated content. 

Give the younger generation, and the older ones too, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users, or the parents of young users, to opt out of AI-powered suggestions or data collection. Governments and regulatory agencies play an important role in establishing clear rules for AI development and use. The European Union's long-awaited AI Act, the first law of its kind, puts many of these data protection and ethical concerns into action. It is a positive start.

Improving GPS Technology with Insights from Android Phones

 


When navigation apps drift off course, the cause may lie in a region 50 to 200 miles overhead: the ionosphere, a layer of the Earth's atmosphere. Free electrons in this layer can, under certain conditions, become extremely concentrated, slowing GPS signals as they travel between satellites and devices.

That delay, much like being stuck on a crowded city street and arriving late to work, is a major contributor to navigation errors. As reported in Nature this week, a team of Google researchers demonstrated that GPS signal measurements collected from millions of anonymised Android devices can be used to map the ionosphere.

A single phone's signal cannot tell researchers much about the ionosphere on its own, but the problem shrinks when there are many devices to compare against. Using this vast network of Android phones, the researchers were able to map the ionosphere with a precision that matches or exceeds that of dedicated monitoring stations. In regions such as India and Central Africa, the Android-based technique proved far more accurate than relying on monitoring stations alone.

The "traffic" in question is total electron content (TEC), a measure of the number of free electrons along a signal's path through the ionosphere. TEC is normally measured using satellites and ground stations. These detection tools are effective, but they are also relatively expensive and difficult to build and maintain, which means they are far less common in developing regions of the world.
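The delay itself can be approximated with the standard first-order relation used in GPS work: the extra signal path is roughly 40.3 × TEC / f² metres, where TEC is the electron content along the path and f is the carrier frequency. A short sketch for the GPS L1 frequency (the 10-TEC-unit value is just an assumed typical daytime figure):

# First-order ionospheric delay: extra path (metres) ~ 40.3 * TEC / f**2
SPEED_OF_LIGHT = 299_792_458.0      # m/s
L1_FREQUENCY = 1575.42e6            # GPS L1 carrier frequency, Hz
TEC = 10 * 1e16                     # 10 TEC units (electrons per square metre)

extra_path_m = 40.3 * TEC / L1_FREQUENCY ** 2
delay_ns = extra_path_m / SPEED_OF_LIGHT * 1e9
print(f"Extra path: {extra_path_m:.2f} m (about {delay_ns:.1f} ns of delay)")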

Because monitoring stations are unevenly distributed, the accuracy of global ionospheric maps varies from place to place. Rather than building new stations, the Google researchers turned to something more than half of the world's population already owns: mobile phones. In an interview with Popular Science, Google researcher Brian Williams discussed how changes in the ionosphere had been hindering GPS accuracy in his work on Android products.

A sudden change in the ionosphere can undermine GPS accuracy. Beyond contributing to scientific advances, Williams sees the project as an opportunity to improve accuracy and provide a more useful service to mobile device users. "Rather than considering ionosphere interference with GPS positioning as an obstacle, the right thing to do is to flip the idea and imagine the GPS receiver as an instrument to measure the ionosphere," Williams commented.

"The ionosphere can be seen in a completely different light by combining the measurements made by millions of phones, as compared to what would otherwise be possible." Together, these Android phones effectively form a distributed sensor network. GPS receivers are built into most smartphones to measure radio signals beamed from satellites in medium Earth orbit (MEO), roughly 12,500 miles above us.

A receiver determines your location by calculating its distance from several satellites, typically to an accuracy of about 15 feet. The ionosphere, however, distorts and delays these signals on their way to Earth. Many factors influence the size of the resulting error, including the season, the time of day, and distance from the equator.

Most phone receivers include a built-in correction model that can roughly halve the estimated error. The Google researchers wanted to see whether measurements taken directly from the receivers built into Android smartphones could replicate the ionosphere mapping performed by far more advanced monitoring stations.

Monitoring stations do hold clear advantages over mobile phones on a unit-for-unit basis. They have much larger antennas, and they sit under clear open skies, whereas phones are often obscured by urban buildings or tucked into the pockets of their owners' jeans.

In addition, every phone carries its own measurement bias, which can be off by several microseconds. Even so, the sheer number of phones makes up for what each lacks individually. Beyond these immediate benefits, the Android ionosphere maps offer less obvious ones as well: analysing Android receiver measurements, the researchers detected a signal of electromagnetic activity that matched a pair of powerful solar storms earlier this year.

One storm peaked over North America between May 10 and 11, 2024. During peak activity, smartphone measurements of the ionosphere over the area showed a clear spike followed by a rapid depletion. While monitoring stations also detected the storm, the study highlights that phone-based measurements in regions lacking such stations could provide critical insight into solar storms and geomagnetic activity that might otherwise go unnoticed. This additional data gives scientists a valuable opportunity to deepen their understanding of these atmospheric phenomena and improve preparation and response strategies for potentially hazardous events.

According to Williams, the ionosphere maps generated using phone-based measurements reveal dynamics in certain locations with a level of detail previously unattainable. This advanced perspective could significantly aid scientific efforts to understand the impact of geomagnetic storms on the ionosphere. By integrating data from mobile devices, researchers can bridge gaps left by traditional monitoring methods, offering a more comprehensive understanding of the ionosphere’s behaviour. This approach not only paves the way for advancements in atmospheric science but also strengthens humanity’s ability to anticipate and mitigate the effects of geomagnetic disturbances, fostering greater resilience against these natural occurrences.

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is reportedly delivering diminishing returns, even as the company manages the expectations created by its recent fundraising.

According to The Information, OpenAI's upcoming AI model, codenamed Orion, is showing smaller performance gains over its predecessors than earlier generational leaps delivered. In staff testing, Orion reportedly reached GPT-4 performance levels after only 20% of its training.

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” the report notes, citing OpenAI employees. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.

With this financial backing come higher investor expectations, as well as technical hurdles that confound typical AI scaling approaches. If early versions do not live up to expectations, OpenAI's future fundraising may not look as attractive. The limitations described in the report underscore a major difficulty for the entire AI industry: the dwindling supply of high-quality training data and the need to stay relevant in an increasingly competitive environment.

A June research paper (PDF) predicts that between 2026 and 2032, AI companies will exhaust the supply of publicly accessible human-generated text data. Developers have "largely squeezed as much out of" the data that has fuelled the tremendous gains in AI of recent years, according to The Information. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges.

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

DNA Testing Firm Atlas Biomed Vanishes, Leaving Customers in the Dark About Sensitive Data

A prominent DNA-testing company, Atlas Biomed, appears to have ceased operations without informing customers about the fate of their sensitive genetic data. The London-based firm previously offered insights into genetic profiles and predispositions to illnesses, but users can no longer access their online reports. Efforts by the BBC to contact the company have gone unanswered.

Customers describe the situation as "very alarming," with one stating they are worried about the handling of their "most personal information." The Information Commissioner’s Office (ICO) confirmed it is investigating a complaint about the company. “People have the right to expect that organisations will handle their personal information securely and responsibly,” the ICO said.

Several customers shared troubling experiences. Lisa Topping, from Essex, paid £100 for her genetic report, which she accessed periodically online—until the site vanished. “I don’t know how comfortable I feel that they have just disappeared,” she said.

Another customer, Kate Lake from Kent, paid £139 in 2023 for a report that was never delivered. Despite being promised a refund, the company went silent. “What happens now to that information they have got? I would like to hear some answers,” she said.

Attempts to reach Atlas Biomed have been fruitless. Phone lines are inactive, its London office is vacant, and social media accounts have been dormant since mid-2023.

The firm is still registered as active with Companies House but has not filed accounts since December 2022. Four officers have resigned, and two current officers share a Moscow address with a Russian billionaire who is a former director. Cybersecurity expert Prof. Alan Woodward called the Russian links “odd,” stating, “If people knew the provenance of this company and how it operates, they might not trust them with their DNA.”

Experts highlight the risks associated with DNA testing. Prof. Carissa Veliz, author of Privacy is Power, warned, “DNA is uniquely yours; you can’t change it. When you give your data to a company, you are completely at their mercy.”

Although no evidence of misuse has been found, concerns remain over what has become of the company’s DNA database. Prof. Veliz emphasized, “We shouldn’t have to wait until something happens.”

Join Group Calls Easily on Signal with New Custom Link Feature





Signal, the encrypted messaging service, has added new features that make it easier to join group calls through personalised links. The update, announced recently in a blog post, aims to simplify how group calls are set up and managed on the service.


Group Calls via Custom Link Easily Accessible


In the past, a group call on Signal began with creating a group chat. Signal has now added the ability to create and share a direct link for a group call, so users no longer have to set up a group chat just to make the call. To create a call link, open the app's calls tab and tap to start a new call link. Links can be given a user-friendly name, and hosts can require approval of new invitees before they join, adding another layer of control.


The call links are also reusable, which is handy for recurring meetings such as weekly team calls. Signal group calls now support up to 50 participants, making them useful for larger groups.


More Call Control


This update also introduces better management tools for group calls. Hosts can remove participants if needed and even block them from rejoining, giving them more control over who has access to the call and improving safety and participant management.


New Interactive Features for Group Calls


Besides call links, Signal has added interactive tools for use during group calls. A "raise hand" button lets participants signal that they would like to speak, helping keep group discussions organised, and emoji reactions let users respond in a call without interrupting the speaker.


Signal has also improved the call control interface, making it quicker to mute or unmute the microphone and turn the camera on or off, for a smoother and more efficient experience.


Rollout Across Multiple Platforms


The new features are being rolled out gradually across Signal's desktop, iOS, and Android versions. The updated app is available free of charge on the App Store for iPhone and iPad users. To access the new group calling features, users should update to the latest version of Signal.


With these additions, Signal has made group calling easier, more organised, and more intuitive, giving users greater control over calls for both personal and professional use.

The Evolution of Search Engines: AI's Role in Shaping the Future of Online Search

 

The search engine Google has been a cornerstone of the internet, processing over 8.5 billion daily search queries. Its foundational PageRank algorithm, developed by founders Larry Page and Sergey Brin, ranked search results based on link quality and quantity. According to Google's documentation, the system estimated a website's importance through the number and quality of links directed toward it, reshaping how users accessed information online.
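The idea behind PageRank can be illustrated with a small sketch: repeatedly redistribute each page's score along its outgoing links until the scores settle. This is a simplified power-iteration version over a toy three-page graph, not Google's production algorithm:

# Simplified PageRank by power iteration over a tiny link graph.
# The damping factor 0.85 is the value used in the original PageRank paper.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages          # dangling pages spread rank evenly
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy_web))   # "c" edges out the others: it is linked from both other pages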

Generative AI tools have introduced a paradigm shift, with major players like Google, Microsoft, and Baidu incorporating AI capabilities into their platforms. These tools aim to enhance user experience by providing context-rich, summarized responses. However, whether this innovation will secure their dominance remains uncertain as competitors explore hybrid models blending traditional and AI-driven approaches.

Search engines such as Lycos, AltaVista, and Yahoo once dominated, using directory-based systems to categorize websites. As the internet grew, automated web crawlers and indexing transformed search into a faster, more efficient process. Mobile-first development and responsive design further fueled this evolution, leading to disciplines like SEO and search engine marketing.

Google’s focus on relevance, speed, and simplicity enabled it to outpace competitors. As noted in multiple studies, its minimalistic interface, vast data advantage, and robust indexing capabilities made it the market leader, holding an average 85% share between 2014 and 2024.

AI-based platforms, including OpenAI's SearchGPT and Perplexity, have redefined search by contextualizing information. OpenAI’s ChatGPT Search, launched in 2024, summarizes data and presents organized results, enhancing user experience. Similarly, Perplexity combines proprietary and large language models to deliver precise answers, excelling in complex queries such as guides and summarizations.

Unlike traditional engines that return a list of links, Perplexity generates summaries annotated with source links for verification. This approach provides a streamlined alternative for research but remains less effective for navigational queries and real-time information needs, such as weather updates or sports scores.

While AI-powered engines excel at summarization, traditional search engines like Google remain integral for navigation and instant results. Innovations such as Google’s “Answer Box,” offering quick snippets above organic search results, demonstrate efforts to enhance user experience while retaining their core functionality.

The future may lie in hybrid models combining the strengths of AI and traditional search, providing both comprehensive answers and navigational efficiency. Whether these tools converge or operate as distinct entities will depend on how they meet user demands and navigate challenges in a rapidly evolving digital landscape.

AI-driven advancements are undoubtedly reshaping the search engine ecosystem, but traditional platforms continue to play a vital role. As technologies evolve, striking a balance between innovation and usability will determine the future of online search.

Volt Typhoon rebuilds malware botnet following FBI disruption

 


Botnet activity attributed to the Chinese threat group Volt Typhoon has recently risen again, leveraging techniques and infrastructure similar to those the group used before. SecurityScorecard reports that the botnet has made a comeback and is active once more. Microsoft first exposed Volt Typhoon in May 2023, after observing the threat actor, which it linked to the Chinese government, stealing data from critical infrastructure organizations in Guam and elsewhere on US territory.

Since September, the Chinese state-backed cyber espionage operation Volt Typhoon has been compromising Cisco and Netgear routers to rebuild its KV-Botnet malware network, which the FBI disrupted in January and which the group failed to revive shortly afterwards, reports say. A report by Lumen Technologies' Black Lotus Labs released in December 2023 revealed that Volt Typhoon's botnet was mostly powered by outdated devices from Cisco, Netgear, and Fortinet.

The botnet was used to transfer covert data and communicate over compromised networks. The US government announced earlier this year that the KV-Botnet had been neutralized: leveraging the botnet's command-and-control mechanisms, the FBI remotely removed the malware from infected routers and changed their settings so the devices could no longer be reached by the botnet.

Volt Typhoon, which is widely believed to be sponsored by the Chinese state, has now begun rebuilding the malware botnet that law enforcement disrupted in January. It is considered one of the most significant cyberespionage threat groups and is believed to have infiltrated critical U.S. infrastructure, among other networks around the world, for at least the past five years.

To accomplish their objectives, the attackers hack into SOHO routers and networking devices, such as Netgear ProSAFE firewalls, Cisco RV320s, DrayTek Vigor routers, and Axis IP cameras, and install proprietary malware that establishes covert communication channels and proxies and maintains persistent access to targeted networks.

The original KV-Botnet was built from a large collection of Cisco and Netgear routers more than five years old, devices that had reached end-of-life (EOL) status and were therefore no longer receiving security updates. The attackers infected these devices with the KV-Botnet malware and used them to hide the origin of follow-up attacks targeting critical national infrastructure (CNI) operations in the US and abroad.

Nine months after the takedown, SecurityScorecard says it has observed signs of Volt Typhoon returning, and that the group is not only present again but also "more sophisticated and determined". SecurityScorecard's Strike Team has been poring over millions of data points collected from the company's wider risk management infrastructure and has concluded that, after licking its wounds, the group is now adapting and digging in in new ways.

In its findings, the Strike Team highlighted the growing danger that Volt Typhoon poses. To stem the botnet's spread and its deepening tactics, governments and corporations urgently need to address weaknesses in legacy systems, public cloud infrastructures, and third-party networks, says Ryan Sherstobitoff, senior vice president of threat research and intelligence at SecurityScorecard. "Volt Typhoon is not only a botnet that has resilience, but it also serves as a warning.

In the absence of decisive action, this silent threat could trigger a critical infrastructure crisis driven by unresolved vulnerabilities." Volt Typhoon has recently been observed setting up new command servers on hosting services such as Digital Ocean, Quadranet, and Vultr, and registering fresh SSL certificates, in an effort to evade the authorities.

The group has escalated its attacks by exploiting vulnerabilities in legacy Cisco RV320/325 and Netgear ProSafe routers. According to Sherstobitoff, in a short span of time the group compromised roughly 30 per cent of the Cisco RV320/325 devices visible online worldwide. SecurityScorecard, which shared its findings with BleepingComputer, believes the choice of targets is likely driven by geographical factors.

The Volt Typhoon botnet appears poised to return to global operations. Although it is nowhere near its previous size, China's hackers are unlikely to abandon their mission. As a preventative measure, older routers should be replaced with current models and placed behind firewalls, remote access to admin panels should not be exposed to the internet, and default admin passwords should be changed.

To prevent exploitation of known vulnerabilities, SOHO routers should be kept up to date with the latest firmware as soon as it becomes available. The security firm notes that the new version of the botnet shares its fundamental infrastructure and techniques with previous Volt Typhoon campaigns. SecurityScorecard's analysis also found a compromised VPN device on the small Pacific island of New Caledonia: although the system had previously been taken down, researchers observed it once again routing traffic between the Asia-Pacific and American regions.

600 Million Daily Cyberattacks: Microsoft Warns of Escalating Risks in 2024


Microsoft emphasized in its 2024 annual Digital Defense report that the cyber threat landscape remains both "dangerous and complex," posing significant risks to organizations, users, and devices worldwide.

The Expanding Threat Landscape

Every day, Microsoft's customers endure more than 600 million cyberattacks, targeting individuals, corporations, and critical infrastructure. The rise in cyber threats is driven by the convergence of cybercriminal and nation-state activities, further accelerated by advancements in technologies such as artificial intelligence.

Monitoring over 78 trillion signals daily, Microsoft tracks activity from nearly 1,500 threat actor groups, including 600 nation-state groups. The report reveals an expanding threat landscape dominated by multifaceted attack types like phishing, ransomware, DDoS attacks, and identity-based intrusions.

Password-Based Attacks and MFA Evasion

Despite the widespread adoption of multifactor authentication (MFA), password-based attacks remain a dominant threat, making up more than 99% of all identity-related cyber incidents. Attackers use methods like password spraying, breach replays, and brute force attacks to exploit weak or reused passwords. Microsoft blocks an average of 7,000 password attacks per second, but the rise of adversary-in-the-middle (AiTM) phishing attacks, which bypass MFA, is a growing concern.
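Those figures are roughly consistent with each other, as a quick back-of-the-envelope check shows (assuming, as a simplification, that password attacks account for the bulk of the daily total):

# Quick consistency check of the report's figures.
password_attacks_per_second = 7_000
seconds_per_day = 24 * 60 * 60
per_day = password_attacks_per_second * seconds_per_day
print(f"{per_day:,} password attacks blocked per day")   # 604,800,000, close to the 600 million daily attacks cited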

Blurred Lines Between Nation-State Actors and Cybercriminals

One of the most alarming trends is the blurred lines between nation-state actors and cybercriminals. Nation-state groups are increasingly enlisting cybercriminals to fund operations, carry out espionage, and attack critical infrastructure. This collusion has led to a surge in cyberattacks, with global cybercrime costs projected to reach $10.5 trillion annually by 2025.

The Role of Microsoft in Cyber Defense

Microsoft's unique vantage point, serving billions of customers globally, allows it to aggregate security data from a broad spectrum of companies, organizations, and consumers. The company has reassigned 34,000 full-time equivalent engineers to security initiatives, focusing on enhancing defenses and developing phishing-resistant MFA. Additionally, Microsoft collaborates with 15,000 partners with specialized security expertise to strengthen the security ecosystem.

Game Emulation: Keeping Classic Games Alive Despite Legal Hurdles

 For retro gaming fans, playing classic video games from decades past is a dream, but it’s tough to do legally. This is where game emulation comes in — a way to recreate old consoles in software, letting people play vintage games on modern devices. Despite opposition from big game companies, emulation developers put in years of work to make these games playable. 

Game emulators work by reading game files, called ROMs, and creating a digital version of the console they were designed for. Riley Testut, creator of the Delta emulator, says it’s like opening an image file: the ROM is the data, and the emulator brings it to life with visuals and sound. 

Testut and his team spent years refining Delta, even adding new features like online multiplayer for Nintendo DS games. Some consoles are easy to emulate, while others are a challenge. Older systems like the Game Boy are simpler, but emulating a PlayStation requires recreating multiple processors and intricate hardware functions. Developers use tools like OpenGL or Vulkan to help with complex 3D graphics, especially on mobile devices. 

Emulators like Emudeck, popular on the Steam Deck, make it easy to access multiple games in one place. For those wanting an even more authentic experience, FPGA hardware emulation mimics old consoles precisely, though it’s costly. While game companies often frown on ROMs, some, like Xbox, use emulation to re-release classic games legally. 

However, legal questions remain, and complex licensing issues keep many games out of reach. Despite these challenges, emulation is thriving, driven by fans and developers who want to preserve gaming history. Though legal issues persist, emulation is vital for keeping classic games alive and accessible to new generations.

How to Protect Your Brand from Malvertising: Insights from the NCSC


Advertising is a key driver of revenue for many online platforms. However, it has also become a lucrative target for cybercriminals who exploit ad networks to distribute malicious software, a practice known as malvertising. The National Cyber Security Centre (NCSC) has been at the forefront of combating this growing threat, providing crucial guidance to help brands and advertising partners safeguard their campaigns and protect users.

What is Malvertising?

Malvertising refers to the use of online advertisements to spread malware. Unlike traditional phishing attacks, which typically rely on deceiving the user into clicking a malicious link, malvertising can compromise users simply by visiting a site where a malicious ad is displayed. This can lead to a range of cyber threats, including ransomware, data breaches, and financial theft.

The Scope of the Problem

The prevalence of malvertising is alarming. Cybercriminals leverage the vast reach of digital ads to target large numbers of victims, often without their knowledge. According to the NCSC, the complexity of the advertising ecosystem, which involves multiple intermediaries, exacerbates the issue, making it difficult to identify and block malicious ads before they reach the end user.

Best Practices for Mitigating Malvertising

To combat malvertising, the NCSC recommends adopting a defense-in-depth approach. Here are some best practices that organizations can implement:

  • Partnering with well-established and trusted ad networks can reduce the risk of encountering malicious ads. Reputable networks have stringent security measures and vetting processes in place.
  • Conducting regular security audits of ad campaigns can help identify and mitigate potential threats. This includes scanning for malicious code and ensuring that all ads comply with security standards.
  • Ad verification tools can monitor and block malicious ads in real-time. These tools use machine learning algorithms to detect suspicious activity and prevent ads from being displayed to users.
  • Educating users about the dangers of malvertising and encouraging them to report suspicious ads can help organizations identify and respond to threats more effectively.
  • Ensuring that websites are secure and free from vulnerabilities can prevent cybercriminals from exploiting them to distribute malvertising. This includes regularly updating software and using robust security protocols.

Case Studies of Successful Mitigation

Several organizations have successfully implemented these best practices and seen significant reductions in malvertising incidents. For example, a major online retailer partnered with a top-tier ad network and implemented comprehensive ad verification tools. As a result, they were able to block over 90% of malicious ads before they reached their customers.

ZKP Emerged as the "Must-Have" Component of Blockchain Security

 

Zero-knowledge proof (ZKP) has emerged as a critical security component in Web3 and blockchain because it ensures data integrity and increases privacy. It accomplishes this by allowing verification without exposing data. ZKP is employed on cryptocurrency exchanges to validate transaction volumes or values while safeguarding the user's personal information.
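The core trick, proving you know a secret without revealing it, can be illustrated with a classic Schnorr-style proof of knowledge. The sketch below uses deliberately tiny numbers for readability; real deployments use large groups and non-interactive variants:

# Toy Schnorr proof of knowledge: prove knowledge of x with y = g**x (mod p)
# without revealing x. Tiny parameters are chosen only for readability.
import random

p, q, g = 23, 11, 2        # g generates a subgroup of prime order q modulo p
x = 7                      # the prover's secret
y = pow(g, x, p)           # public value known to the verifier

r = random.randrange(1, q) # prover commits to a random nonce
t = pow(g, r, p)

c = random.randrange(1, q) # verifier sends a random challenge

s = (r + c * x) % q        # prover's response reveals nothing about x on its own

# Verifier accepts iff g**s == t * y**c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("Proof accepted; the secret x was never revealed.")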

In addition to ensuring privacy, it protects against fraud. Zero-knowledge cryptography, a class of algorithms that includes ZKP, enables complex interactions and strengthens blockchain security. Data is safeguarded from unauthorised access and modification while it moves through decentralised networks. 

Blockchain users are frequently asked to certify that they have sufficient funds to execute a transaction, but they may not want to disclose their entire balance. ZKP can verify that users meet the necessary standards during KYC processes on cryptocurrency exchanges without requiring users to share their paperwork. Building on this, Holonym offered Human Keys to ensure security and privacy in Zero Trust situations.

Each person is given a unique key that they can use to unlock their security and privacy rights. It strengthens individual rights through robust decentralised protocols and configurable privacy. The privacy-preserving principle applies to several elements of Web3 data security. ZKP involves complex cryptographic validations, and any effort to change the data invalidates the proof. 

Trustless data processing eases smart contract developer work 

Smart contract developers are now working with their hands tied, limited to self-referential opcodes that cannot provide the information required to assess blockchain activities. To that end, the Space and Time platform's emphasis on enabling trustless, multichain data processing and strengthening smart contracts is worth mentioning, since it ultimately simplifies developers' work. 

Their SXT Chain, a ZKP data blockchain, is now live on testnet. It combines decentralised data storage and blockchain verification. Conventional blockchains focus on transactions; SXT Chain, however, allows for advanced data querying and analysis while preserving data integrity through blockchain technology.

The flagship DeFi generation introduced yield farming and platforms like Aave and Uniswap. The new one includes tokenized real-world assets, blockchain lending with dynamic interest rates, cross-chain derivatives, and increasingly complicated financial products. 

To unlock Web3 use cases, a crypto-native, trustless query engine is required, which allows for more advanced DeFi by providing smart contracts with the necessary context. Space and Time is helping to offer one by extending on Chainlink's aggregated data points with a SQL database, allowing smart contract authors to execute SQL processing on any part of Ethereum's history. 

Effective and fair regulatory model 

ZKP allows for selective disclosure, in which just the information that regulators require is revealed. Web3 projects comply with KYC and AML rules while protecting user privacy. ZKP even opens up the possibility of a tiered regulation mechanism based on existing privacy models. Observers can examine the ledger for unusual variations and report any suspect accounts or transactions to higher-level regulators. 

Higher-level regulators reveal particular transaction data. The process is supported by zero-knowledge SNARKs (Succinct Non-interactive Arguments of Knowledge) and attribute-based encryption. These techniques use ZKP to ensure consistency between transaction and regulatory information, preventing the use of fake information to escape monitoring. 

Additionally, ZK solutions let users withdraw funds in a matter of minutes, whereas optimistic rollups take approximately a week to finalise transactions and process withdrawals.

How OpenAI’s New AI Agents Are Shaping the Future of Coding

 


OpenAI is taking on the challenge of building powerful AI agents designed specifically to reshape the future of software development. These agents are advanced enough to interpret plain-language instructions and generate complex code, aiming to complete in minutes tasks that would otherwise take hours. It is one of the biggest leaps AI has made to date, promising a future in which developers spend more time on creative work and less on repetition.

Transforming Software Development

These AI agents represent a major change in how software is created and implemented. Unlike typical coding assistants, which suggest completions line by line, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. In principle, developers could work far more efficiently, automating repetitive coding and focusing on innovation and harder problems. The agents are, in effect, advanced assistants capable of handling far more complex programming requirements than a typical helper.


Competition from OpenAI with Anthropic

As OpenAI makes its moves, it faces stiff competition from Anthropic, a fast-growing AI company. Anthropic's own AI models focused on coding continue to push OpenAI to refine its agents further. The rivalry is more than a race between two firms; it is driving rapid progress across the whole industry, as both companies set new standards for AI-powered coding tools. As they compete, developers and users alike stand to benefit from the high-quality, innovative tools that emerge.


Privacy and Security Issues

The AI agents also raise privacy and security issues. If these agents can access user devices, questions arise over data and personal privacy. Integrating them securely will require great care, because developers depend on the integrity of their systems. Balancing AI's powerful benefits against the necessary security measures will be a key determinant of adoption. Careful planning will also be needed to fit these agents into existing workflows without disrupting established secure-coding standards and best practices.


Changing Market and Skills Environment

OpenAI and Anthropic are leading many of the changes that will reshape both the market and the skills required in software engineering. As AI becomes more central to coding, the industry will shift and new kinds of jobs will emerge, requiring developers to adapt to new tools and technologies. Heavy reliance on AI for code creation is also likely to attract fresh investment in the tech sector and accelerate the growth of the AI market.


The Future of AI in Coding

OpenAI's rapidly evolving AI agents open a new chapter at the intersection of AI and software development, promising to make coding faster, more efficient, and accessible to a wider audience of developers. OpenAI's continued work will keep shaping this field, bringing both exciting opportunities and serious challenges that could change the face of software engineering in the foreseeable future.




UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, achieving moderate success despite some limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

The Growing Concern Regarding Privacy in Connected Cars

 

The data that connected cars collect and use can improve driving safety, efficiency, and the overall experience, but it also raises serious privacy concerns. The automotive industry's ability to collect, analyse, and exchange such data outpaces the legislative frameworks intended to protect individuals. In many cases, car owners have no information about, or control over, how their data is used, let alone how it is shared with third parties. 

The FIA European Bureau believes it is time to face these challenges head-on. As advocates for the rights of drivers and car owners, we are calling for clearer, more open policies that restore individuals' control over their data. This is why, in partnership with Privacy4Cars, we are hosting an event called "Driving Data Rights: Enhancing Privacy and Control in Connected Cars" on November 19th in Brussels. The event will bring together policymakers, industry executives, and civil society to explore current gaps in legislation and industry practices, as well as how we can secure stronger data protection for all. 

Balancing innovation with privacy 

A recent Privacy4Cars report identifies alarming industry patterns, showing that many organisations are not fully compliant with the GDPR. Data transparency, security, and consent mechanisms are often lacking, exposing consumers to data misuse. These findings highlight the critical need for reforms that give individuals more control over their data while ensuring that privacy is not sacrificed for the sake of innovation.

The benefits of connected vehicle data are apparent. Data has the potential to transform the automotive industry in a variety of ways, including improved road safety, predictive maintenance, and enhanced driving experiences. However, this should not come at the expense of individual privacy rights. 

As the automotive sector evolves, authorities and industry stakeholders must strike the right balance between innovation and privacy protection. Stronger enforcement of existing regulations, as well as new frameworks suited to the unique needs of connected vehicles, are required. Car owners should have a say in how their data is used and be confident that it is managed properly. 

Shaping the future of data privacy in cars 

The forthcoming event on November 19th will provide an opportunity to dig deeper into these concerns. Key stakeholders from the European Commission, the automotive industry, and the privacy community will meet to discuss the present legal landscape and what more can be done to protect individuals in this fast-changing environment. 

The agenda includes presentations from Privacy4Cars on the most recent findings on automotive privacy practices, a panel discussion with automotive industry experts, and case studies demonstrating real-world examples of data misuse and third-party access. 

Connected cars are the future of mobility, but that future must be founded on trust and transparency. By giving individuals authority over their personal data, we can build a system that benefits everyone: drivers, manufacturers, and society as a whole. The FIA European Bureau is committed to collaborating with all parties to make this happen.