
How Agentic AI Will Change the Way You Work



Artificial intelligence is entering a new phase that could drastically change the way we work. For years, AI has been used for prediction and content creation, but the spotlight has now shifted to something more advanced: agentic AI. These intelligent systems are not merely human tools; they can act, decide, and coordinate complex tasks on their own. This third wave of AI could take workplaces by storm, so it is important to understand what is coming.


A Quick History of AI Evolution

To grasp the significance of agentic AI, let’s revisit AI’s journey. The first wave, predictive AI, helped businesses forecast trends and make data-based decisions. Then came generative AI, which allowed machines to create content and have human-like conversations. Now, we’re in the third wave: agentic AI. Unlike its predecessors, this AI can perform tasks on its own, interact with other AI systems, and even collaborate without constant human supervision.


What Makes Agentic AI Special

Imagine agentic AI as an upgrade to the norm. Traditional AI systems follow prompts: they respond to questions or generate text. Agentic AI, however, takes the initiative. Agents can handle a whole task, say solving problems for customers or organising schedules, within set rules. They can even collaborate with other AI agents to deliver results far more efficiently. In customer service, for instance, an agentic AI can answer questions, process returns, and help users without a human stepping in.


How Will Workplaces Change?

Agentic AI introduces a new way of working. Imagine an office where AI agents manage distinct tasks, like analysing data or communicating with clients, while humans supervise. This change is already generating new jobs, such as AI trainers and coordinators who coach these systems to improve their performance. Some roles may be fully automated, while others will be transformed into collaborations in which humans and AI deliver results together.


Real-Life Applications

Agentic AI is already proving useful in many areas. In healthcare, for example, it can help compile patient summaries; in finance, it can resolve claims. Imagine your AI assistant negotiating with a rental company's AI for the best car rental deal, or participating in meetings alongside colleagues, offering insights and ideas based on what it knows. The possibilities are vast, and humans working with their AI counterparts could redefine efficiency.


Challenges and Responsibilities

With great power comes great responsibility. If an AI agent makes the wrong decision, the results could be dire. Companies are therefore setting firm boundaries on what these systems can and cannot do. Critical decisions will require human approval to ensure safety and trust. Transparency also matters: people must know when they are interacting with an AI rather than a human.


Adapting to the Future

With the rise of agentic AI, the question is not just one of new technology but of how work itself will change. Professionals will need to acquire new competencies, such as managing and cooperating with agents, while organisations will need to redesign workflows to incorporate these intelligent systems. This shift promises to benefit early adopters more than laggards.

Agentic AI represents more than a technological breakthrough; it is an opportunity to make workplaces smarter, more innovative, and more efficient. Are we ready for this future? Only time will tell.

 

AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative artificial intelligence (AI) highlights how urgent it is to address the privacy and ethical issues these technologies raise across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains, as well as the pressing need for Data Protection Officers (DPOs) to handle the issues it presents for their businesses.

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten specific individuals as well as the public at large; the leakage of sensitive personal information in model outputs; the inherent bias of generative algorithms; and the overestimation of AI capabilities, which results in inaccurate output (also known as AI hallucinations) that often refers to real individuals.

So, what are the AI-driven dark patterns? These are deceptive UI strategies that use AI to influence application users into making decisions that favour the company rather than the user. These designs employ user psychology and behaviour in more sophisticated ways than typical dark patterns. 

Imagine getting a video call from your bank manager (created by a deepfake) informing you of some suspicious activity on your account. The AI customises the call for your individual bank branch, your bank manager's vocal patterns, and even their look, making it quite convincing. This deepfake call could tempt you to disclose sensitive data or click on suspicious links. 

Another alarming example of AI-driven dark patterns may be hostile actors creating highly targeted social media profiles that exploit your child's flaws. The AI can analyse your child's online conduct and create fake friendships or relationships that could trick the child into disclosing personal information or even their location to these people. Thus, the question arises: what can we do now to minimise these ills? How do we prevent future scenarios in which cyber criminals and even ill-intentioned organisations contact us and our loved ones via technologies on which we have come to rely for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns necessitates a multifaceted approach that includes consumers, developers, and regulatory organisations. The globally recognised privacy principles of data quality, data collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation are universally applicable to all systems that handle personal data, including training algorithms and generative AI. We must now test these principles to discover if they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people on AI-driven dark patterns and fraudulent techniques. This can be accomplished through public awareness campaigns, educational tools at all levels of the education system, and the incorporation of warnings into user interfaces, particularly on social media platforms popular with young people. Just as cigarette firms must disclose the risks of their products, so should the AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable ones, to be critical consumers of the information they come across online, especially when dealing with AI systems. Twenty-first-century educational systems should train members of society to question far more rigorously the source and intent of AI-generated content.

We should also give the younger generation, and even older ones, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users, or parents of young users, to opt out of AI-powered suggestions or data collection. Governments and regulatory agencies play an important role in establishing clear rules and regulations for AI development and use. The European Union plans to propose its first such law this summer: the long-awaited EU AI Act puts many of these data protection and ethical concerns into action. This is a positive start.

Improving GPS Technology with Insights from Android Phones

 


Navigation apps can drift off course because of the ionosphere, a region of the Earth's atmosphere 50 to 200 miles overhead. This layer contains varying levels of free electrons which, under certain conditions, can become so concentrated that they slow GPS signals as they travel between satellites and devices.

That delay, much like the delay of navigating a crowded city street on the way to work, is a major contributor to navigation system errors. As reported in Nature this week, a team of Google researchers demonstrated that they could map the ionosphere using GPS signal measurements collected from millions of anonymised Android mobile devices.

A single mobile device's signal cannot tell researchers much about the ionosphere on its own, but that problem shrinks when there are millions of other devices to compare against. Using this vast network of Android phones, the researchers mapped the ionosphere with a precision matching or exceeding the accuracy of dedicated monitoring stations. In areas such as India and Central Africa, the Android technique proved far more accurate than listening stations alone.

The measure of this ionospheric "traffic" is called total electron content (TEC): the number of electrons in the ionosphere along a signal's path. TEC is normally measured using satellites and ground stations. These detection tools are effective, but they are also relatively expensive and difficult to build and maintain, which means they are deployed far less commonly in developing regions of the world.

This unequal access to monitoring stations leads to disparities in the accuracy of global ionospheric maps. The Google researchers sidestepped the problem by using something more than half of the world's population already possesses: mobile phones. In an interview with Popular Science, Google researcher Brian Williams discussed how changes in the ionosphere have been hindering GPS capabilities in his work on Android products.

Aside from contributing to scientific advances, Williams sees the project as an opportunity to improve accuracy and provide a more useful everyday service to mobile device users. "Rather than considering ionosphere interference with GPS positioning as an obstacle, the right thing to do is to flip the idea and imagine the GPS receiver as an instrument to measure the ionosphere, not as an obstacle," Williams commented. "The ionosphere can be seen in a completely different light by combining the measurements made by millions of phones, as compared to what would otherwise be possible."

In effect, the world's Android phones form a distributed sensor network. GPS receivers are integrated into most smartphones and measure radio signals beamed from satellites orbiting roughly 12,500 miles above us in medium Earth orbit (MEO).

A receiver determines your location by calculating its distance to several satellites, achieving an accuracy of approximately 15 feet. The ionosphere acts as a barrier that prevents these signals from travelling normally on their way to Earth. Many factors contribute to GPS measurement error, including the season, the time of day, and distance from the equator.
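The size of this ionospheric error can be sketched with the standard first-order relation from the GPS literature, delay ≈ 40.3 · TEC / f². The function below is an illustrative sketch using that textbook formula; the function name and constants are not taken from the article:

```python
def iono_delay_m(tec_units, freq_hz=1575.42e6):
    """First-order ionospheric group delay, in metres, on a GPS signal.

    tec_units: total electron content in TECU (1 TECU = 1e16 electrons/m^2).
    freq_hz:   carrier frequency; default is the GPS L1 band.
    """
    tec = tec_units * 1e16          # convert TECU to electrons per m^2
    return 40.3 * tec / freq_hz ** 2

# A typical daytime TEC of 10 TECU adds roughly 1.6 m of apparent range
# error on L1, which is why uncorrected receivers drift by several feet.
print(round(iono_delay_m(10), 2))
```

Because the delay scales with 1/f², dual-frequency receivers can cancel it almost entirely; single-frequency phone receivers must instead model it, which is where crowd-sourced TEC maps help.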

Most phone receivers include a built-in correction model that can cut the estimated error roughly in half. The Google researchers wanted to see whether measurements taken from the receivers built into Android smartphones could replicate the ionosphere mapping that takes place in more advanced monitoring stations.

There is no doubt that monitoring stations hold clear advantages over mobile phones. They have much larger antennas, and they sit under clear, open skies rather than being obscured by urban buildings or the pocket of a user's jeans.

In addition, every phone has its own measurement bias, which can be off by several microseconds. Even so, the sheer number of phones makes up for what each lacks individually. Beyond these immediate benefits, the Android ionosphere maps offer less obvious ones. The researchers found that analysing Android receiver measurements revealed a signal of electromagnetic activity matching a pair of powerful solar storms that occurred earlier this year.

According to the researchers, one storm occurred in North America between May 10 and 11, 2024. During the time of the peak activity, the ionosphere of that area was measured by smartphones and it showed a clear spike in activity followed by a quick depletion once again. The study highlights that while monitoring stations detected the storm, phone-based measurements of the ionosphere in regions lacking such stations could provide critical insights into solar storms and geomagnetic activity that might otherwise go unnoticed. This additional data offers a valuable opportunity for scientists to enhance their understanding of these atmospheric phenomena and improve preparation and response strategies for potentially hazardous events.

According to Williams, the ionosphere maps generated using phone-based measurements reveal dynamics in certain locations with a level of detail previously unattainable. This advanced perspective could significantly aid scientific efforts to understand the impact of geomagnetic storms on the ionosphere. By integrating data from mobile devices, researchers can bridge gaps left by traditional monitoring methods, offering a more comprehensive understanding of the ionosphere’s behaviour. This approach not only paves the way for advancements in atmospheric science but also strengthens humanity’s ability to anticipate and mitigate the effects of geomagnetic disturbances, fostering greater resilience against these natural occurrences.

OpenAI's Latest AI Model Faces Diminishing Returns

 

OpenAI's latest AI model is yielding diminishing returns just as recent investments raise the stakes.

The Information reports that OpenAI's upcoming AI model, codenamed Orion, is delivering smaller performance gains over its predecessors than earlier generational jumps did. Even so, in staff testing Orion reportedly reached GPT-4's performance level after only 20% of its training.

However, the shift from GPT-4 to the upcoming GPT-5 is expected to result in fewer quality gains than the jump from GPT-3 to GPT-4.

“Some researchers at the company believe Orion isn’t reliably better than its predecessor in handling certain tasks,” noted employees in the report. “Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee.”

AI training often yields the biggest improvements in performance in the early stages and smaller gains in subsequent phases. As a result, the remaining 80% of training is unlikely to provide breakthroughs comparable to earlier generational improvements. This predicament with its latest AI model comes at a critical juncture for OpenAI, following a recent investment round that raised $6.6 billion.
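The pattern of early gains outpacing later ones can be illustrated with a toy power-law scaling curve, the general shape reported in the scaling-law literature. The constants below are invented purely for illustration and are not OpenAI's actual numbers:

```python
def loss(budget_pct, a=10.0, alpha=0.3):
    """Illustrative power law: model loss falls as (training budget)^-alpha."""
    return a * budget_pct ** -alpha

# Improvement from the first 20% of the training budget...
early_gain = loss(1) - loss(20)
# ...versus the improvement from the remaining 80%.
late_gain = loss(20) - loss(100)

print(f"first 20%: {early_gain:.2f}, remaining 80%: {late_gain:.2f}")
```

Under any curve of this shape, the first fifth of training removes far more loss than the final four-fifths, which is consistent with Orion hitting GPT-4-level performance early while the rest of training adds comparatively little.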

With this financial backing come higher investor expectations, along with technical hurdles that confound typical AI scaling approaches. If these early versions do not live up to expectations, OpenAI's future fundraising prospects may not be as attractive. The limitations described in the report underscore a major difficulty for the entire AI industry: the decreasing availability of high-quality training data and the need to remain relevant in an increasingly competitive environment.

A research paper published in June (PDF) predicts that AI companies will exhaust the supply of publicly accessible human-generated text data between 2026 and 2032. Developers have "largely squeezed as much out of" the data that has been used to enable the tremendous gains in AI of recent years, according to The Information. OpenAI is fundamentally rethinking its approach to AI development in order to meet these challenges.

“In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law,” states The Information.

DNA Testing Firm Atlas Biomed Vanishes, Leaving Customers in the Dark About Sensitive Data

A prominent DNA-testing company, Atlas Biomed, appears to have ceased operations without informing customers about the fate of their sensitive genetic data. The London-based firm previously offered insights into genetic profiles and predispositions to illnesses, but users can no longer access their online reports. Efforts by the BBC to contact the company have gone unanswered.

Customers describe the situation as "very alarming," with one stating they are worried about the handling of their "most personal information." The Information Commissioner’s Office (ICO) confirmed it is investigating a complaint about the company. “People have the right to expect that organisations will handle their personal information securely and responsibly,” the ICO said.

Several customers shared troubling experiences. Lisa Topping, from Essex, paid £100 for her genetic report, which she accessed periodically online—until the site vanished. “I don’t know how comfortable I feel that they have just disappeared,” she said.

Another customer, Kate Lake from Kent, paid £139 in 2023 for a report that was never delivered. Despite being promised a refund, the company went silent. “What happens now to that information they have got? I would like to hear some answers,” she said.

Attempts to reach Atlas Biomed have been fruitless. Phone lines are inactive, its London office is vacant, and social media accounts have been dormant since mid-2023.

The firm is still registered as active with Companies House but has not filed accounts since December 2022. Four officers have resigned, and two current officers share a Moscow address with a Russian billionaire who is a former director. Cybersecurity expert Prof. Alan Woodward called the Russian links “odd,” stating, “If people knew the provenance of this company and how it operates, they might not trust them with their DNA.”

Experts highlight the risks associated with DNA testing. Prof. Carissa Veliz, author of Privacy is Power, warned, “DNA is uniquely yours; you can’t change it. When you give your data to a company, you are completely at their mercy.”

Although no evidence of misuse has been found, concerns remain over what has become of the company’s DNA database. Prof. Veliz emphasized, “We shouldn’t have to wait until something happens.”

Join Group Calls Easily on Signal with New Custom Link Feature





Signal, the encrypted messaging service, has added new features that make it easier to join group calls through personalised links. The update, announced recently in a blog post, sets out to simplify how group calls are set up and administered on the service.


Group Calls Easily Accessible via Custom Links


In the past, starting a group call on Signal meant first creating a group chat. Signal has now added the ability to create and share a direct link for a group call, so users no longer have to go through that setup just to make a call. To create a call link, open the app, go to the links tab, and tap to start a new call link. Links can be given a user-friendly name, and hosts can require approval of new invitees before they join, adding another layer of control.


Call links are also reusable, which is useful for recurring meetings such as weekly team calls. Signal group calls now support up to 50 participants, expanding their utility for larger groups.


More Call Control


This update also introduces better management tools for group calls. Users can remove participants if needed and even block them from rejoining. That gives hosts more control over who has access to the call, improving safety and participant management.


New Interactive Features for Group Calls


Besides call links, Signal has integrated interactive tools for use during group calls. A "raise hand" button lets participants indicate that they would like to speak, a further effort to keep group discussions organised. Signal also supports emoji reactions in calls, so users can participate without interrupting the speaker.


Signal has also improved the call-control interface, making it easier to mute or unmute the microphone and turn the camera on or off. This makes calls more fluid and efficient to manage.


Rollout Across Multiple Platforms


The new features are rolling out gradually across Signal's desktop, iOS, and Android versions. The updated app is available free of charge on the App Store for iPhone and iPad users. To use the new group calling features, users should update to the latest version of Signal.


With these additions, Signal has made group calling easier, more organised, and more intuitive, giving users greater control over calls for both personal and professional use.

The Evolution of Search Engines: AI's Role in Shaping the Future of Online Search

 

The search engine Google has been a cornerstone of the internet, processing over 8.5 billion daily search queries. Its foundational PageRank algorithm, developed by founders Larry Page and Sergey Brin, ranked search results based on link quality and quantity. According to Google's documentation, the system estimated a website's importance through the number and quality of links directed toward it, reshaping how users accessed information online.
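The core PageRank iteration described above can be sketched in a few lines. This is a simplified textbook version (uniform damping, no dangling-node handling), not Google's production algorithm; all names here are illustrative:

```python
def pagerank(links, d=0.85, iters=100):
    """Iteratively compute PageRank for a link graph.

    links: dict mapping each page to the list of pages it links to
           (every page is assumed to have at least one outgoing link).
    d:     damping factor, the probability a surfer follows a link.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with uniform rank
    for _ in range(iters):
        rank = {
            p: (1 - d) / n
               # each page q shares its rank equally among its outlinks
               + d * sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            for p in pages
        }
    return rank

# A links to B and C; B links to C; C links back to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

In this tiny graph, C ends up with the highest rank because it receives links from both A and B, which is exactly the "importance flows along links" intuition the article describes.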

Generative AI tools have introduced a paradigm shift, with major players like Google, Microsoft, and Baidu incorporating AI capabilities into their platforms. These tools aim to enhance user experience by providing context-rich, summarized responses. However, whether this innovation will secure their dominance remains uncertain as competitors explore hybrid models blending traditional and AI-driven approaches.

Search engines such as Lycos, AltaVista, and Yahoo once dominated, using directory-based systems to categorize websites. As the internet grew, automated web crawlers and indexing transformed search into a faster, more efficient process. Mobile-first development and responsive design further fueled this evolution, leading to disciplines like SEO and search engine marketing.

Google’s focus on relevance, speed, and simplicity enabled it to outpace competitors. As noted in multiple studies, its minimalistic interface, vast data advantage, and robust indexing capabilities made it the market leader, holding an average 85% share between 2014 and 2024.

AI-based platforms, including OpenAI's SearchGPT and Perplexity, have redefined search by contextualizing information. OpenAI’s ChatGPT Search, launched in 2024, summarizes data and presents organized results, enhancing user experience. Similarly, Perplexity combines proprietary and large language models to deliver precise answers, excelling in complex queries such as guides and summarizations.

Unlike traditional engines that return a list of links, Perplexity generates summaries annotated with source links for verification. This approach provides a streamlined alternative for research but remains less effective for navigational queries and real-time information needs, such as weather updates or sports scores.

While AI-powered engines excel at summarization, traditional search engines like Google remain integral for navigation and instant results. Innovations such as Google’s “Answer Box,” offering quick snippets above organic search results, demonstrate efforts to enhance user experience while retaining their core functionality.

The future may lie in hybrid models combining the strengths of AI and traditional search, providing both comprehensive answers and navigational efficiency. Whether these tools converge or operate as distinct entities will depend on how they meet user demands and navigate challenges in a rapidly evolving digital landscape.

AI-driven advancements are undoubtedly reshaping the search engine ecosystem, but traditional platforms continue to play a vital role. As technologies evolve, striking a balance between innovation and usability will determine the future of online search.

Volt Typhoon rebuilds malware botnet following FBI disruption

 


There has recently been a rise in botnet activity attributed to the Chinese threat group Volt Typhoon, leveraging techniques and infrastructure similar to those the group used before. SecurityScorecard reports that the botnet has made a comeback and is now active again. Microsoft first identified Volt Typhoon in May 2023, when it observed the group stealing data from critical infrastructure organizations in Guam, a US territory, and linked the activity to the Chinese government.

The Chinese state-backed cyber-espionage operation Volt Typhoon has compromised several Cisco and Netgear routers since September to rebuild its KV-Botnet malware, which the FBI disrupted in January, reports say. A December 2023 report by Lumen Technologies' Black Lotus Labs revealed that Volt Typhoon's botnet was mostly powered by outdated devices from Cisco, Netgear, and Fortinet.

The botnet was used to transfer data covertly and communicate over unsecured networks. In January, the US government announced that the Volt Typhoon botnet had been neutralized: leveraging the botnet's command-and-control mechanisms, the FBI remotely removed the malware from the routers and blocked the botnet's access to them.

Earlier this month, however, Volt Typhoon, which is widely believed to be sponsored by the Chinese state, began rebuilding its malware botnet. The group is considered one of the most significant cyber-espionage threats and is believed to have infiltrated critical US infrastructure, among other networks around the world, for at least the past five years.

To accomplish their objectives, the attackers hack into SOHO routers and networking devices, such as Netgear ProSAFE firewalls, Cisco RV320s, DrayTek Vigor routers, and Axis IP cameras, and install proprietary malware that establishes covert communication channels and proxies and maintains persistent access to targeted networks.

The original Volt Typhoon botnet was built from a large collection of Cisco and Netgear routers more than five years old, which had reached end-of-life (EOL) status and were therefore no longer receiving security updates. The attackers infected these devices with the KV-Botnet malware and used them to hide the origin of follow-on attacks targeting critical national infrastructure (CNI) operations in the US and abroad.

Nine months after the takedown, SecurityScorecard says it has observed signs of Volt Typhoon returning, seemingly not only present again but "more sophisticated and determined". As part of its investigation, SecurityScorecard's Strike Team has pored over millions of data points collected from the company's wider risk-management infrastructure and concluded that, after licking its wounds, the group is now adapting and digging in anew.

In their findings, the Strike Team highlighted the growing danger Volt Typhoon poses. Governments and corporations urgently need to address weaknesses in legacy systems, public cloud infrastructures, and third-party networks to combat the botnet's spread and deepening tactics, says Ryan Sherstobitoff, senior vice president of threat research and intelligence at SecurityScorecard. "Volt Typhoon is not only a resilient botnet, it also serves as a warning.

In the absence of decisive action, this silent threat could trigger a critical infrastructure crisis driven by unresolved vulnerabilities." Volt Typhoon has recently set up new command servers on hosting services such as Digital Ocean, Quadranet, and Vultr to evade the authorities, and has registered fresh SSL certificates for the same purpose.

The group has escalated its attacks by exploiting vulnerabilities in legacy Cisco RV320/325 and Netgear ProSafe routers. According to Sherstobitoff, in the short period the operation has been running, the group compromised 30 per cent of the visible Cisco RV320/325 devices worldwide. SecurityScorecard, which shared its findings with BleepingComputer, believes the threat actors' choice of targets is likely driven by geographical factors.

It would seem that the Volt Typhoon botnet will soon return to global operations; although it is nowhere near its previous size, China's hackers are unlikely to give up on their mission. As a preventative measure, older routers should be replaced with more current models and placed behind firewalls, remote access to admin panels should not be exposed to the internet, and admin account passwords should be changed from their defaults.

To prevent exploitation of known vulnerabilities, SOHO router users are strongly advised to install the latest firmware as soon as it becomes available. The security firm found similarities between previous Volt Typhoon campaigns and the new version of the botnet in its core infrastructure and techniques. SecurityScorecard's analysis also identified a compromised VPN device on the small Pacific island of New Caledonia: although previously taken down, it was observed once again routing traffic between the Asia-Pacific and American regions.

600 Million Daily Cyberattacks: Microsoft Warns of Escalating Risks in 2024


Microsoft emphasized in its 2024 annual Digital Defense report that the cyber threat landscape remains both "dangerous and complex," posing significant risks to organizations, users, and devices worldwide.

The Expanding Threat Landscape

Every day, Microsoft's customers endure more than 600 million cyberattacks, targeting individuals, corporations, and critical infrastructure. The rise in cyber threats is driven by the convergence of cybercriminal and nation-state activities, further accelerated by advancements in technologies such as artificial intelligence.

Monitoring over 78 trillion signals daily, Microsoft tracks activity from nearly 1,500 threat actor groups, including 600 nation-state groups. The report reveals an expanding threat landscape dominated by multifaceted attack types like phishing, ransomware, DDoS attacks, and identity-based intrusions.

Password-Based Attacks and MFA Evasion

Despite the widespread adoption of multifactor authentication (MFA), password-based attacks remain a dominant threat, making up more than 99% of all identity-related cyber incidents. Attackers use methods like password spraying, breach replays, and brute force attacks to exploit weak or reused passwords. Microsoft blocks an average of 7,000 password attacks per second, but the rise of adversary-in-the-middle (AiTM) phishing attacks, which bypass MFA, is a growing concern.

Blurred Lines Between Nation-State Actors and Cybercriminals

One of the most alarming trends is the blurred lines between nation-state actors and cybercriminals. Nation-state groups are increasingly enlisting cybercriminals to fund operations, carry out espionage, and attack critical infrastructure. This collusion has led to a surge in cyberattacks, with global cybercrime costs projected to reach $10.5 trillion annually by 2025.

The Role of Microsoft in Cyber Defense

Microsoft's unique vantage point, serving billions of customers globally, allows it to aggregate security data from a broad spectrum of companies, organizations, and consumers. The company has reassigned 34,000 full-time equivalent engineers to security initiatives, focusing on enhancing defenses and developing phishing-resistant MFA. Additionally, Microsoft collaborates with 15,000 partners with specialized security expertise to strengthen the security ecosystem.

Game Emulation: Keeping Classic Games Alive Despite Legal Hurdles

 For retro gaming fans, playing classic video games from decades past is a dream, but it’s tough to do legally. This is where game emulation comes in — a way to recreate old consoles in software, letting people play vintage games on modern devices. Despite opposition from big game companies, emulation developers put in years of work to make these games playable. 

Game emulators work by reading game files, called ROMs, and creating a digital version of the console they were designed for. Riley Testut, creator of the Delta emulator, says it’s like opening an image file: the ROM is the data, and the emulator brings it to life with visuals and sound. 

Testut and his team spent years refining Delta, even adding new features like online multiplayer for Nintendo DS games. Some consoles are easy to emulate, while others are a challenge. Older systems like the Game Boy are simpler, but emulating a PlayStation requires recreating multiple processors and intricate hardware functions. Developers use tools like OpenGL or Vulkan to help with complex 3D graphics, especially on mobile devices. 
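The fetch-decode-execute loop at the heart of an emulator can be sketched in a few lines. The toy interpreter below uses a hypothetical two-byte instruction set, loosely in the spirit of simple systems like the Game Boy mentioned above; the opcodes are invented purely for illustration, not taken from any real console:

```python
def run(rom: bytes, steps: int = 100):
    """Toy emulator core: fetch, decode, and execute 2-byte opcodes."""
    regs = [0] * 16   # general-purpose registers
    pc = 0            # program counter into the ROM

    for _ in range(steps):
        if pc + 1 >= len(rom):
            break
        # Fetch: read a 2-byte instruction (opcode, operand)
        op, arg = rom[pc], rom[pc + 1]
        pc += 2
        # Decode and execute (hypothetical opcodes for illustration)
        if op == 0x01:            # LOAD immediate into register 0
            regs[0] = arg
        elif op == 0x02:          # ADD immediate to register 0, 8-bit wraparound
            regs[0] = (regs[0] + arg) & 0xFF
        elif op == 0x00:          # HALT
            break
    return regs

# A tiny "ROM": load 5, add 3, halt
print(run(bytes([0x01, 0x05, 0x02, 0x03, 0x00, 0x00]))[0])  # → 8
```

A real emulator adds timing, memory mapping, graphics, and sound on top of this same loop, which is why older single-processor systems are so much easier to emulate than a multi-processor PlayStation.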

Emulators like Emudeck, popular on the Steam Deck, make it easy to access multiple games in one place. For those wanting an even more authentic experience, FPGA hardware emulation mimics old consoles precisely, though it’s costly. While game companies often frown on ROMs, some, like Xbox, use emulation to re-release classic games legally. 

However, legal questions remain, and complex licensing issues keep many games out of reach. Despite these challenges, emulation is thriving, driven by fans and developers who want to preserve gaming history. Though legal issues persist, emulation is vital for keeping classic games alive and accessible to new generations.

How to Protect Your Brand from Malvertising: Insights from the NCSC


Advertising is a key driver of revenue for many online platforms. However, it has also become a lucrative target for cybercriminals who exploit ad networks to distribute malicious software, a practice known as malvertising. The National Cyber Security Centre (NCSC) has been at the forefront of combating this growing threat, providing crucial guidance to help brands and advertising partners safeguard their campaigns and protect users.

What is Malvertising?

Malvertising refers to the use of online advertisements to spread malware. Unlike traditional phishing attacks, which typically rely on deceiving the user into clicking a malicious link, malvertising can compromise users simply by visiting a site where a malicious ad is displayed. This can lead to a range of cyber threats, including ransomware, data breaches, and financial theft.

The Scope of the Problem

The prevalence of malvertising is alarming. Cybercriminals leverage the vast reach of digital ads to target a large number of victims, often without their knowledge. According to NCSC, the complexity of the advertising ecosystem, which involves multiple intermediaries, exacerbates the issue. This makes identifying and blocking malicious ads challenging before they reach the end user.

Best Practices for Mitigating Malvertising

To combat malvertising, NCSC recommends adopting a defense-in-depth approach. Here are some best practices that organizations can implement:

  • Partnering with well-established and trusted ad networks can reduce the risk of encountering malicious ads. Reputable networks have stringent security measures and vetting processes in place.
  • Conducting regular security audits of ad campaigns can help identify and mitigate potential threats. This includes scanning for malicious code and ensuring that all ads comply with security standards.
  • Ad verification tools can monitor and block malicious ads in real-time. These tools use machine learning algorithms to detect suspicious activity and prevent ads from being displayed to users.
  • Educating users about the dangers of malvertising and encouraging them to report suspicious ads can help organizations identify and respond to threats more effectively.
  • Ensuring that websites are secure and free from vulnerabilities can prevent cybercriminals from exploiting them to distribute malvertising. This includes regularly updating software and using robust security protocols.
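As a minimal illustration of the verification idea above, a publisher-side check might screen the landing URLs of ad creatives against a blocklist and a few suspicious patterns before serving them. The domains and heuristics here are invented for the example; real ad-verification products are far more sophisticated:

```python
from urllib.parse import urlparse

# Hypothetical blocklist and heuristics, for demonstration only
BLOCKED_DOMAINS = {"bad-ads.example", "malware-cdn.example"}
SUSPICIOUS_SUBSTRINGS = ("data:text/html", "javascript:")

def is_ad_allowed(landing_url: str) -> bool:
    """Return True if the ad's landing URL passes basic screening."""
    if any(s in landing_url.lower() for s in SUSPICIOUS_SUBSTRINGS):
        return False
    host = urlparse(landing_url).hostname or ""
    # Reject exact matches and subdomains of blocklisted domains
    return not any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_ad_allowed("https://shop.example/sale"))          # → True
print(is_ad_allowed("https://x.bad-ads.example/payload"))  # → False
```

Production systems layer content scanning, sandboxed rendering, and machine learning on top of simple rules like these.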

Case Studies of Successful Mitigation

Several organizations have successfully implemented these best practices and seen significant reductions in malvertising incidents. For example, a major online retailer partnered with a top-tier ad network and implemented comprehensive ad verification tools. As a result, they were able to block over 90% of malicious ads before they reached their customers.

ZKP Emerged as the "Must-Have" Component of Blockchain Security

 

Zero-knowledge proof (ZKP) has emerged as a critical security component in Web3 and blockchain because it ensures data integrity and increases privacy. It accomplishes this by allowing verification without exposing data. ZKP is employed on cryptocurrency exchanges to validate transaction volumes or values while safeguarding the user's personal information.

In addition to ensuring privacy, it protects against fraud. Zero-knowledge cryptography, a class of algorithms that includes ZKP, enables complex interactions and strengthens blockchain security. Data is safeguarded from unauthorised access and modification while it moves through decentralised networks. 

Blockchain users are frequently asked to certify that they have sufficient funds to execute a transaction, but they may not want to disclose their entire balance. ZKP can verify that users meet the necessary requirements during KYC processes on cryptocurrency exchanges without requiring them to share their paperwork. Building on this, Holonym has introduced Human Keys to ensure security and privacy in Zero Trust situations. 

Each person is given a unique key that they can use to unlock their security and privacy rights. It strengthens individual rights through robust decentralised protocols and configurable privacy. The privacy-preserving principle applies to several elements of Web3 data security. ZKP involves complex cryptographic validations, and any effort to change the data invalidates the proof. 
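The principle of proving a claim without exposing the underlying secret can be illustrated with a toy Schnorr-style proof of knowledge, in which a prover convinces a verifier that it knows x satisfying y = g^x mod p without revealing x. The parameters below are deliberately small for illustration; real systems use vetted large groups or elliptic curves:

```python
import secrets

# Toy public parameters (illustration only; not production-grade)
p = 2**127 - 1   # a Mersenne prime
g = 3

# Prover's secret x and public value y = g^x mod p
x = secrets.randbelow(p - 2) + 1
y = pow(g, x, p)

# 1. Commit: prover picks random r and sends t = g^r mod p
r = secrets.randbelow(p - 2) + 1
t = pow(g, r, p)

# 2. Challenge: verifier sends a random c
c = secrets.randbelow(2**64)

# 3. Response: prover sends s = r + c*x, reduced mod (p - 1)
s = (r + c * x) % (p - 1)

# Verify: g^s == t * y^c (mod p) holds, yet x is never revealed
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof verified")
```

This also hints at why tampering breaks the proof, as noted above: any change to t, s, or the claimed y makes the verification equation fail.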

Trustless data processing eases smart contract developer work 

Smart contract developers currently work with their hands tied, limited to self-referential opcodes that cannot provide the information required to assess broader blockchain activity. In that light, the Space and Time platform's emphasis on enabling trustless, multichain data processing and strengthening smart contracts is worth mentioning, since it ultimately simplifies developers' work. 

Their SXT Chain, a ZKP data blockchain, is now live on testnet. It combines decentralised data storage with blockchain verification. Conventional blockchains focus on transactions; SXT Chain, by contrast, allows for advanced data querying and analysis while preserving data integrity through blockchain technology.

The flagship DeFi generation introduced yield farming and platforms like Aave and Uniswap. The new one includes tokenized real-world assets, blockchain lending with dynamic interest rates, cross-chain derivatives, and increasingly complicated financial products. 

To unlock Web3 use cases, a crypto-native, trustless query engine is required, one that enables more advanced DeFi by providing smart contracts with the necessary context. Space and Time is helping to offer one by building on Chainlink's aggregated data points with a SQL database, allowing smart contract authors to execute SQL processing over any part of Ethereum's history. 

Effective and fair regulatory model 

ZKP allows for selective disclosure, in which just the information that regulators require is revealed. Web3 projects comply with KYC and AML rules while protecting user privacy. ZKP even opens up the possibility of a tiered regulation mechanism based on existing privacy models. Observers can examine the ledger for unusual variations and report any suspect accounts or transactions to higher-level regulators. 

Higher-level regulators reveal particular transaction data. The process is supported by zero-knowledge SNARKs (Succinct Non-interactive Arguments of Knowledge) and attribute-based encryption. These techniques use ZKP to ensure consistency between transaction and regulatory information, preventing the use of fake information to escape monitoring. 

Additionally, ZK solutions let users withdraw funds in a matter of minutes, whereas optimistic rollups take approximately a week to finalise transactions and process withdrawals.

How OpenAI’s New AI Agents Are Shaping the Future of Coding

 


OpenAI is taking on the challenge of building the first truly powerful AI agents designed to revolutionise software development. These agents can interpret plain-language instructions and generate complex code, aiming to complete tasks that would otherwise take hours in only minutes. This is AI's biggest leap forward to date, promising a future in which developers can focus on creative work rather than repetition while coding.

Transforming Software Development

These AI agents represent a major change in how software is created and implemented. Beyond typical coding assistants, which suggest completions for individual lines, OpenAI's agents produce fully formed, functional code from scratch based on relatively simple user prompts. In principle, developers could work more efficiently, automating repetitive coding and focusing on innovation and more complicated problems. The agents are, in effect, advanced assistants capable of handling far more complex programming requirements than a typical human assistant could.


Competition from OpenAI with Anthropic

As OpenAI makes its moves, it faces stiff competition from Anthropic, an AI company that is growing rapidly. Having released some of the first AI models focused on advancing coding, Anthropic continues to push OpenAI to refine its agents further. This rivalry is more than a race between firms; it is driving rapid progress across the whole industry, because both companies are setting new standards for AI-powered coding tools. As they compete, developers and users alike stand to benefit from the high-quality, innovative tools the race produces.


Privacy and Security Issues

The AI agents also raise privacy and security issues. If these agents can gain access to user devices, concerns over data privacy naturally arise. Secure integration of the agents will require utmost care, because developers rely on the integrity of their systems. Balancing AI's powerful benefits with the necessary security measures will be a key determinant of their adoption. Careful planning will also be required to integrate these agents into current workflows without disrupting established standards and best practices in secure coding.


Changing Market and Skills Environment

OpenAI and Anthropic are leading many of the changes that will remake both the market and the skills required in software engineering. As AI becomes more central to coding, the industry will change and new kinds of jobs will emerge, requiring developers to adapt to new tools and technologies. Extensive reliance on AI in code creation will also invite fresh investment in the tech sector and accelerate the broader growth of the AI market.


The Future of AI in Coding

OpenAI's rapidly evolving AI agents mark the opening of a new chapter in the intersection of AI and software development, promising to make coding faster, more efficient, and accessible to a wider audience of developers. OpenAI's further development will continue to shape the field, presenting exciting opportunities alongside serious challenges that could change the face of software engineering in the foreseeable future.




UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI’s ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, showing some success despite moderate limitations. This discovery has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by emphasizing that their upcoming model, o1-preview, includes advanced safeguards to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.

The Growing Concern Regarding Privacy in Connected Cars

 

Data collection and use raise serious privacy concerns, even though they can improve driving safety, efficiency, and the whole experience. The automotive industry's ability to collect, analyse, and exchange such data outpaces the legislative frameworks intended to protect individuals. In numerous cases, car owners have no information or control over how their data is used, let alone how it is shared with third parties. 

The FIA European Bureau feels it is time to face these challenges straight on. As advocates for driver and car owners' rights, we are calling for clearer, more open policies that restore individuals' control over their data. This is why, in partnership with Privacy4Cars, we are hosting an event called "Driving Data Rights: Enhancing Privacy and Control in Connected Cars" on November 19th in Brussels. The event will bring together policymakers, industry executives, and civil society to explore current gaps in legislation and industry practices, as well as how we can secure enhanced data protection for all. 

Balancing innovation with privacy 

A recent Privacy4Cars report identifies alarming industry patterns, demonstrating that many organisations are not fully compliant with GDPR laws. Data transparency, security, and consent methods are often lacking, exposing consumers to data misuse. These findings highlight the critical need for reforms that give individuals more control over their data while ensuring that privacy is not sacrificed in the name of innovation.

The benefits of connected vehicle data are apparent. Data has the potential to transform the automotive industry in a variety of ways, including improved road safety, predictive maintenance, and enhanced driving experiences. However, this should not come at the expense of individual privacy rights. 

As the automobile sector evolves, authorities and industry stakeholders must strike the correct balance between innovation and privacy protection. Stronger enforcement of existing regulations, as well as the creation of new frameworks that suit the unique needs of connected vehicles, are required. Car owners should have a say in how their data is utilised and be confident that it is managed properly. 

Shaping the future of data privacy in cars 

The forthcoming event on November 19th will provide an opportunity to dig deeper into these concerns. Key stakeholders from the European Commission, the automotive industry, and privacy experts will meet to discuss the present legal landscape and what else can be done to protect individuals in this fast-changing environment. 

The agenda includes presentations from Privacy4Cars on the most recent findings on automotive privacy practices, a panel discussion with automotive industry experts, and case studies demonstrating real-world examples of data misuse and third-party access. 

Connected cars are the future of mobility, but that future must be founded on confidence and transparency. By giving individuals authority over their personal data, we can build a system that benefits everyone: drivers, manufacturers, and society as a whole. The FIA European Bureau is committed to collaborating with all parties to make this happen.

Crypto Bull Market Targeted: The Lottie-Player Security Breach


In an alarming development for the tech community, especially those immersed in the Web3 ecosystem, a supply chain attack has targeted the popular animation library Lottie-Player. Compromised versions present users with an unexpected wallet-connection prompt; if users approve it, attackers can drain their cryptocurrency wallets. 

Given Lottie-Player's impressive tally of over 4 million downloads and its significant presence on many prominent websites for animation embedding, this incident underscores the security vulnerabilities associated with open-source libraries.

Understanding the Attack

The breach initially came to light on GitHub when a user noticed an unusual Web3 wallet prompt while integrating Lottie-Player on their website. Upon closer examination, it was discovered that versions 2.0.5, 2.0.6, and 2.0.7 of Lottie-Player, released between 8:12 PM and 9:57 PM GMT on October 30, 2024, had been tampered with and compromised.

The attack involved the introduction of malicious code into three new versions of the Lottie-Player library, a widely used tool for rendering animations on websites and applications. Threat actors infiltrated the distribution chain, embedding code designed to steal cryptocurrencies from users' wallets. This method of attack is particularly insidious because it leverages the trust developers place in the libraries they use.

The Broader Implications

Once the compromised versions were released, they were integrated into numerous high-profile projects, unknowingly exposing countless users to the threat. The malicious code activated during transactions, redirecting funds to wallets controlled by the attackers. In one notable case, a user reportedly lost 10 Bitcoin (BTC), worth hundreds of thousands of dollars, to a phishing transaction triggered by the malicious script.

Following the discovery of the attack, the Lottie-Player team swiftly released a clean version, 2.0.8, which developers can use to replace the compromised files. To further contain the breach and limit exposure, versions 2.0.5 through 2.0.7 were promptly removed from npm and CDN providers like unpkg and jsdelivr.
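For teams responding to an incident like this, a quick way to check exposure is to scan the project's npm lockfile for the compromised versions. The sketch below assumes an npm v2/v3-style package-lock.json with a top-level "packages" map; adapt it to your own lockfile format:

```python
import json

# Known-compromised releases (from the advisory above)
COMPROMISED = {"@lottiefiles/lottie-player": {"2.0.5", "2.0.6", "2.0.7"}}

def find_compromised(lock: dict) -> list[str]:
    """Return 'name@version' strings for any compromised packages in the lockfile."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Lockfile keys look like "node_modules/<name>"; "" is the root package
        name = path.split("node_modules/")[-1] if path else lock.get("name", "")
        if meta.get("version") in COMPROMISED.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return hits

# Example: a minimal lockfile fragment containing an affected version
lock = {"packages": {"node_modules/@lottiefiles/lottie-player": {"version": "2.0.6"}}}
print(find_compromised(lock))  # → ['@lottiefiles/lottie-player@2.0.6']
```

In practice you would load the real file with `json.load(open("package-lock.json"))` and upgrade any hits to 2.0.8 or later.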

Moving Forward

The attack occurred during a pivotal phase of the crypto bull market, intensifying efforts to steal increasingly valuable tokens. To mitigate risks, it's advisable to connect a wallet only for specific purposes rather than granting full-time permissions for signing transactions. Additionally, being prompted to connect a wallet immediately upon entering a website can serve as a potential warning sign.

LightSpy Update Expands Surveillance on iOS Devices

 


It has been discovered that a newer version of LightSpy spyware, commonly used to target iOS devices, has been enhanced with the capability to compromise the security and stability of the device. LightSpy for macOS was first discovered by ThreatFabric, which published a report in May 2024 in which they described their findings with the malware. 

After a thorough investigation of the LightSpy client and server systems, the analysts discovered that the same server was used to manage both the macOS and iOS versions of the program. iPhones are undeniably more secure than Android devices; however, Google has been making constant efforts to close the gap, and Apple devices are not immune to attacks. 

Apple now regularly alerts consumers when the company detects an attack, and a recently released cyber report warns that iPhones are under attack from hackers equipped with enhanced cyber tools; rebooting an Apple device regularly also remains good practice for Apple device owners. LightSpy is a program that many users are familiar with, as several security firms have reported identifying this spyware on multiple occasions. 

The spyware attacks iOS, macOS, and Android devices alike. In any case, it has resurfaced in the headlines, and ThreatFabric reports that it has been improved considerably. Among other things, the toolset has grown from 12 to 28 plugins; notably, seven of these plugins are destructive, allowing them to adversely interfere with the device's boot process. The malware is being distributed through attack chains that use known security flaws in Apple iOS and macOS to trigger a WebKit exploit. 

This exploit drops a file with a ".PNG" extension that is, in fact, a Mach-O binary exploiting a memory corruption flaw known as CVE-2020-3837 to retrieve next-stage payloads from a remote server. LightSpy ships with a component called FrameworkLoader, which in turn downloads the application's main Core module and the available plugins. 

The Dutch security company reports that after the Core starts up, it performs an Internet connectivity check using Baidu.com domains, then compares the arguments passed from FrameworkLoader to determine the command-and-control data and the working directory. The Core then creates subfolders for log files, databases, and exfiltrated data under the working directory path /var/containers/Bundle/AppleAppLit/. 

These plugins can collect a wide range of data, including Wi-Fi information, screenshots, location, the iCloud Keychain, sound recordings, photos, browser history, contacts, call history, and SMS messages. They can also gather information from apps such as Files, LINE, Mail Master, Telegram, Tencent QQ, WeChat, and WhatsApp. Notably, some of the recent additions to the plugin set are destructive, capable of erasing contacts, media files, SMS messages, Wi-Fi settings profiles, and browsing history. 

In some cases, these plugins are even capable of freezing the device and preventing it from starting up again. Some LightSpy plugins can also create phony push notifications with an embedded URL. Upon analyzing the C2 logs, researchers found that 15 devices were infected, eight of which were iOS devices. 

Researchers suspect that much of this activity originates from China or Hong Kong: the infected devices frequently connect to a special Wi-Fi network called Haso_618_5G, which resembles a test network. ThreatFabric's investigation also found that LightSpy contains a unique plugin for recalculating location data specific to Chinese systems, further suggesting that the spyware's developers may be based in China. 

LightSpy's operators rely heavily on "one-day exploits," taking advantage of vulnerabilities as soon as they become public information. Following ThreatFabric's recommendation, iOS users are advised to reboot their devices regularly: because LightSpy relies on a "rootless jailbreak," it cannot survive a reboot, giving users a simple but effective means to disrupt persistent spyware infections on their devices. 

As the researchers say, "The LightSpy iOS case illustrates the importance of keeping system updates current," and they advise users to do just that. The threat actors behind LightSpy monitor security researchers' publications closely, using recently reported exploits to deliver payloads and escalate their privileges on affected devices. Most likely, infection takes place through lures that lead the intended victim groups to infected websites, i.e. so-called watering holes. 

For users concerned about potential vulnerability to such attacks, ThreatFabric advises a regular reboot if their iOS is not up-to-date. Although rebooting will not prevent the spyware from re-infecting the device, it can reduce the amount of data attackers can extract. Keeping the device restarted regularly provides an additional layer of defence by temporarily disrupting spyware's ability to persistently gather sensitive information.

Tech Expert Warns AI Could Surpass Humans in Cyber Attacks by 2030

 

Jacob Steinhardt, an assistant professor at the University of California, Berkeley, shared insights at a recent event in Toronto, Canada, hosted by the Global Risk Institute. During his keynote, Steinhardt, an expert in electrical engineering, computer science, and statistics, discussed the advancing capabilities of artificial intelligence in cybersecurity.

Steinhardt predicts that by the end of this decade, AI could surpass human abilities in executing cyber attacks. He believes that AI systems will eventually develop "superhuman" skills in coding and finding vulnerabilities within software.

Vulnerabilities, or weak spots in software and hardware, are commonly exploited by cybercriminals to gain unauthorized access to systems. Once these access points are found, attackers can execute ransomware attacks, locking out users or encrypting sensitive data in exchange for a ransom. 

Traditionally, identifying these exploits requires painstakingly reviewing lines of code — a task that most humans find tedious. Steinhardt points out that AI, unlike humans, does not tire, making it particularly suited to the repetitive process of exploit discovery, which it could perform with remarkable accuracy.

Steinhardt’s talk comes amid rising cybercrime concerns. A 2023 report by EY Canada indicated that 80% of surveyed Canadian businesses experienced at least 25 cybersecurity incidents within the year. While AI holds promise as a defensive tool, Steinhardt warns that it could also be exploited for malicious purposes.

One example he cited is the misuse of AI in creating "deep fakes": digitally manipulated images, videos, or audio used for deception. These fakes have been used to scam individuals and businesses by impersonating trusted figures, leading to costly fraud incidents, including a recent case involving a British company tricked into sending $25 million to fraudsters.

In closing, Steinhardt reflected on AI’s potential risks and rewards, calling himself a "worried optimist." He estimated a 10% chance that AI could lead to human extinction, balanced by a 50% chance it could drive substantial economic growth and "radical prosperity."

The talk wrapped up the Hinton Lectures in Toronto, a two-evening series inaugurated by AI pioneer Geoffrey Hinton, who introduced Steinhardt as the ideal speaker for the event.