
Europe Pushes to Reduce Dependence on U.S. Tech as Sovereign Digital Infrastructure Gains Momentum

 




Several European governments are trying to reduce their dependence on American software, cloud platforms, and digital infrastructure as debates around data control, political influence, and technological independence become more intense across the region.

The situation has exposed contradictions in Europe’s relationship with U.S. technology companies. Microsoft chief executive Satya Nadella has largely stayed away from the kind of political messaging often associated with Alex Karp. Despite this difference, France has started moving parts of its public systems away from Microsoft Windows while simultaneously renewing contracts linked to Palantir Technologies through its domestic intelligence agency.

This complicated approach shows how Europe is attempting to distance itself from American tech firms without fully breaking away from them. Many governments now believe that relying too heavily on foreign technology companies can also mean depending on foreign laws, political priorities, and corporate influence. Still, Europe’s response has not followed one common strategy, with many actions appearing fragmented or reactive.

Much of the debate intensified after the U.S. passed the CLOUD Act in 2018 during President Donald Trump’s first term. The law gives American authorities the ability to request data from U.S.-based technology companies even if that information is stored outside the United States. For European officials, this raised concerns that storing data inside Europe may no longer be enough to fully protect sensitive information from foreign legal access.

Healthcare data quickly became one of the strongest examples used in these discussions. Medical records are considered among the most sensitive forms of information governments hold because they contain deeply personal details tied to citizens. Even after the CLOUD Act came into force, the United Kingdom partnered with companies including Google, Microsoft, and Palantir Technologies during the COVID-19 pandemic for projects involving National Health Service data.

Critics have argued that such partnerships could expose public-sector information to outside influence. France later decided that its Health Data Hub would stop using Microsoft Azure infrastructure and move toward what officials described as a sovereign cloud model. The contract was awarded to Scaleway, a cloud provider owned by French telecommunications group Iliad. Scaleway has also been expanding its network of data centers across Europe.

Scaleway later became one of four companies selected in a €180 million sovereign cloud contract backed by the European Commission. The program is intended to support cloud services that operate under European legal and regulatory standards. Notably, the European Sovereign Cloud initiative launched by Amazon Web Services was not included among the selected providers, even though Amazon created the project to answer European concerns about digital sovereignty.

Questions have also emerged around whether some so-called sovereign alternatives remain partly tied to American technology companies underneath. Some observers pointed to S3NS, a joint venture involving French defense company Thales Group and Google Cloud. Critics worry that arrangements like these could still leave room for indirect U.S. access or legal exposure despite being promoted as trusted European solutions.

Europe has faced similar problems in the search engine market. French search company Qwant was previously recommended for public servants in France while relying on Microsoft Bing’s underlying search infrastructure. The relationship later deteriorated after Qwant accused Microsoft of taking advantage of its dominant position in the market. Although French regulators declined to act against Microsoft, Qwant eventually started searching for alternatives on its own.

Qwant later partnered with German nonprofit search platform Ecosia to launch Staan, a Europe-based search index designed to reduce reliance on Google and Bing technologies. The project focuses on privacy and regional control over search infrastructure. Even so, both companies remain far smaller than their American competitors. Ecosia, despite having around 20 million users, still operates on a completely different scale compared to Google’s global user base.

One of the biggest problems facing European technology firms is market dominance from American companies. U.S. providers continue to control large parts of cloud computing, enterprise software, internet search, and artificial intelligence markets because of their global infrastructure, financial resources, and established ecosystems. European officials hope that large public-sector contracts could help regional providers compete more effectively.

Besides Scaleway, the European Commission’s sovereign cloud program also selected French companies Clever Cloud and OVHcloud, along with STACKIT. STACKIT was developed by the Schwarz Group, the parent company of Lidl, originally for its own internal systems before later being turned into a commercial cloud service.

Supporters of the initiative believe government-backed contracts could encourage more European companies to invest in domestic infrastructure instead of depending on foreign cloud providers. Backers of the program have also said the project aims to encourage digital solutions that align with European laws, governance rules, and privacy standards.

Still, Europe’s strategy of distributing contracts across several companies may create another challenge. While diversification could reduce dependence on one dominant provider and improve resilience, it may also make it harder for Europe to build a single technology giant capable of competing globally with firms such as Microsoft, Amazon, or Google.

Some critics also view sovereign tech partly as an economic strategy meant to keep European spending within the region. However, Europe’s attempts to move away from U.S. technology have not always translated into direct support for startups. In several cases, governments have instead turned toward open-source software alternatives.

France has already started replacing parts of its Windows-based systems with Linux. Public institutions in Germany, Denmark, Austria, and Italy are also exploring alternatives to Microsoft’s office software products through platforms such as LibreOffice.

Several governments have also embraced a “build instead of buy” approach by creating internal software tools. That strategy has faced criticism from parts of the technology and financial sectors. France’s Court of Auditors reportedly questioned spending linked to Visio, an internally developed platform intended to act as an alternative to Zoom and Microsoft Teams.

French newspaper Les Echos also reported frustration from parts of the country’s technology sector. Some critics argued that if governments themselves do not consistently adopt domestic technology tools, it becomes difficult to convince large private companies to do the same.

Many large European businesses continue to select American technology providers when those firms offer stronger technical or commercial advantages. German airline Lufthansa chose Starlink for onboard internet services. Air France also selected Starlink despite partial ownership ties to the French and Dutch governments. Reports have additionally suggested that France’s national railway operator SNCF may eventually adopt similar services.

The debate around European alternatives has become particularly visible in satellite communications. During a disagreement involving Poland, Elon Musk stated publicly that “there is no substitute for Starlink.” European governments are now trying to prove otherwise by investing in domestic telecommunications and space infrastructure projects.

Public sentiment has also started influencing the discussion. After President Trump threatened to take control of Greenland, applications encouraging consumers to boycott American products surged in popularity on Denmark’s App Store rankings. The reaction showed that calls to reduce dependence on U.S. companies are no longer limited to policymakers and regulators.

Pressure is also building on European governments to reconsider contracts involving controversial American firms. Palantir’s recent public messaging and political positioning have drawn criticism inside parts of the European Union and the United Kingdom. At the same time, many European officials and citizens have started distancing themselves from X, formerly Twitter, because of growing dissatisfaction around platform governance and political discourse.

American technology companies have also shown that Europe is not always their top commercial priority. When Meta delayed the European release of Threads because of regulatory concerns tied to EU laws, it reinforced the perception that large U.S. firms can afford to deprioritize the region when legal requirements become too restrictive.

At the same time, this environment is opening new opportunities for companies building products specifically designed for European markets, languages, and legal standards. Supporters of the EuroStack initiative are pushing for rules that would encourage or require public institutions to purchase locally developed technology whenever possible.

Backers of sovereign tech also hope European companies can eventually compete internationally rather than only within domestic markets. French artificial intelligence company Mistral AI has reportedly experienced strong revenue growth as some businesses search for alternatives to OpenAI. Meanwhile, the governments of Canada and Germany are supporting cooperation between Cohere and Aleph Alpha to create what supporters describe as a transatlantic AI platform for governments and businesses.

As geopolitical tensions continue reshaping the global technology industry, some companies are discovering that not being American, Chinese, or Russian is itself becoming a commercial advantage in international markets.

Financial Services Must Prepare for Attacks Originating Inside the Cloud



With the increase in adoption of cloud-based infrastructure, digital banking ecosystems, and interconnected transaction platforms, cybersecurity has evolved from a regulatory requirement to a critical element of operational resilience. 

Payment service providers, banks, insurance companies, and investment firms now process massive amounts of sensitive financial data and transactions across increasingly complex environments, which makes them persistent targets for sophisticated cyber-adversaries. Financial cybersecurity encompasses the protection of internal networks, cloud workloads, customer records, mobile banking systems, and critical transaction pipelines against unauthorised access, fraud, and data compromise.

A comprehensive financial cybersecurity strategy today goes far beyond perimeter defence. As threats evolve, preserving the confidentiality, integrity, and availability of financial systems becomes increasingly important not only to prevent cyberattacks and financial losses, but also to maintain institutional trust, regulatory compliance, and overall financial system stability.

The growing reliance on cloud-native applications and distributed financial platforms is expanding the attack surface for threat actors targeting the financial sector. According to Cristian Rodriguez, CrowdStrike Field CTO for the Americas, the rising frequency of cloud-based intrusions is directly linked to the rapid migration of financial workloads and services to cloud environments.

By leveraging stolen credentials and compromised digital identities, attackers have bypassed traditional exploitation techniques altogether in many observed incidents. The ability to move discreetly across environments allows adversaries to exfiltrate data, deploy malware, and run ransomware operations at a large scale, as well as abuse cloud infrastructure to perform command and control functions. 

Based on CrowdStrike's 2025 Threat Hunting Report, intrusions targeting the financial sector increased by 26 percent during 2024, with a significant portion tied to credentials acquired through cybercriminal marketplaces run by access brokers. Nation-state activity targeting financial institutions also rose by almost 80 percent, reflecting the growing geopolitical and economic motivations behind these attacks.

Threat groups are increasingly focused on obtaining intelligence about mergers, acquisitions, investment movements, and broader market trends, using stolen financial data to support strategic influence operations and economic espionage.

The threat actor Genesis Panda was observed in these operations, demonstrating the continued involvement of advanced state-aligned cyber groups in financially driven attacks. With the financial sector's digital footprint expanding rapidly, cybersecurity has evolved from a technical safeguard into a critical business necessity. The sector is increasingly targeted by cybercriminals because of the vast amounts of sensitive customer information, financial credentials, and transaction records it manages.

Organizations are strengthening their defences through encryption, network segmentation, multi-factor authentication, endpoint protection, and continuous threat monitoring. Beyond data theft, cyber incidents expose institutions to fraud, ransomware, regulatory penalties, operational disruption, and reputational damage.
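As a rough illustration of one of the controls listed above (not drawn from the article itself), a minimal time-based one-time password (TOTP) check of the kind behind many multi-factor authentication prompts can be sketched with only the Python standard library, following RFC 6238 and RFC 4226:

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte big-endian counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset taken from the last byte
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)


def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    return hotp(secret, unix_time // step, digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59 -> "287082"
print(totp(b"12345678901234567890", 59))
```

In production, real deployments add constant-time comparison and a tolerance window of adjacent time steps; this sketch only shows the core derivation.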

Increasingly sophisticated attacks have made technologies such as intrusion detection systems, malware defence, and real-time incident response critical to reducing financial and operational risks. Beyond maintaining consumer trust, cybersecurity also underpins regulatory compliance with financial standards.

Several frameworks, including the Bank Secrecy Act, the Dodd-Frank Act, the Sarbanes-Oxley Act, and PCI DSS, require strict controls over access management, data protection, and network security throughout financial environments. As threat groups become more sophisticated, institutions' vulnerabilities are becoming more apparent across hybrid cloud environments, particularly where cloud control planes interact with legacy on-premises infrastructure.

The threat actor Genesis Panda has demonstrated a deep understanding of cloud architectures, regularly exploiting configuration errors and identity vulnerabilities that arise when distributed IT systems are integrated. To keep abreast of evolving threat actors, attack indicators, and emerging configuration risks, financial institutions need to maintain constant engagement with cybersecurity vendors and intelligence providers.

According to Matt Immler, Okta's Regional Chief Security Officer for the Americas, security teams cannot afford complacency as cloud ecosystems grow increasingly complex, and proactive vendor collaboration is essential to maintaining defensive readiness. For nearly two years, Okta's Threat Intelligence Team has provided financial organizations with insights into active cyber campaigns and attack tactics through quarterly intelligence briefings.

This data-driven approach has proven beneficial to organizations such as Nasdaq, where security teams have been able to stay on top of rapidly evolving threats within the sector, according to Immler. The briefings have also highlighted the increasing activity of groups such as Scattered Spider, which exploit human weaknesses to gain unauthorized access to enterprise systems by manipulating help desks and identity recovery processes.

CrowdStrike’s Cristian Rodriguez observed that zero-trust security frameworks, traditionally applied to identity and endpoint protection, need to be extended to cloud workloads and operational infrastructure to prevent lateral movement by attackers. Meanwhile, destructive wiper malware remains a major concern across many sectors.

These attacks are intended to permanently destroy data and render systems inoperable, and the state-backed actors behind them, particularly those linked to China, often use stealth-focused tactics that make them especially difficult to detect. Immler noted that adversaries of this type tend to prioritize long-term persistence, quietly embedding themselves in target environments and remaining undetected for extended periods before unleashing disruptive payloads.

As a result, organizations often struggle to determine the true depth of compromise within financial networks, reinforcing the importance of continuous monitoring, integrated threat intelligence, and resilient cloud security architectures.

Credential Theft Continues to Dominate Financial Attacks 

Financial institutions are experiencing a significant increase in credential-driven intrusions due to sophisticated, targeted phishing campaigns. Threat actors now use a variety of methods to bypass multi-factor authentication, including adversary-in-the-middle attacks and QR-code phishing operations capable of fooling even experienced employees.

As of mid-2025, Darktrace observed nearly 2.4 million phishing emails across financial sector environments, with almost 30% targeting VIPs and high-privilege users, a reflection of the growing importance of identity compromise as an initial method of access. 

Data Loss Prevention Risks Are Expanding

Organizations struggling to safeguard sensitive information face growing confidentiality and regulatory exposure. In October 2025, Darktrace identified more than 214,000 emails with unfamiliar attachments sent to suspected personal accounts within the financial sector, along with 351,000 emails carrying unfamiliar files forwarded to freemail services such as Gmail, Yahoo, and iCloud, reinforcing concerns about data leakage, insider risk, and compliance failures involving sensitive financial records and internal communications.

Ransomware Operations Are Becoming More Destructive 

Most modern ransomware groups now prioritize data theft and extortion before attempting encryption. Cybercriminal groups including Cl0p and RansomHub have targeted trusted file-transfer platforms used by financial institutions to exfiltrate sensitive information and exert reputational and regulatory pressure. Darktrace research observed attacks against Fortra GoAnywhere MFT several days before the related vulnerability was publicly disclosed, showing how attackers exploit flaws before traditional patching cycles can respond.

Edge Infrastructure Has Become a Primary Target 

With VPNs, firewalls, and remote access gateways under growing threat, researchers have observed pre-disclosure exploitation campaigns affecting Citrix, Palo Alto, and Ivanti technologies, allowing attackers to hijack sessions, harvest credentials, and move laterally into critical banking environments. VPN infrastructure is increasingly described as a concentrated attack surface, particularly where patching delays and weak segmentation give attackers the opportunity to compromise systems more deeply.

State-Backed Threat Activity Is Intensifying 

State-sponsored campaigns linked to North Korean actors affiliated with the Lazarus Group reportedly continue to expand across cryptocurrency and fintech organizations. According to investigators, malicious NPM packages, BeaverTail and InvisibleFerret malware, and exploitation of React2Shell vulnerabilities were used to facilitate credential theft and persistent access. Organizations throughout Europe, Africa, the Middle East, and Latin America have been affected, demonstrating the global scope of these financially motivated cyber operations.

Cloud and AI Governance Challenges Are Growing 

Financial sector CISOs increasingly view cloud complexity, insider exposure, and uncontrolled AI adoption as systemic security risks. Keeping visibility across distributed, multi-cloud environments while preventing sensitive information from being exposed through emerging artificial intelligence tools has become increasingly challenging. With the rapid integration of AI-driven technologies into operations, governance, compliance oversight, and cloud security resilience are becoming board-level cybersecurity priorities rather than merely technical concerns.

Building Long-Term Cyber Resilience 

Due to the increasing sophistication of cyber threats, financial institutions are adopting resilient security strategies to strengthen cloud, identity, and data protection. Organizations are increasingly deploying AI-powered cybersecurity tools across cloud and endpoint environments to enhance threat detection, automate security operations, and expedite incident response.

Meanwhile, financial firms are increasingly relying on third-party platforms, APIs, and connected services, which require stronger identity and access management controls. Many institutions are also turning to managed security services to enhance operational readiness and address resource and expertise gaps.

A number of industry leaders emphasize that data protection is not simply a compliance obligation but a fundamental business risk, putting greater emphasis on enterprise-wide governance, risk classification, and ownership of sensitive financial information. In an increasingly volatile cyber landscape, financial institutions are shifting their focus from reactive defenses to long-term operational resilience.

Cloud expansion, identity-driven attacks, ransomware evolution, and AI-related governance risks have all elevated cybersecurity to a strategic business priority rather than an IT function alone. To maintain resilience, experts warn, institutions will need continuous threat intelligence collaboration, stronger identity security frameworks, proactive cloud governance, and incident response capabilities able to keep pace with rapidly changing attack patterns.

With attackers increasingly exploiting trust, misconfigurations, and human vulnerabilities, securing critical infrastructure, sensitive data, and digital operations will be critical to preserving institutional stability, regulatory confidence, and customer trust.

Anthropic Probes Alleged Unauthorized Access to Powerful Claude Mythos AI Cybersecurity Model

 

Anthropic is examining claims that a limited number of individuals may have gained unauthorized access to its highly advanced Claude Mythos AI model, a cybersecurity-focused system the company considers too sensitive for public release.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement.

The investigation follows a Bloomberg report alleging that users on a private online forum were able to interact with the model without receiving official authorization.

The Claude Mythos model has attracted significant attention due to its reported ability to identify and exploit security vulnerabilities at scale. While concerns continue to grow around the risks associated with powerful AI systems, some officials believe such tools could ultimately improve cybersecurity if managed responsibly.

Anthropic clarified that there is currently no evidence suggesting its own systems were compromised or that malicious actors have taken control of the model. However, the incident has renewed concerns about whether major AI firms can effectively safeguard advanced frontier AI technologies from unauthorized access.

Cybersecurity experts suggest the issue may not have resulted from a traditional hacking attack. According to Raluca Saceanu, chief executive of cybersecurity firm Smarttech247, the incident was "most likely through misuse of access rather than a classic hack."

Anthropic has reportedly provided select technology and financial organizations with access to the Mythos model to help strengthen their cybersecurity defenses. However, such partnerships rely heavily on third-party organizations maintaining strict internal access controls.

According to Bloomberg, the individual linked to the access claim may have already possessed permission to view Anthropic’s AI systems through work connected to a third-party contractor. The report further stated that the group continued using the model after obtaining access, although they allegedly avoided using it for offensive hacking activities to remain undetected.

"When powerful AI tools are accessed or used outside their intended controls, the risk is not just a security incident but the spread of capabilities that could be used for fraud, cyber abuse, or other malicious activity," Saceanu said.

Meanwhile, UK cybersecurity officials continue to stress both the risks and opportunities presented by advanced AI systems. Speaking at the CyberUK conference, National Cyber Security Centre (NCSC) chief Richard Horne highlighted how frontier AI technologies are rapidly changing the cybersecurity landscape.

"As we have seen in the media in recent days, frontier AI is rapidly enabling discovery and exploitation of existing vulnerabilities at scale, illustrating how quickly it will expose where fundamentals of cyber-security are still to be addressed," he said.

Horne encouraged organizations not to panic over emerging AI-driven threats but instead focus on strengthening basic cybersecurity practices such as software updates and modernizing outdated IT systems.

During the same event, UK Security Minister Dan Jarvis urged closer collaboration between governments and AI developers to ensure advanced AI technologies are used to protect critical infrastructure and national networks.

Most frontier AI systems are currently being developed by companies based in the United States and China, leaving countries like the UK dependent on foreign firms for access to cutting-edge cybersecurity tools such as Mythos.

The growing role of AI in cybersecurity comes amid rising concerns over cyber warfare and digital attacks linked to nation-state actors, particularly Russia and China. The NCSC has increasingly described cyberspace as the “home front” of modern defense, emphasizing the expanding role of cyber operations in global conflicts.

France’s Break From Microsoft Signals Europe’s Growing Push for Digital Sovereignty


In a move that reflects Europe’s deepening concerns over data sovereignty and foreign technological dependence, France has decided to move its national Health Data Hub away from Microsoft's cloud infrastructure and into the hands of domestic provider Scaleway. The decision marks one of the most significant shifts yet in Europe’s growing effort to reclaim control over sensitive public data. 
 
The Health Data Hub contains medical information relating to millions of French citizens and serves as a major research platform for healthcare analysis and innovation. Since 2019, the system had been hosted on Microsoft Azure, a decision that triggered years of political and legal controversy due to fears surrounding American surveillance laws and extraterritorial access to European data.   
 
French authorities have now selected Scaleway, a subsidiary of Iliad, after an extensive evaluation involving more than 350 technical criteria related to security, resilience, and operational capacity. The migration is expected to be completed between late 2026 and early 2027.   
 

Why Europe Is Growing Wary of American Cloud Giants 

 
The decision is part of a much broader European movement toward what policymakers increasingly describe as “digital sovereignty.” Governments across Europe have become increasingly uneasy about relying on American technology firms for critical infrastructure, especially after repeated debates surrounding the US CLOUD Act, which can compel US companies to provide data to American authorities even if that data is stored overseas.  
 
In France, these concerns intensified after Microsoft reportedly acknowledged before a French Senate inquiry that it could not fully resist certain US government data requests involving French citizens. That revelation significantly strengthened calls for sovereign cloud infrastructure controlled entirely within European legal jurisdiction. The shift also aligns with France’s wider technological repositioning. Earlier this year, the country announced plans to reduce reliance on Microsoft products across government systems, replacing several US-based platforms with domestic or open source alternatives.   
 

A Defining Moment for Europe’s Tech Independence 

 
France’s decision extends beyond healthcare infrastructure: it represents a symbolic turning point in Europe’s evolving relationship with Big Tech.
 
For years, European nations depended heavily on American cloud providers because of their scale, maturity, and technological dominance. But growing geopolitical tensions, concerns around privacy, and the strategic importance of data have begun reshaping that equation. 
 
By transferring one of its most sensitive national databases to a domestic provider, France is effectively signalling that technological convenience can no longer outweigh sovereignty concerns. The move may now encourage other European governments to reassess where their own critical data resides. 
 
At its core, this is no longer simply a cloud migration story. It is a declaration that, in the age of AI and mass data infrastructure, control over information has become inseparable from national security itself.

Mullvad Introduces Optional Fix for iOS VPN Leak Risks While Avoiding Update Glitches

 

Apple’s iOS ecosystem continues to pose distinct challenges for VPN services, particularly due to potential data leaks affecting certain types of traffic. On Tuesday, Mullvad VPN—widely recognized for its strong privacy standards—announced a new solution aimed at addressing this issue. However, the company is allowing users to decide whether to enable it, as the fix may complicate the iOS update process.

Security concerns on iOS include vulnerabilities to leaks and LocalNet attacks, in which attackers imitate trusted nearby Wi-Fi networks, such as those found in cafes. While VPNs can mitigate these risks, doing so requires routing all app data through the VPN. Mullvad’s approach involves enabling an “includeAllNetworks” configuration to enforce this behavior.

Although Mullvad has long been aware of this method, it previously avoided implementing it due to compatibility issues with Apple’s update system. In some cases, this setup could trigger a loop where iOS repeatedly attempts to update the Mullvad app, potentially causing devices to freeze, restart, and retry the update continuously.

The company has now introduced a new setting that activates includeAllNetworks, effectively addressing the leak vulnerability. To prevent update-related issues, Mullvad has made the feature optional and added a safeguard mechanism. When an iOS update is detected, users will receive a notification advising them to temporarily disable the VPN or switch off the includeAllNetworks setting to avoid complications. A representative from Mullvad didn't immediately respond to a request for comment.

Details about the rollout of this feature remain unclear, but Mullvad indicates it will be available soon. The company also cautions that the workaround is not flawless and encourages iOS users to report any instances of device freezing or bricking during updates directly to Apple.

For users exploring VPN options, Mullvad continues to stand out for its focus on advanced privacy measures. The service has incorporated post-quantum encryption to safeguard against future quantum-based threats and has implemented protections against AI-driven traffic analysis. Priced at $5 per month, it remains an affordable choice for privacy-conscious users.

AI Models Surpass Doctors in Emergency Diagnosis, Harvard Study Finds

 
A new study conducted by researchers at Harvard University has found that advanced artificial intelligence systems are now capable of exceeding human doctors in both diagnosing medical conditions and determining treatment strategies, including in fast-paced and high-stakes emergency room environments. The research highlights the ability of modern AI systems to handle complex clinical reasoning tasks that were traditionally considered exclusive to trained physicians.

The findings, published in the peer-reviewed journal Science, are based on a controlled comparison between OpenAI o1 and experienced attending physicians. To ensure realistic testing conditions, the study used 76 actual emergency department cases sourced from Beth Israel Deaconess Medical Center. These cases were evaluated across multiple stages of the diagnostic process, allowing researchers to assess performance under varying levels of available patient information.

At the earliest stage of patient assessment, commonly referred to as initial triage, where clinicians typically have only limited details about a patient’s condition, the AI model demonstrated a notable advantage. It was able to correctly identify either the exact diagnosis or a closely related condition in 67.1 percent of the cases. In comparison, the two physicians involved in the study achieved accuracy rates of 55.3 percent and 50 percent respectively. This suggests that even with minimal data, the AI system was more effective at narrowing down potential diagnoses.

As the diagnostic process progressed and additional clinical information became available during the emergency room evaluation phase, the model’s performance improved further. Its diagnostic accuracy increased to 72.4 percent, reflecting its ability to refine its conclusions with more context. The physicians also showed improvement at this stage, but their accuracy remained lower, at 61.8 percent and 52.6 percent. This stage is particularly important as it mirrors real-world conditions where doctors continuously update their assessments based on new findings.

In the final phase of care, when patients were admitted either to general hospital wards or intensive care units, the AI model continued to outperform its human counterparts. It achieved an accuracy rate of 81.6 percent, compared to 78.9 percent and 69.7 percent for the physicians. Although the performance gap narrowed slightly at this stage, the AI still maintained a measurable edge, indicating consistency across the full diagnostic timeline.
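Because the study used a fixed sample of 76 cases, each reported percentage corresponds to a whole-case count, which makes the figures easy to sanity-check. The per-stage counts below are my reconstruction from the rounded percentages, not numbers stated in the paper:

```python
# Sanity-check the reported accuracy percentages against the 76-case sample.
# The case counts are inferred from the rounded percentages (a hypothetical
# reconstruction, not taken from the study itself).
CASES = 76

stages = {
    # stage: (AI correct, physician A correct, physician B correct)
    "initial triage": (51, 42, 38),  # 67.1%, 55.3%, 50.0%
    "ER evaluation":  (55, 47, 40),  # 72.4%, 61.8%, 52.6%
    "admission":      (62, 60, 53),  # 81.6%, 78.9%, 69.7%
}

def pct(n, total=CASES):
    """Accuracy as a percentage, rounded to one decimal place."""
    return round(100 * n / total, 1)

for stage, (ai, doc_a, doc_b) in stages.items():
    print(f"{stage}: AI {pct(ai)}%, physicians {pct(doc_a)}% / {pct(doc_b)}%")
```

Every reported figure is consistent with an integer number of correct cases out of 76, which is a useful sign that the percentages were transcribed faithfully.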

Beyond identifying illnesses, the study also evaluated how effectively the AI system could design clinical management plans. This included decisions such as selecting appropriate medications, including antibiotics, as well as handling complex and sensitive scenarios like end-of-life care planning. Across five evaluated case studies, the AI achieved a median performance score of 89 percent. In contrast, physicians scored significantly lower, averaging 34 percent when relying on traditional clinical resources and 41 percent when supported by GPT-4. This underlines a substantial gap in structured decision-making support.

The researchers acknowledged that while integrating AI into clinical workflows is often viewed as a high-risk approach due to patient safety concerns, its potential benefits are significant. They noted that wider adoption of such systems could help reduce diagnostic errors, minimize treatment delays, and address disparities in access to healthcare services. These factors collectively contribute to both improved patient outcomes and reduced financial strain on healthcare systems.

At the same time, the study emphasizes that current AI systems are not without limitations. Clinical medicine involves more than text-based data. Doctors routinely rely on non-verbal and non-textual cues, such as observing a patient’s physical discomfort, interpreting imaging results, and making judgment calls based on experience. These aspects are not fully captured by existing AI models, which means human expertise remains essential.

The authors further concluded that large language models have now surpassed many traditional benchmarks used to measure clinical reasoning abilities. However, they stress the urgent need for more detailed research, including real-world clinical trials and studies focused on human-AI collaboration, to determine how these systems can be safely and effectively integrated into healthcare settings.

In comments shared with The Guardian, lead researcher Arjun Manrai clarified that the findings should not be interpreted as suggesting that AI will replace doctors. Instead, he described the results as evidence of a major technological shift that is likely to transform the medical field in the coming years.

From a macro industry perspective, this study reflects a developing trend in which AI is increasingly being used to augment clinical decision-making. However, experts continue to caution that challenges such as data bias, accountability, regulatory oversight, and patient trust must be addressed before such systems can be widely deployed. The future of healthcare, therefore, is likely to involve a collaborative model where AI amplifies efficiency and accuracy, while human doctors provide critical judgment, ethical oversight, and patient-centered care.

AI-Powered License Plate Surveillance Sparks Urgent Push for Stronger Privacy Laws

 


The growing use of license plate tracking systems by companies like Flock Safety and Motorola’s VehicleManager has transformed routine drives into continuously recorded digital trails. Originally designed to capture license plate data, these systems have rapidly advanced into highly sophisticated surveillance tools. With the integration of artificial intelligence, cameras can now identify not only vehicles but also faces and other distinguishing features, silently building detailed records of individuals’ movements.

This technological shift raises an important question about the effectiveness of existing privacy protections. Laws governing surveillance vary widely across states, making it difficult to determine which frameworks are truly effective and where gaps remain.

To better understand the landscape, insights were gathered from Chad Marlow, senior policy counsel and lead for surveillance at the American Civil Liberties Union. He emphasized that meaningful privacy protection requires collective effort rather than individual action. "Collective action, rather than individual action, is required," Marlow said. He also warned, "I would caution that while Flock is the most problematic ALPR company in America, there are many other ALPR companies, like Axon and Motorola, that present serious privacy risks, so switching from Flock to Axon/Motorola ALPRs at best may constitute minimal harm reduction, but it is far from a solution."

Current legislation largely focuses on two major tools used by law enforcement: automatic license plate readers (ALPRs), which track vehicles, and drones equipped with AI-enabled cameras. Meanwhile, companies are expanding into traditional surveillance cameras capable of live monitoring and tracking individuals on the ground.

Advanced AI capabilities, such as Flock’s “Freeform” search feature, allow authorities to input open-ended queries and retrieve results from vast camera networks. These developments highlight the need for updated and comprehensive regulations. Several categories of laws are emerging as particularly impactful:

Restrictions on AI Surveillance Capabilities

Some of the most comprehensive laws limit what AI-powered cameras are allowed to detect and analyze. While not always targeting ALPRs directly, they regulate how data can be searched and used. Illinois stands out with its Biometric Information Privacy Act (BIPA), which protects sensitive identifiers like facial data and fingerprints and requires user consent. This law is so strict that certain features, such as facial recognition in consumer devices, are disabled within the state. However, many of these laws still exclude vehicle and license plate data, which often remains unprotected.

Limiting ALPR Use to Specific Investigations

Several states allow ALPR usage only under defined circumstances, such as serious criminal investigations. These restrictions prevent widespread deployment by private entities like homeowners associations or businesses and may also limit camera placement in certain public areas.

Mandatory Data Deletion Policies

One of the most effective privacy safeguards requires that collected data be deleted within a set timeframe unless tied to an active investigation. This prevents long-term tracking and profiling of individuals. As Marlow explained, "The idea of keeping a location dossier on every single person just in case one of us turns out to be a criminal is just about the most un-American approach to privacy I can imagine."

States like New Hampshire enforce extremely short data retention limits, requiring deletion within minutes if the data is not used. Others allow slightly longer windows. "For states that want a little more time to see if captured ALPR data is relevant to an ongoing investigation, keeping the data for a few days is sufficient," Marlow said. "Some states, like Washington and Virginia, recently adopted 21-day limits, which is the very outermost acceptable limit." He further cautioned that prolonged storage makes it easier to build behavioral profiles "that can eviscerate individual privacy."
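A retention rule like the ones described above is straightforward to express in code. The sketch below is a hypothetical illustration modeled on the 21-day statutes; the field names and function signature are assumptions, not any vendor's actual schema:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical ALPR retention check modeled on the 21-day limits adopted by
# states like Washington and Virginia: a capture is purged once it ages past
# the window unless it is tied to an active investigation.
RETENTION = timedelta(days=21)

def should_delete(captured_at, investigation_hold, now=None):
    """Return True if an ALPR record must be purged under the policy."""
    now = now or datetime.now(timezone.utc)
    if investigation_hold:
        return False  # held records persist until the case closes
    return now - captured_at > RETENTION

now = datetime(2025, 6, 30, tzinfo=timezone.utc)
old = datetime(2025, 6, 1, tzinfo=timezone.utc)      # 29 days old
recent = datetime(2025, 6, 20, tzinfo=timezone.utc)  # 10 days old

print(should_delete(old, False, now))     # past the window -> purge
print(should_delete(old, True, now))      # on investigation hold -> keep
print(should_delete(recent, False, now))  # inside the window -> keep
```

The investigation-hold exception is what distinguishes these statutes from blanket deletion: data tied to an active case survives, but the default is erasure rather than indefinite storage.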

Restrictions on Data Sharing Across Jurisdictions

Certain states prohibit sharing surveillance data beyond state borders, including with federal agencies. These measures aim to limit access by organizations such as the Department of Homeland Security or ICE, though enforcing such restrictions remains a challenge. "Ideally, no data should be shared outside the collecting agency without a warrant," Marlow said. "But some states have chosen to prohibit data sharing outside of the state, which is better than nothing, and does limit some risks."

Approval and Oversight Requirements

Another approach involves requiring state-level approval before installing ALPR systems. The rigor of these processes varies significantly. For example, Vermont implemented strict approval mechanisms that ultimately discouraged adoption altogether, with no agencies using ALPR systems by 2025.

Despite these efforts, new privacy laws often face resistance from companies and law enforcement agencies, sometimes leading to legal disputes and slow enforcement. Additionally, legislative proposals frequently evolve during the approval process, making it important for citizens to stay informed and engaged.

Advocacy groups and public participation also play a critical role. Initiatives like The Plate Project encourage individuals to take part in privacy discussions and reforms. Local involvement, such as attending city council meetings, can influence decisions on surveillance technology before implementation.

Ultimately, as surveillance capabilities continue to expand, the effectiveness of privacy protections will depend on both robust legislation and active public oversight.

Are You Letting AI Do Too Much of Your Thinking?

 
As artificial intelligence tools take on a growing share of everyday thinking tasks, researchers are raising concerns that this shift may be quietly affecting how people process information, remember ideas, and engage with their own work.

When Nataliya Kosmyna reviewed applications for internships, she noticed a pattern that stood out. Many cover letters were structured in nearly identical ways, written in polished language, and included vague or forced connections to her research. The consistency suggested that applicants were relying on large language models, the technology behind tools such as ChatGPT, Google Gemini, and Claude.

At the same time, while teaching at the Massachusetts Institute of Technology, Kosmyna began noticing that students were finding it harder to retain what they had learned. Compared to previous years, more students struggled to recall material, which led her to question whether growing dependence on AI tools could be influencing cognitive abilities.

Researchers studying human-computer interaction are increasingly concerned that relying too heavily on AI may alter not just how people write but how they think. This phenomenon, often described as “cognitive offloading,” refers to shifting mental effort onto external tools. While this has existed for years with calculators and search engines, experts warn that AI systems may deepen the effect because they generate complete responses rather than simply helping users find information.

Earlier research on internet usage identified what is known as the “Google effect,” where people became less likely to remember facts because they could easily look them up. Some researchers argued that this allowed the brain to focus on more complex tasks. However, AI tools now go a step further by producing answers, arguments, and even creative content, reducing the need for active thinking.

To better understand the impact, Kosmyna and her team conducted an experiment involving 54 students. Participants were divided into three groups. One group used AI tools to write essays, another relied on search engines without AI-generated summaries, and a third completed the task without any digital assistance. Their brain activity was monitored while they worked on open-ended topics such as happiness, loyalty, and everyday decisions.

The differences were clear. Students who worked without any tools showed strong and widespread brain activity across multiple regions. Those using search engines still demonstrated notable engagement, particularly in areas related to visual processing. In contrast, the group using AI tools showed comparatively lower brain activity, with levels dropping by as much as 55%. Activity in areas linked to creativity and deeper thinking was especially reduced.

The impact extended beyond brain activity. Students who used AI struggled to recall what they had written shortly after completing their essays. Several participants also reported feeling disconnected from their work, as if they had not fully contributed to it. Similar findings from other studies suggest that frequent use of AI tools can weaken memory retention and recall.

Research from the University of Pennsylvania introduces another concern described as “cognitive surrender,” where users accept AI-generated responses without questioning them. In such cases, individuals may rely on the system’s output even when it conflicts with their own understanding.

The effects are not limited to academic settings. A multinational study found that medical professionals who relied on AI tools for detecting colon cancer became less accurate when asked to identify cases without assistance after several months of use. This suggests that repeated dependence on AI may reduce independent decision-making skills, even in critical fields.

Kosmyna also observed that essays written with AI tended to be highly similar, lacking variation in style and depth. Teachers reviewing the work described it as uniform and lacking originality. In some cases, the responses were so alike that it appeared as though students had collaborated, even when they had not.

Follow-up observations months later revealed further differences. Students who had previously relied on AI showed weaker neural connectivity when asked to complete tasks without it, compared to those who had worked independently earlier. This may indicate that they had engaged less deeply with the material from the start.

Vivienne Ming, author of Robot Proof, has raised similar concerns. In her research, students asked to make real-world predictions often defaulted to copying answers from AI systems instead of forming their own conclusions. Brain measurements showed low levels of gamma wave activity, which is associated with active thinking. Reduced gamma activity has been linked in other studies to cognitive decline over time.

However, not all users showed the same pattern. A small group, fewer than 10%, used AI differently by treating it as a source of information rather than a final answer. These individuals analysed the output themselves, showed stronger brain engagement, and produced more accurate results.

The concerns echo earlier findings related to navigation technology. Increased reliance on GPS has been associated with reduced spatial memory in some studies. Weak spatial navigation skills have also been explored as a possible early indicator of conditions such as Alzheimer's disease. These parallels suggest that reduced mental effort over time may have broader cognitive consequences.

Researchers emphasize that AI itself is not the problem but how it is used. Ming advocates for a more deliberate approach, where individuals think through problems first and then use AI to test or refine their ideas. She suggests methods such as asking AI to challenge one’s reasoning or limiting it to providing context instead of direct answers, encouraging deeper engagement.

Kosmyna similarly recommends building a strong understanding of subjects without AI assistance before integrating such tools into the learning process.

The takeaway from current research is sobering. While AI offers efficiency and convenience, it may also encourage mental shortcuts. Human cognition depends on regular effort and engagement, and reducing that effort could carry long-term consequences. As these tools become more integrated into daily life, the challenge will be to use them in ways that support thinking rather than replace it.



eSIM vs Physical SIM: Why the Digital Shift Still Falls Short of Its Promise

 

SIM cards were once essential in an era when multiple users often shared a single mobile device, and the cards themselves were much larger. Today, SIMs have shrunk dramatically and, in many ways, are no longer necessary due to the emergence of eSIM technology.

eSIMs—embedded, virtual SIM cards—were designed to simplify mobile connectivity, making it as effortless as connecting to Wi-Fi while eliminating the inconveniences tied to physical SIM cards. However, while the technology set out to resolve longstanding issues, it has introduced a new set of challenges in the process.

One of the biggest advantages promised by eSIMs was seamless network switching without the need for physical cards. Traditionally, travelers had to purchase local SIM cards abroad to avoid high roaming charges, often swapping out their primary SIM and storing it safely. eSIMs improve this experience significantly, allowing users to activate international plans digitally through services like Saily.

Despite this convenience, the level of ease still largely depends on individual carriers. In many cases, users must navigate complex activation steps, and switching between networks isn’t always as smooth as expected. Additionally, carrier-locked devices can prevent eSIMs from functioning across different networks, limiting flexibility.

Transferring a mobile number between devices has also become more complicated with eSIMs. With a physical SIM, users can simply move the card from one phone to another. In contrast, eSIM transfers often involve multiple steps and may even require contacting customer support—especially if the original device is lost or damaged.

While these additional steps are partly necessary to prevent fraud, such as unauthorized SIM swaps, they can still be inconvenient. For users who frequently change devices, this added friction can be a significant drawback.

Although the concept behind eSIM technology is strong, the current ecosystem is not fully prepared to deliver a seamless digital experience. A standardized process for activation and transfer across devices and carriers is still lacking. Ideally, users should be able to move eSIM profiles easily between devices, regardless of brand—whether switching from iPhone to Android or vice versa.

Moreover, many restrictions carried over from physical SIM systems still exist in the eSIM space, and these need to be reconsidered. Switching between multiple eSIMs on the same device also requires further refinement.

Ultimately, the goal should be to move beyond simply digitizing the traditional SIM model. A more advanced system could allow users to connect to available carriers instantly using login credentials, similar to accessing public Wi-Fi. With emerging technologies like passkeys, the reliance on SIM cards and phone numbers for authentication may soon become outdated, paving the way for a more streamlined and user-friendly mobile connectivity experience.

Bank of America Bets Big on Risky Anthropic AI

 

Bank of America is aggressively expanding its use of Anthropic's advanced AI technology, even as U.S. regulators issue stark cybersecurity warnings. The bank's commitment highlights a broader trend where nearly 70% of financial institutions integrate AI into operations, prioritizing innovation over potential risks. This move comes amid global concerns about Anthropic's Claude Mythos Preview model, which has detected thousands of high-severity vulnerabilities in major operating systems and browsers. 

In early April 2026, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urgently met with CEOs from top U.S. banks, including Bank of America, to flag risks from Mythos. Officials warned that deploying the model could expose customer personal data to cyber threats, prompting Anthropic to limit access to a select group of tech and banking experts. World leaders echoed these fears: Bank of England Governor Andrew Bailey called AI a "very serious challenge," while ECB President Christine Lagarde supported restrictions on the technology. 

Anthropic itself has cautioned about the dangers, stating that rapid AI progress could spread powerful vulnerability-detection capabilities to unsafe actors, with severe fallout for economies and national security. Despite this, banks like JPMorgan, Goldman Sachs, Citigroup, and Bank of America are testing Mythos to bolster their own defenses. Canadian regulators and European counterparts have also raised alarms, underscoring the technology's global implications. 

Bank of America leads in AI adoption, with over 90% of its 200,000+ employees using the tools daily and a client-facing AI assistant logging three billion interactions in 2025 alone. Backed by a $13.5 billion tech budget—including $4 billion for AI initiatives—the bank focuses on end-to-end process transformation to boost revenue, client experience, and efficiency. Recent rollouts include an AI tool for financial advisors to identify prospects and summarize meetings. 

Bank of America's CTO Hari Gopalkrishnan emphasized balancing scale with governance at the Semafor World Economy 2026 summit, noting, "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk." The strategy shifts from small proofs-of-concept to large-scale applications, aiming for measurable ROI while navigating regulatory scrutiny. As AI reshapes banking, Bank of America's bold push tests the fine line between opportunity and peril.

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform

 


Salesforce has introduced what it describes as the most crucial architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities. These tools immediately enable AI systems to interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.
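The "design once, deploy everywhere" idea behind the experience layer resembles a classic adapter pattern: one channel-independent behavior wrapped by per-surface renderers. The sketch below is a generic illustration of that pattern; the channel names mirror those in the announcement, but the adapter interfaces and payload shapes are my assumptions, not Salesforce's actual API:

```python
# Sketch of an "experience layer": one agent behavior, many surface adapters.
# Payload shapes here are hypothetical illustrations, not real platform APIs.

def agent_response(message):
    """The channel-independent behavior, defined once."""
    return f"Echo: {message}"

def render_slack(text):
    return {"channel": "slack", "blocks": [{"type": "section", "text": text}]}

def render_teams(text):
    return {"channel": "teams", "body": {"contentType": "text", "content": text}}

ADAPTERS = {"slack": render_slack, "teams": render_teams}

def deploy(message, surface):
    """Run the same behavior, rendered for the requested surface."""
    return ADAPTERS[surface](agent_response(message))

print(deploy("hello", "slack")["blocks"][0]["text"])  # Echo: hello
print(deploy("hello", "teams")["body"]["content"])    # Echo: hello
```

The design payoff is the one described in the article: adding a new surface means writing one adapter, not rewriting the agent's behavior for each platform.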

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
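Salesforce has not published Agent Script's syntax here, but the state-machine idea it describes can be sketched generically: each step is labeled either deterministic (must run exactly as coded) or flexible (an AI model may reason about how to perform it). Everything below, including the step names, is a hypothetical illustration, not the actual Agent Script language:

```python
# Generic sketch of the deterministic/flexible split that Agent Script is
# described as enforcing. All names and steps are hypothetical examples.

def verify_identity(ctx):
    ctx["verified"] = ctx.get("customer_id") is not None
    return ctx

def draft_reply(ctx):
    # Placeholder for a model call; a real agent would invoke an LLM here.
    ctx["reply"] = f"Drafted response for {ctx['customer_id']}"
    return ctx

def log_outcome(ctx):
    ctx["logged"] = True
    return ctx

# The state machine: fixed steps are locked down, flagged steps may delegate
# to AI-driven reasoning, but the overall sequence is always the same.
PIPELINE = [
    ("verify_identity", verify_identity, "deterministic"),
    ("draft_reply",     draft_reply,     "flexible"),
    ("log_outcome",     log_outcome,     "deterministic"),
]

def run(ctx):
    for name, step, mode in PIPELINE:
        ctx = step(ctx)
        ctx.setdefault("trace", []).append((name, mode))
    return ctx

result = run({"customer_id": "ACME-42"})
print(result["reply"])  # Drafted response for ACME-42
```

Pinning the sequence while leaving individual steps flexible is what addresses the instability problem described above: a model can vary how it drafts a reply, but it can never skip identity verification or logging.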

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example shared during the announcement demonstrated how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

Nvidia’s AI Launch Sparks Quantum Stock Surge, Minting Xanadu’s CEO a Billionaire

 

Quantum computing stocks jumped after Nvidia unveiled Ising, its open-source family of AI models, a move that investors interpreted as a strong validation of the sector. The result was a sharp rally in several names, with Xanadu standing out as the biggest winner and its founder Christian Weedbrook briefly joining the billionaire ranks.

Notably, Nvidia's announcement did not introduce a new quantum computer. Instead, it introduced software tools aimed at two of quantum computing's hardest problems: calibration and error correction. Nvidia said Ising can make decoding up to 2.5 times faster and three times more accurate than PyMatching, a widely used error-correction decoder, which helped convince traders that the path to practical quantum systems may be improving faster than expected.
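For readers unfamiliar with "decoding": an error-correction decoder reads parity-check measurements (the syndrome) and infers which bits were corrupted. The toy below does this for a classical 3-bit repetition code in plain Python; it is a teaching sketch only, not Nvidia's Ising models or PyMatching, which solve far larger matching problems on quantum codes:

```python
# Toy illustration of syndrome decoding, the task Nvidia's tooling targets.
# This is a classical 3-bit repetition code, not a real quantum decoder.

def syndrome(bits):
    """Parity checks: compare neighbouring bits; a 1 flags a disagreement."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(bits):
    """Flip the single bit implicated by the syndrome pattern."""
    s = syndrome(bits)
    corrected = list(bits)
    if s == (1, 0):
        corrected[0] ^= 1   # first bit disagrees with the other two
    elif s == (1, 1):
        corrected[1] ^= 1   # middle bit disagrees with both neighbours
    elif s == (0, 1):
        corrected[2] ^= 1   # last bit disagrees with the other two
    return tuple(corrected)

print(decode((0, 1, 0)))  # (0, 0, 0): the flipped middle bit is repaired
```

Real quantum decoders face the same inference problem at enormous scale and under tight latency budgets, which is why speeding decoding up with AI models is treated as meaningful progress.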

That enthusiasm quickly turned into extreme stock moves. Xanadu’s shares climbed from under $8 to roughly $40 in six trading sessions, while the Toronto exchange paused trading several times because of the speed of the move. Similar gains appeared across the sector, including D-Wave, IonQ, Rigetti, Infleqtion, and Quantum Computing, showing that the market was bidding up the whole group rather than just one company. 

For Xanadu, the rally created an extraordinary paper windfall. Weedbrook owns 15.6% of the company through multiple voting shares, and his stake was valued at about $1.5 billion to $1.6 billion during the surge. The story is notable because the company’s valuation moved dramatically on sentiment tied to Nvidia’s broader endorsement of quantum-related tooling, not on a fresh commercial breakthrough from Xanadu itself. 
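A quick back-of-envelope check of those figures: the 15.6% stake and the $1.5-1.6 billion valuation come from the reporting above, while the implied company valuation is derived here, not a reported number:

```python
# Back-of-envelope check: if a 15.6% stake is worth roughly $1.5B-$1.6B,
# the implied whole-company valuation is stake value / ownership fraction.
stake_low, stake_high = 1.5e9, 1.6e9
fraction = 0.156

implied_low = stake_low / fraction    # roughly $9.6 billion
implied_high = stake_high / fraction  # roughly $10.3 billion
print(round(implied_low / 1e9, 1), round(implied_high / 1e9, 1))
```

That implied valuation of around $10 billion, reached in six trading sessions, underlines why the rally is described as sentiment-driven rather than fundamentals-driven.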

The broader caveat is that quantum computing remains a high-expectation, low-certainty industry. Nvidia's move suggests that investors increasingly view AI and quantum as complementary technologies, especially if software can help make fragile quantum hardware more usable. But the volatility also highlights the risk: when a sector is still early and speculative, a single announcement can create massive gains, even before the business fundamentals fully catch up.

Can AI Own Its Work? A Debate That Started With a Monkey Photo

 



A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.

The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.

To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.

While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?

The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.

This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.

The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.

Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.

This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, the Creativity Machine. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.

Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.

When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.

This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.

At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.

Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.

Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.

Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.

One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.

This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.

For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.

What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.



AI Was Meant to Help. So Why Is It Making Work Harder for Women in Indonesia?

 



Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.

Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.

In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.

For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.

Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.

This reflects a broader economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.

Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.

Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.

Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.

Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.

This points to a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than managed by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making processes.

In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.

These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. This amounts to a form of everyday resistance, where collective action becomes a strategy for navigating structural constraints.

Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.