AI Models Surpass Doctors in Emergency Diagnosis, Harvard Study Finds




A new study by researchers at Harvard University has found that advanced artificial intelligence systems can now exceed human doctors at both diagnosing medical conditions and determining treatment strategies, including in fast-paced, high-stakes emergency room environments. The research highlights the capacity of modern AI systems to handle complex clinical reasoning tasks that were traditionally considered the exclusive domain of trained physicians.

The findings, published in the peer-reviewed journal Science, are based on a controlled comparison between OpenAI o1 and experienced attending physicians. To ensure realistic testing conditions, the study used 76 actual emergency department cases sourced from Beth Israel Deaconess Medical Center. These cases were evaluated across multiple stages of the diagnostic process, allowing researchers to assess performance under varying levels of available patient information.

At the earliest stage of patient assessment, commonly referred to as initial triage, where clinicians typically have only limited details about a patient’s condition, the AI model demonstrated a notable advantage. It was able to correctly identify either the exact diagnosis or a closely related condition in 67.1 percent of the cases. In comparison, the two physicians involved in the study achieved accuracy rates of 55.3 percent and 50 percent respectively. This suggests that even with minimal data, the AI system was more effective at narrowing down potential diagnoses.

As the diagnostic process progressed and additional clinical information became available during the emergency room evaluation phase, the model’s performance improved further. Its diagnostic accuracy increased to 72.4 percent, reflecting its ability to refine its conclusions with more context. The physicians also showed improvement at this stage, but their accuracy remained lower, at 61.8 percent and 52.6 percent. This stage is particularly important as it mirrors real-world conditions where doctors continuously update their assessments based on new findings.

In the final phase of care, when patients were admitted either to general hospital wards or intensive care units, the AI model continued to outperform its human counterparts. It achieved an accuracy rate of 81.6 percent, compared to 78.9 percent and 69.7 percent for the physicians. Although the performance gap narrowed slightly at this stage, the AI still maintained a measurable edge, indicating consistency across the full diagnostic timeline.

Beyond identifying illnesses, the study also evaluated how effectively the AI system could design clinical management plans. This included decisions such as selecting appropriate medications, including antibiotics, as well as handling complex and sensitive scenarios like end-of-life care planning. Across five evaluated case studies, the AI achieved a median performance score of 89 percent. In contrast, physicians scored significantly lower, at 34 percent when relying on traditional clinical resources and 41 percent when supported by GPT-4. This underlines a substantial gap in structured decision-making support.

The researchers acknowledged that while integrating AI into clinical workflows is often viewed as a high-risk approach due to patient safety concerns, its potential benefits are significant. They noted that wider adoption of such systems could help reduce diagnostic errors, minimize treatment delays, and address disparities in access to healthcare services. These factors collectively contribute to both improved patient outcomes and reduced financial strain on healthcare systems.

At the same time, the study emphasizes that current AI systems are not without limitations. Clinical medicine involves more than text-based data. Doctors routinely rely on non-verbal and non-textual cues, such as observing a patient’s physical discomfort, interpreting imaging results, and making judgment calls based on experience. These aspects are not fully captured by existing AI models, which means human expertise remains essential.

The authors further concluded that large language models have now surpassed many traditional benchmarks used to measure clinical reasoning abilities. However, they stress the urgent need for more detailed research, including real-world clinical trials and studies focused on human-AI collaboration, to determine how these systems can be safely and effectively integrated into healthcare settings.

In comments shared with The Guardian, lead researcher Arjun Manrai clarified that the findings should not be interpreted as suggesting that AI will replace doctors. Instead, he described the results as evidence of a major technological shift that is likely to transform the medical field in the coming years.

From a macro industry perspective, this study reflects a developing trend in which AI is increasingly being used to augment clinical decision-making. However, experts continue to caution that challenges such as data bias, accountability, regulatory oversight, and patient trust must be addressed before such systems can be widely deployed. The future of healthcare, therefore, is likely to involve a collaborative model where AI amplifies efficiency and accuracy, while human doctors provide critical judgment, ethical oversight, and patient-centered care.

AI-Powered License Plate Surveillance Sparks Urgent Push for Stronger Privacy Laws


The growing use of license plate tracking systems by companies like Flock Safety and Motorola’s VehicleManager has transformed routine drives into continuously recorded digital trails. Originally designed to capture license plate data, these systems have rapidly advanced into highly sophisticated surveillance tools. With the integration of artificial intelligence, cameras can now identify not only vehicles but also faces and other distinguishing features, silently building detailed records of individuals’ movements.

This technological shift raises an important question about the effectiveness of existing privacy protections. Laws governing surveillance vary widely across states, making it difficult to determine which frameworks are truly effective and where gaps remain.

To better understand the landscape, insights were gathered from Chad Marlow, senior policy counsel and lead for surveillance at the American Civil Liberties Union. He emphasized that meaningful privacy protection requires collective effort. "Collective action, rather than individual action, is required," Marlow said. He also warned, "I would caution that while Flock is the most problematic ALPR company in America, there are many other ALPR companies, like Axon and Motorola, that present serious privacy risks, so switching from Flock to Axon/Motorola ALPRs at best may constitute minimal harm reduction, but it is far from a solution."

Current legislation largely focuses on two major tools used by law enforcement: automatic license plate readers (ALPRs), which track vehicles, and drones equipped with AI-enabled cameras. Meanwhile, companies are expanding into traditional surveillance cameras capable of live monitoring and tracking individuals on the ground.

Advanced AI capabilities, such as Flock’s “Freeform” search feature, allow authorities to input open-ended queries and retrieve results from vast camera networks. These developments highlight the need for updated and comprehensive regulations. Several categories of laws are emerging as particularly impactful:

Restrictions on AI Surveillance Capabilities

Some of the most comprehensive laws limit what AI-powered cameras are allowed to detect and analyze. While not always targeting ALPRs directly, they regulate how data can be searched and used. Illinois stands out with its Biometric Information Privacy Act (BIPA), which protects sensitive identifiers like facial data and fingerprints and requires user consent. This law is so strict that certain features, such as facial recognition in consumer devices, are disabled within the state. However, many of these laws still exclude vehicle and license plate data, which often remains unprotected.

Limiting ALPR Use to Specific Investigations

Several states allow ALPR usage only under defined circumstances, such as serious criminal investigations. These restrictions prevent widespread deployment by private entities like homeowners associations or businesses and may also limit camera placement in certain public areas.

Mandatory Data Deletion Policies

One of the most effective privacy safeguards requires that collected data be deleted within a set timeframe unless tied to an active investigation. This prevents long-term tracking and profiling of individuals. As Marlow explained, "The idea of keeping a location dossier on every single person just in case one of us turns out to be a criminal is just about the most un-American approach to privacy I can imagine."

States like New Hampshire enforce extremely short data retention limits, requiring deletion within minutes if the data is not used. Others allow slightly longer windows. "For states that want a little more time to see if captured ALPR data is relevant to an ongoing investigation, keeping the data for a few days is sufficient," Marlow told me. "Some states, like Washington and Virginia, recently adopted 21-day limits, which is the very outermost acceptable limit." He further cautioned that prolonged storage makes it easier to build behavioral profiles "that can eviscerate individual privacy."
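
To make the retention requirement concrete, here is a minimal sketch of the kind of scheduled purge such laws imply, assuming a hypothetical SQLite table of plate reads; the table name, columns, and the 21-day window are illustrative, not any vendor's actual schema.

```python
# Illustrative retention purge, assuming a hypothetical SQLite table
# plate_reads(captured_at TEXT, investigation_id TEXT).
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=21)  # the "outermost acceptable limit" above

def purge_stale_reads(db: sqlite3.Connection) -> int:
    cutoff = (datetime.now(timezone.utc) - RETENTION).isoformat()
    cur = db.execute(
        "DELETE FROM plate_reads "
        "WHERE captured_at < ? AND investigation_id IS NULL",
        (cutoff,),
    )
    db.commit()
    return cur.rowcount  # rows deleted on this run
```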

Restrictions on Data Sharing Across Jurisdictions

Certain states prohibit sharing surveillance data beyond state borders, including with federal agencies. These measures aim to limit access by organizations such as the Department of Homeland Security or ICE, though enforcing such restrictions remains a challenge. "Ideally, no data should be shared outside the collecting agency without a warrant," Marlow said. "But some states have chosen to prohibit data sharing outside of the state, which is better than nothing, and does limit some risks."

Approval and Oversight Requirements

Another approach involves requiring state-level approval before installing ALPR systems. The rigor of these processes varies significantly. For example, Vermont implemented strict approval mechanisms that ultimately discouraged adoption altogether, with no agencies using ALPR systems by 2025.

Despite these efforts, new privacy laws often face resistance from companies and law enforcement agencies, sometimes leading to legal disputes and slow enforcement. Additionally, legislative proposals frequently evolve during the approval process, making it important for citizens to stay informed and engaged.

Advocacy groups and public participation also play a critical role. Initiatives like The Plate Project encourage individuals to take part in privacy discussions and reforms. Local involvement, such as attending city council meetings, can influence decisions on surveillance technology before implementation.

Ultimately, as surveillance capabilities continue to expand, the effectiveness of privacy protections will depend on both robust legislation and active public oversight.

Are You Letting AI Do Too Much of Your Thinking?




As artificial intelligence tools take on a growing share of everyday thinking tasks, researchers are raising concerns that this shift may be quietly affecting how people process information, remember ideas, and engage with their own work.

When Nataliya Kosmyna reviewed applications for internships, she noticed a pattern that stood out. Many cover letters were structured in nearly identical ways, written in polished language, and included vague or forced connections to her research. The consistency suggested that applicants were relying on large language models, the technology behind tools such as ChatGPT, Google Gemini, and Claude.

At the same time, while teaching at the Massachusetts Institute of Technology, Kosmyna began noticing that students were finding it harder to retain what they had learned. Compared to previous years, more students struggled to recall material, which led her to question whether growing dependence on AI tools could be influencing cognitive abilities.

Researchers studying human-computer interaction are increasingly concerned that relying too heavily on AI may alter not just how people write but how they think. This phenomenon, often described as “cognitive offloading,” refers to shifting mental effort onto external tools. While this has existed for years with calculators and search engines, experts warn that AI systems may deepen the effect because they generate complete responses rather than simply helping users find information.

Earlier research on internet usage identified what is known as the “Google effect,” where people became less likely to remember facts because they could easily look them up. Some researchers argued that this allowed the brain to focus on more complex tasks. However, AI tools now go a step further by producing answers, arguments, and even creative content, reducing the need for active thinking.

To better understand the impact, Kosmyna and her team conducted an experiment involving 54 students. Participants were divided into three groups. One group used AI tools to write essays, another relied on search engines without AI-generated summaries, and a third completed the task without any digital assistance. Their brain activity was monitored while they worked on open-ended topics such as happiness, loyalty, and everyday decisions.

The differences were clear. Students who worked without any tools showed strong and widespread brain activity across multiple regions. Those using search engines still demonstrated notable engagement, particularly in areas related to visual processing. In contrast, the group using AI tools showed comparatively lower brain activity, with levels dropping by as much as 55%. Activity in areas linked to creativity and deeper thinking was especially reduced.

The impact extended beyond brain activity. Students who used AI struggled to recall what they had written shortly after completing their essays. Several participants also reported feeling disconnected from their work, as if they had not fully contributed to it. Similar findings from other studies suggest that frequent use of AI tools can weaken memory retention and recall.

Research from the University of Pennsylvania introduces another concern described as “cognitive surrender,” where users accept AI-generated responses without questioning them. In such cases, individuals may rely on the system’s output even when it conflicts with their own understanding.

The effects are not limited to academic settings. A multinational study found that medical professionals who relied on AI tools for detecting colon cancer became less accurate when asked to identify cases without assistance after several months of use. This suggests that repeated dependence on AI may reduce independent decision-making skills, even in critical fields.

Kosmyna also observed that essays written with AI tended to be highly similar, lacking variation in style and depth. Teachers reviewing the work described it as uniform and lacking originality. In some cases, the responses were so alike that it appeared as though students had collaborated, even when they had not.

Follow-up observations months later revealed further differences. Students who had previously relied on AI showed weaker neural connectivity when asked to complete tasks without it, compared to those who had worked independently earlier. This may indicate that they had engaged less deeply with the material from the start.

Vivienne Ming, author of Robot Proof, has raised similar concerns. In her research, students asked to make real-world predictions often defaulted to copying answers from AI systems instead of forming their own conclusions. Brain measurements showed low levels of gamma wave activity, which is associated with active thinking. Reduced gamma activity has been linked in other studies to cognitive decline over time.

However, not all users showed the same pattern. A small group, fewer than 10%, used AI differently by treating it as a source of information rather than a final answer. These individuals analysed the output themselves, showed stronger brain engagement, and produced more accurate results.

The concerns echo earlier findings related to navigation technology. Increased reliance on GPS has been associated with reduced spatial memory in some studies. Weak spatial navigation skills have also been explored as a possible early indicator of conditions such as Alzheimer's disease. These parallels suggest that reduced mental effort over time may have broader cognitive consequences.

Researchers emphasize that AI itself is not the problem but how it is used. Ming advocates for a more deliberate approach, where individuals think through problems first and then use AI to test or refine their ideas. She suggests methods such as asking AI to challenge one’s reasoning or limiting it to providing context instead of direct answers, encouraging deeper engagement.

Kosmyna similarly recommends building a strong understanding of subjects without AI assistance before integrating such tools into the learning process.

The alarming takeaway from the current research is clear. While AI offers efficiency and convenience, it may also encourage mental shortcuts. Human cognition depends on regular effort and engagement, and reducing that effort could carry long-term consequences. As these tools become more integrated into daily life, the challenge will be to use them in ways that support thinking rather than replace it.



eSIM vs Physical SIM: Why the Digital Shift Still Falls Short of Its Promise

SIM cards were once essential in an era when multiple users often shared a single mobile device, and the cards themselves were much larger. Today, SIMs have shrunk dramatically and, in many ways, are no longer necessary due to the emergence of eSIM technology.

eSIMs—embedded, virtual SIM cards—were designed to simplify mobile connectivity, making it as effortless as connecting to Wi-Fi while eliminating the inconveniences tied to physical SIM cards. However, while the technology set out to resolve longstanding issues, it has introduced a new set of challenges in the process.

One of the biggest advantages promised by eSIMs was seamless network switching without the need for physical cards. Traditionally, travelers had to purchase local SIM cards abroad to avoid high roaming charges, often swapping out their primary SIM and storing it safely. eSIMs improve this experience significantly, allowing users to activate international plans digitally through services like Saily.

Despite this convenience, the level of ease still largely depends on individual carriers. In many cases, users must navigate complex activation steps, and switching between networks isn’t always as smooth as expected. Additionally, carrier-locked devices can prevent eSIMs from functioning across different networks, limiting flexibility.

Transferring a mobile number between devices has also become more complicated with eSIMs. With a physical SIM, users can simply move the card from one phone to another. In contrast, eSIM transfers often involve multiple steps and may even require contacting customer support—especially if the original device is lost or damaged.

While these additional steps are partly necessary to prevent fraud, such as unauthorized SIM swaps, they can still be inconvenient. For users who frequently change devices, this added friction can be a significant drawback.

Although the concept behind eSIM technology is strong, the current ecosystem is not fully prepared to deliver a seamless digital experience. A standardized process for activation and transfer across devices and carriers is still lacking. Ideally, users should be able to move eSIM profiles easily between devices, regardless of brand—whether switching from iPhone to Android or vice versa.

Moreover, many restrictions carried over from physical SIM systems still exist in the eSIM space, and these need to be reconsidered. Switching between multiple eSIMs on the same device also requires further refinement.

Ultimately, the goal should be to move beyond simply digitizing the traditional SIM model. A more advanced system could allow users to connect to available carriers instantly using login credentials, similar to accessing public Wi-Fi. With emerging technologies like passkeys, the reliance on SIM cards and phone numbers for authentication may soon become outdated, paving the way for a more streamlined and user-friendly mobile connectivity experience.

Bank of America Bets Big on Risky Anthropic AI

Bank of America is aggressively expanding its use of Anthropic's advanced AI technology, even as U.S. regulators issue stark cybersecurity warnings. The bank's commitment highlights a broader trend where nearly 70% of financial institutions integrate AI into operations, prioritizing innovation over potential risks. This move comes amid global concerns about Anthropic's Claude Mythos Preview model, which has detected thousands of high-severity vulnerabilities in major operating systems and browsers. 

In early April 2026, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell urgently met with CEOs from top U.S. banks, including Bank of America, to flag risks from Mythos. Officials warned that deploying the model could expose customer personal data to cyber threats, prompting Anthropic to limit access to a select group of tech and banking experts. World leaders echoed these fears: Bank of England Governor Andrew Bailey called AI a "very serious challenge," while ECB President Christine Lagarde supported restrictions on the technology. 

Anthropic itself has cautioned about the dangers, stating that rapid AI progress could spread powerful vulnerability-detection capabilities to unsafe actors, with severe fallout for economies and national security. Despite this, banks like JPMorgan, Goldman Sachs, Citigroup, and Bank of America are testing Mythos to bolster their own defenses. Canadian regulators and European counterparts have also raised alarms, underscoring the technology's global implications. 

Bank of America leads in AI adoption, with over 90% of its 200,000+ employees using the tools daily and a client-facing AI assistant logging three billion interactions in 2025 alone. Backed by a $13.5 billion tech budget—including $4 billion for AI initiatives—the bank focuses on end-to-end process transformation to boost revenue, client experience, and efficiency. Recent rollouts include an AI tool for financial advisors to identify prospects and summarize meetings. 

Bank of America's CTO Hari Gopalkrishnan emphasized balancing scale with governance at the Semafor World Economy 2026 summit, noting, "If you overdo it, you stall innovation. If you underdo it, you introduce a lot of risk." The strategy shifts from small proofs-of-concept to large-scale applications, aiming for measurable ROI while navigating regulatory scrutiny. As AI reshapes banking, Bank of America's bold push tests the fine line between opportunity and peril.

Salesforce’s New “Headless 360” Lets AI Agents Run Its Platform


Salesforce has introduced what it describes as the most significant architectural overhaul in its 27-year history, launching a new initiative called “Headless 360.” The update is designed to allow artificial intelligence agents to control and operate the company’s entire platform without requiring a traditional graphical interface such as a dashboard or browser.

The announcement was made during the company’s annual TDX developer conference in San Francisco, where Salesforce revealed that it is releasing more than 100 new developer tools and capabilities, available immediately, that let AI systems interact directly with Salesforce environments. The move reflects a deeper shift in enterprise software, where the rise of intelligent agents capable of reasoning and executing tasks is forcing companies to rethink whether conventional user interfaces are still necessary.

Salesforce’s answer to that question is direct: instead of designing software primarily for human interaction, the platform is now being rebuilt so that machines can access and operate it programmatically. According to the company, this transformation began over two years ago with a strategic decision to expose all internal capabilities rather than keeping them hidden behind user interfaces.

This shift is taking place during a period of uncertainty in the broader software industry. Concerns that advanced AI models developed by companies like OpenAI and Anthropic could disrupt traditional software business models have already impacted market performance. Industry indicators, including software-focused exchange-traded funds, have declined substantially, reflecting investor anxiety about the long-term relevance of existing SaaS platforms.

Senior leadership at Salesforce has indicated that the new architecture is based on practical challenges observed while deploying AI systems across enterprise clients. According to internal insights, building an AI agent is only the initial step. Organizations also face ongoing challenges related to development workflows, system reliability, updates, and long-term maintenance.

To address these challenges, Headless 360 is structured around three foundational pillars.

The first pillar focuses on development flexibility. Salesforce has introduced more than 60 tools based on Model Context Protocol, along with over 30 pre-configured coding capabilities. These allow external AI coding agents, including systems such as Claude Code, Cursor, Codex, and Windsurf, to gain direct, real-time access to a company’s Salesforce environment. This includes data, workflows, and underlying business logic. Developers are no longer required to use Salesforce’s own integrated development environment and can instead operate from any terminal or external setup.
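
As context for what exposing a capability over Model Context Protocol looks like, here is a minimal tool server written with the open-source MCP Python SDK; the server name and the stubbed CRM lookup are invented for illustration and are not Salesforce's actual tools.

```python
# Minimal MCP tool server using the reference Python SDK
# (pip install "mcp[cli]"). The tool is a stub, not a Salesforce API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")

@mcp.tool()
def lookup_account(name: str) -> str:
    """Return a stubbed CRM record for the given account name."""
    return f"Account '{name}': status=active, owner=demo@example.com"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so a coding agent can call it
```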

In addition, Salesforce has upgraded its native development environment, Agentforce Vibes 2.0, by introducing an “open agent harness.” This system supports multiple agent frameworks, including those from OpenAI and Anthropic, and dynamically adjusts capabilities depending on which AI model is being used. The platform also supports multiple models simultaneously, including advanced systems like Claude Sonnet and GPT-5, while maintaining full awareness of the organization’s data from the start.

A notable technical enhancement is the introduction of native React support. During demonstrations, developers created a fully functional application using React instead of Salesforce’s traditional Lightning framework. The application connected to Salesforce data through GraphQL while still inheriting built-in security controls. This significantly expands front-end flexibility for developers.
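
For a sense of what fetching Salesforce data over GraphQL involves, here is a rough sketch of a direct query from Python; the instance URL and token are placeholders, and the demo application described above would issue an equivalent query from React.

```python
# Sketch: fetch Account records from Salesforce's GraphQL endpoint.
# INSTANCE and TOKEN are placeholders for a real org and OAuth token.
import requests

INSTANCE = "https://example.my.salesforce.com"
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"

QUERY = """
query recentAccounts {
  uiapi {
    query {
      Account(first: 5) {
        edges { node { Id Name { value } } }
      }
    }
  }
}
"""

resp = requests.post(
    f"{INSTANCE}/services/data/v60.0/graphql",
    json={"query": QUERY},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # {"data": {"uiapi": {"query": {"Account": ...}}}}
```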

The second pillar focuses on deployment. Salesforce has introduced an “experience layer” that separates how an AI agent functions from how it is presented to users. This allows developers to design an experience once and deploy it across multiple platforms, including Slack, mobile applications, Microsoft Teams, ChatGPT, Claude, Gemini, and other compatible environments. Importantly, this can be done without rewriting code for each platform. The approach represents a change from requiring users to enter Salesforce interfaces to delivering Salesforce-powered experiences directly within existing workflows.
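
A toy sketch of the "experience layer" idea, assuming hypothetical adapter functions: the agent logic is written once, and thin per-channel adapters shape the same output for each surface.

```python
# Toy "experience layer": one agent implementation, several channel
# adapters. Class and function names are hypothetical, not Salesforce's.
class OrderAgent:
    def reply(self, message: str) -> str:
        return f"Status for {message}: shipped"

def to_slack(text: str) -> dict:
    # Slack Block Kit-style payload
    return {"blocks": [{"type": "section",
                        "text": {"type": "mrkdwn", "text": text}}]}

def to_teams(text: str) -> dict:
    # Teams-style message payload
    return {"type": "message", "text": text}

agent = OrderAgent()
answer = agent.reply("order 1042")
print(to_slack(answer))  # same logic, Slack-shaped output
print(to_teams(answer))  # same logic, Teams-shaped output
```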

The third pillar addresses trust, control, and scalability. Salesforce has introduced a comprehensive set of tools that manage the entire lifecycle of AI agents. These include systems for testing, evaluation, monitoring, and experimentation. A central component is “Agent Script,” a new programming language designed to combine structured, rule-based logic with the flexible reasoning capabilities of AI models. It allows organizations to define which parts of a process must follow strict rules and which parts can rely on AI-driven decision-making.

Additional tools include a Testing Center that identifies logical errors and policy violations before deployment, custom evaluation systems that define performance standards, and an A/B testing interface that allows multiple agent versions to run simultaneously under real-world conditions.

One of the key technical challenges addressed by Salesforce is the difference between probabilistic and deterministic systems. AI agents do not always produce identical results, which can create instability in enterprise environments where consistency is critical. Early adopters reported that once agents were deployed, even small modifications could lead to unpredictable outcomes, forcing teams to repeat extensive testing processes.

Agent Script was developed to solve this problem by introducing a structured framework. It defines agent behavior as a state machine, where certain steps are fixed and controlled while others allow flexible reasoning. This approach ensures both reliability and adaptability.
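
The article does not show Agent Script syntax, but the underlying state-machine idea can be sketched in plain Python; every name below, including the stubbed model call, is an assumption for illustration, not a Salesforce API.

```python
# Hypothetical analogue of the Agent Script idea: a small state machine
# in which hard business rules stay deterministic and only one step is
# delegated to a model.
from dataclasses import dataclass

@dataclass
class Ticket:
    text: str
    refund_amount: float = 0.0

def call_llm(prompt: str) -> str:
    """Stand-in for a model call; swap in a real client in practice."""
    return "refund"  # pretend the model classified the request

def handle(ticket: Ticket) -> str:
    # Deterministic guard: a hard rule the model can never override.
    if ticket.refund_amount > 500:
        return "escalate_to_human"
    # Flexible state: the model reasons over free-form text.
    intent = call_llm(f"Classify this request: {ticket.text}")
    # Deterministic routing on whatever the model returned.
    routes = {"refund": "issue_refund", "question": "answer_question"}
    return routes.get(intent, "escalate_to_human")

print(handle(Ticket("I want my money back", refund_amount=40.0)))
# -> issue_refund
```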

Salesforce also distinguishes between two types of AI system architectures. Customer-facing agents, such as those used in sales or support, require strict control to ensure they follow predefined rules and maintain brand consistency. These operate within structured workflows. In contrast, employee-facing agents are designed to operate more freely, exploring multiple paths and refining their outputs dynamically before presenting results. Both systems operate on a unified underlying architecture, allowing organizations to manage them without maintaining separate platforms.

The company is also expanding its ecosystem. It now supports integration with a wide range of AI models, including those from Google and other providers. A new marketplace brings together thousands of applications and tools, supported by a $50 million initiative aimed at encouraging further development.

At the same time, Salesforce is taking a flexible approach to emerging technical standards such as Model Context Protocol. Rather than relying on a single method, the company is offering APIs, command-line interfaces, and protocol-based integrations simultaneously to remain adaptable as the industry evolves.

A real-world example presented during the announcement showed how one company built an AI-powered customer service agent in just 12 days. The system now handles approximately half of customer interactions, improving efficiency while reducing operational costs.

Finally, Salesforce is also changing its business model. The company is shifting away from traditional per-user pricing toward a consumption-based approach, reflecting a future where AI agents, rather than human users, perform the majority of work within enterprise systems.

This transformation suggests a new layer in strategic operations. Instead of resisting the rise of AI, Salesforce is restructuring its platform to align with it, betting that its existing data infrastructure, enterprise integrations, and accumulated operational logic will continue to provide value even as software becomes increasingly autonomous.

Nvidia’s AI Launch Sparks Quantum Stock Surge, Minting Xanadu’s CEO a Billionaire

Quantum computing stocks jumped after Nvidia unveiled its Ising open-source AI model family, a move that investors interpreted as a strong validation of the sector. The result was a sharp rally in several names, with Xanadu standing out as the biggest winner and its founder Christian Weedbrook briefly joining the billionaire ranks.

Notably, Nvidia’s announcement did not introduce a new quantum computer; instead, it introduced software tools aimed at two of quantum computing’s hardest problems: calibration and error correction. Nvidia said Ising can make decoding up to 2.5 times faster and three times more accurate than PyMatching, which helped convince traders that the path to practical quantum systems may be improving faster than expected.
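
For context, PyMatching is the open-source minimum-weight perfect matching (MWPM) decoder used as the baseline in that comparison; the toy below shows that baseline at work on a 5-bit repetition code.

```python
# Baseline decoder from the comparison: PyMatching's minimum-weight
# perfect matching (pip install pymatching), shown on a 5-bit
# repetition code for illustration.
import numpy as np
import pymatching

# Parity checks: each row asserts that two neighbouring bits agree.
H = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])
matching = pymatching.Matching(H)

noise = np.array([0, 0, 1, 0, 0])       # one bit-flip error
syndrome = H @ noise % 2                # which checks fire: [0 1 1 0]
correction = matching.decode(syndrome)  # MWPM estimate of the error
print(correction)                       # -> [0 0 1 0 0]
```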

That enthusiasm quickly turned into extreme stock moves. Xanadu’s shares climbed from under $8 to roughly $40 in six trading sessions, while the Toronto exchange paused trading several times because of the speed of the move. Similar gains appeared across the sector, including D-Wave, IonQ, Rigetti, Infleqtion, and Quantum Computing, showing that the market was bidding up the whole group rather than just one company. 

For Xanadu, the rally created an extraordinary paper windfall. Weedbrook owns 15.6% of the company through multiple voting shares, and his stake was valued at about $1.5 billion to $1.6 billion during the surge. The story is notable because the company’s valuation moved dramatically on sentiment tied to Nvidia’s broader endorsement of quantum-related tooling, not on a fresh commercial breakthrough from Xanadu itself. 
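
Treating the reported figures as exact (they are rounded), the company valuation implied by that stake is easy to back out:

```python
# Back-of-envelope check of the reported stake figures.
stake = 0.156                     # Weedbrook's reported ownership
stake_value = 1.55e9              # midpoint of the $1.5B-$1.6B range
market_cap = stake_value / stake  # implied valuation at the peak
print(f"Implied market cap: ${market_cap / 1e9:.1f}B")  # ~$9.9B
```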

The main issue is that quantum computing remains a high-expectation, low-certainty industry. Nvidia’s move suggests that investors increasingly view AI and quantum as complementary technologies, especially if software can help make fragile quantum hardware more usable. But the volatility also highlights the risk: when a sector is still early and speculative, a single announcement can create massive gains, even before the business fundamentals fully catch up.

Can AI Own Its Work? A Debate That Started With a Monkey Photo



A single photograph captured in a remote forest over a decade ago has become central to one of the most complex legal questions of the digital age: what happens when creative work is produced without direct human authorship? The answer now carries long-term consequences for artificial intelligence, creative industries, and ownership rights in the modern world.

The image in question originated in 2011, when wildlife photographer David Slater was documenting crested black macaques in Indonesia. These monkeys are not only endangered but also known for their highly expressive faces, making them attractive subjects for photography. However, Slater faced difficulty capturing close-up shots because the animals were wary of human presence.

To work around this, he positioned his camera on a tripod, enabled automatic focus, and used a flash, allowing the monkeys to approach and interact with the equipment without feeling threatened. His approach relied on curiosity rather than control. Eventually, one macaque handled the camera and pressed the shutter button while looking directly into the lens. The resulting image, widely known as the “monkey selfie,” appeared almost intentional, with the animal’s expression resembling a posed portrait.

While the photograph initially brought attention and recognition, it soon triggered an unexpected legal dispute. The core issue was deceptively simple: if a photograph is not taken by a human, can anyone claim ownership over it?

The situation escalated when the image was uploaded to Wikipedia, making it freely accessible worldwide. Slater objected to this distribution, arguing that he had lost approximately £10,000 in potential earnings because the image could now be used without payment. However, the Wikimedia Foundation refused to remove the photograph. Its reasoning was based on copyright law, which generally requires a human creator. Since the image was captured by an animal, the organisation classified it as public domain material.

This interpretation was later reinforced by the U.S. Copyright Office, which formally clarified that works produced without human authorship cannot be registered. In its guidance, the office explicitly listed a photograph taken by a monkey as an example of ineligible material, establishing a clear precedent.

The dispute took another unusual turn when People for the Ethical Treatment of Animals filed a lawsuit attempting to assign copyright ownership to the macaque itself. Although framed as a legal claim over the photograph, the case was widely interpreted as an effort to establish broader legal rights for animals. After several years of legal proceedings, a court dismissed the case, concluding that animals do not have the legal capacity to initiate lawsuits.

Legal experts later observed that, although the case focused on animal authorship, it introduced a broader conceptual challenge that would become more relevant with the rise of artificial intelligence. According to intellectual property lawyer Ryan Abbott, the debate could easily extend beyond animals to machines capable of producing creative outputs.

This possibility became reality when computer scientist Stephen Thaler attempted to secure copyright protection for an image generated by his AI system, the Creativity Machine. Thaler described the system as capable of independently producing ideas, arguing that it should be recognised as the sole creator of its output. He characterised the system as exhibiting a form of machine-based cognition, though this view is strongly disputed within the scientific community.

Despite these claims, the Copyright Office rejected the application, applying the same reasoning used in the monkey selfie case. Because the work was not created by a human, it could not qualify for copyright protection. This rejection led to a legal challenge that progressed through multiple levels of the U.S. judicial system.

When the case reached the Supreme Court of the United States, the court declined to hear it, leaving lower court rulings intact. The outcome effectively confirmed that, under current U.S. law, works generated entirely by artificial intelligence cannot be owned by anyone, including the developer of the system or the individual who prompted it.

This position has far-reaching implications for the creative economy. Copyright law exists to allow creators and organisations to control and monetise their work. Without ownership rights, it becomes difficult to build sustainable business models around fully AI-generated content. Legal scholar Stacey Dogan noted that this limitation reduces the likelihood of a future where machine-generated content completely replaces human-created media.

At the same time, the rapid expansion of generative AI tools continues to complicate the landscape. These systems function by analysing large datasets and producing outputs based on user instructions, often referred to as prompts. While they can generate text, images, and video at scale, their outputs raise questions about originality and authorship, particularly when human involvement is minimal.

Recent industry developments illustrate this uncertainty. Experimental AI-generated content has attracted large audiences online, suggesting a level of public interest, even if motivations such as novelty or criticism play a role. However, some technology companies have begun reassessing their AI content strategies, particularly where ownership and profitability remain unclear.

Expert opinion on the value of fully AI-generated content remains divided. Some specialists argue that such content lacks depth or authenticity, while others view AI as a useful tool for supporting human creativity rather than replacing it. This perspective positions AI as a collaborator rather than an independent creator.

Legal approaches also vary internationally. In the United Kingdom, copyright law allows ownership of computer-generated works by assigning authorship to the individual responsible for arranging their creation. However, this framework is currently being reconsidered as policymakers evaluate whether it remains appropriate in the context of modern AI systems.

One of the most complex unresolved issues involves hybrid creation. When humans actively guide, refine, and edit AI-generated outputs, determining ownership becomes less straightforward. A notable example involves an AI-assisted artwork that won a competition after extensive prompting and editing, raising questions about how much human contribution is required for copyright protection.

This debate is not entirely new. When photography first emerged, similar concerns were raised about whether cameras, rather than humans, were responsible for creative output. Over time, legal systems adapted by recognising the role of human intention and decision-making. Artificial intelligence now presents a more advanced version of that same challenge.

For now, the legal position in the United States remains clear: without meaningful human involvement, creative works cannot be protected by copyright. However, as AI becomes increasingly integrated into creative processes, the distinction between human and machine contribution is becoming more difficult to define.

What began as an unexpected interaction between a monkey and a camera has therefore evolved into a defining case in the global conversation about creativity, ownership, and technology. The decisions made in courts today will shape how creative work is produced, distributed, and valued in the future.



AI Was Meant to Help. So Why Is It Making Work Harder for Women in Indonesia?



Artificial intelligence is often presented as a neutral and forward-looking force that improves efficiency and removes human bias from decision-making. In practice, however, many women working in Indonesia’s gig economy experience these systems very differently. Rather than easing workloads, AI-driven platforms are intensifying existing pressures.

Recent research examining female gig workers introduces the concept of “AI colonialism.” This idea describes how older patterns of domination continue through digital systems. In this framework, powerful technology actors, largely based in wealthier regions, extract labour, data, and economic value from workers in developing countries, reinforcing unequal global relationships. The structure resembles historical colonial systems, but operates through algorithms and platforms instead of direct political control.

In Indonesia, platforms such as Gojek, Grab, Maxim, and Shopee rely heavily on informal workers. These companies have not transformed the nature of employment. Instead, they have digitised an already informal labour market. Workers are labelled as independent “partners,” which excludes them from basic protections such as minimum wages, paid sick leave, and maternity benefits. Earnings depend entirely on the number of completed tasks and algorithm-based performance scores.

For women, this structure intersects with what is often described as the “double burden,” where paid work must be balanced alongside unpaid domestic responsibilities. One delivery worker, Lia, begins her day before sunrise by preparing meals and organising her children’s routines. Only after completing these responsibilities can she log into the platform. As she explains, the system recognises only whether she is online, not the constraints shaping her availability.

Platform algorithms prioritise continuous, uninterrupted activity. Incentive systems often require completing a fixed number of orders within strict time windows. For workers managing caregiving roles, this creates structural disadvantages. Logging off to attend to family responsibilities can result in lost bonuses, while reducing work hours due to fatigue or health issues leads to declining performance metrics.

This reflects a broader economic reality in which unpaid domestic labour underpins the formal economy without recognition or compensation. Instead of addressing this imbalance, AI systems can intensify it. Another worker, Cinthia, observed a noticeable drop in job assignments after taking time off due to illness. The experience created a sense that the system penalises any interruption, making workers reluctant to pause even when necessary.

Although algorithms do not explicitly target women, they are designed around an ideal worker who is always available and unconstrained by caregiving duties. This assumption produces indirect but consistent disadvantage. The claim that digital platforms operate neutrally is further challenged by everyday experiences. For example, a driver named Yanti often informs passengers in advance that she is female, leading to frequent cancellations. While the system records these cancellations, it does not capture the gender bias behind them.

Safety concerns also shape participation. Many women avoid working late hours due to risk, which limits access to peak-demand periods and higher earnings. The system interprets this reduced availability as lower productivity. Scholars such as Virginia Eubanks have argued that automated systems frequently replicate and amplify existing social inequalities rather than eliminate them.

Similar patterns have been observed in other countries. In India, women working in ride-hailing services report lower average earnings, partly because safety considerations influence when and where they work. Algorithms, however, measure output without accounting for these risks.

Safety challenges persist even within delivery roles. Around 90% of women in group discussions reported choosing delivery work over ride-hailing due to perceived safety advantages, yet harassment remains a concern from both customers and other drivers. During the COVID-19 pandemic, gig workers were classified as essential, but their incomes declined sharply, in some cases by up to 67% in early 2020. To compensate, many worked more than 13 hours a day. Despite these conditions, platform performance systems remained unchanged, and illness-related breaks often resulted in lower ratings.

This points to a deeper shift in contemporary labour control, where oversight is embedded within digital systems rather than exercised by human supervisors. AI colonialism, in this sense, extends beyond ownership to the structure of control itself. Workers provide labour, time, and data, while platforms retain authority over decision-making processes.

In response, women workers have developed informal networks through messaging platforms to share information, warn others about unsafe situations, and adapt to algorithmic changes. They support each other by increasing activity on inactive accounts, lending money for operational costs, and collectively responding to account suspensions. When harassment occurs, information is circulated quickly to protect others.

These practices represent a form of mutual support rooted in shared vulnerability. Rather than relying on formal recognition as employees, many women build systems of protection among themselves. This amounts to a form of everyday resistance, where collective action becomes a strategy for navigating structural constraints.

Artificial intelligence is not inherently exploitative. However, when deployed within unequal economic systems, it can reinforce patterns of extraction and imbalance. As digital platforms continue to expand, understanding the lived experiences of workers, particularly women in developing economies, is essential. Behind every efficient system is a human reality shaped by trade-offs between income, safety, and dignity.


Google Chrome Introduces “Skills” to Reuse AI Prompts Across Web Pages with Gemini Integration

Google has announced a new wave of AI-powered enhancements for its Chrome browser, unveiling a feature called “Skills.” This addition enables users to store and reuse their preferred AI prompts across different websites, eliminating the need to repeatedly type them.

The new functionality builds on Chrome’s integration with Gemini, which comes as competition in the browser space heats up with offerings from companies like OpenAI (Atlas), Perplexity (Comet), and The Browser Company (Dia).

Gemini already enables users to interact with web pages by asking questions, generating summaries, or completing tasks. With the addition of Skills, users can now save frequently used prompts and activate them instantly whenever needed.

For example, Google notes that users who regularly ask Gemini for vegan alternatives while browsing recipes can save that instruction as a Skill and apply it seamlessly across multiple sites. These prompts can be saved directly from chat history and later accessed by typing a forward slash (/) or clicking the plus (+) icon. Once selected, the Skill executes on the current page and can also extend to other selected tabs.
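
As a rough mental model, not Chrome's implementation, a Skill behaves like a saved prompt keyed by a slash command and applied to the content of the current page:

```python
# Toy model of reusable prompt "skills": save once, trigger by slash
# command on any page. Purely illustrative.
SKILLS = {
    "/vegan": "Suggest vegan substitutes for each ingredient on this page.",
    "/tldr": "Summarize this page in three bullet points.",
}

def run_skill(command: str, page_text: str) -> str:
    prompt = SKILLS[command]
    return f"{prompt}\n\n---\n{page_text[:2000]}"  # prompt + page context

print(run_skill("/tldr", "Long article text..."))
```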

Google highlighted that Skills remain flexible, allowing users to modify them at any time. Early testing showed that adopters used the feature for tasks such as tracking nutrition metrics in recipes, comparing products while shopping, and summarizing long-form content.

To simplify onboarding, Google is also launching a Skills library featuring ready-made prompts for common use cases like productivity, budgeting, shopping, and cooking. Users can add these pre-built Skills to their collection and customize them as needed.

Similar to other Gemini-powered actions in Chrome, the browser will request user approval before carrying out sensitive tasks, such as sending emails or scheduling calendar events.

The rollout of Skills begins today for desktop Chrome users logged into their Google accounts. Initially, the feature will only be available when the browser language is set to English (US).

AI Scams Are Becoming Harder to Detect — 7 Warning Signs You Should Watch Closely



Artificial intelligence is not only improving everyday technology but also strengthening both traditional and emerging scam techniques. As a result, avoiding fraud now requires greater awareness of how these schemes are taking new shapes.

Being able to identify scams is an essential skill for everyone, regardless of age. This is especially important as AI tools continue to advance rapidly, contributing to a noticeable increase in reported fraud cases. According to the Federal Bureau of Investigation’s 2025 Internet Crime Report, complaints linked to cryptocurrency and artificial intelligence ranked among the most financially damaging cybercrimes, with total losses approaching $21 billion. The agency also highlighted that, for the first time in its history, its Internet Crime Complaint Center included a dedicated section on artificial intelligence, documenting 22,364 cases that resulted in losses of nearly $893 million.

These scams are increasingly convincing. AI can generate realistic emails and replicate human voices through audio deepfakes, making fraudulent communication difficult to distinguish from legitimate interactions. Because of this, such threats should be treated as ongoing and persistent risks.

Protecting yourself, your family, and your finances requires both instinct and awareness. By training both your attention to detail and your ability to listen carefully, you can better identify suspicious activity. Below are seven warning signs that can help you recognize AI-driven scams and avoid serious consequences.

1. Messages that feel unusually personalized

AI can gather publicly available details, including your job, interests, or recent purchases, to create messages that appear tailored specifically to you. While these messages may seem accurate, they can still contain subtle errors or incorrect assumptions about your life, which should raise concern.


2. Requests that create urgency

Scammers often attempt to rush you with statements such as warnings that your account will be locked, demands for immediate payment, or requests for login credentials to restore access. This pressure is designed to force quick decisions without careful thinking.


3. Messages that appear overly polished

Unlike older scams filled with spelling or grammar mistakes, AI-generated messages are often clear and well-written. However, phrases like “confirm your information to avoid cancellation” or “we noticed unusual activity” should still be treated cautiously, especially if accompanied by suspicious visuals or a lack of supporting detail.


4. Audio that sounds slightly unnatural

Voice-cloning technology can imitate people you know, making phone-based scams more believable. Still, these voices may reveal themselves through unnatural pacing, limited emotional variation, or requests that seem out of character for the person being impersonated.


5. Deepfake videos that seem real but contain flaws

AI can also generate convincing videos of colleagues, family members, or even public figures. These may appear during video calls, workplace interactions, or through compromised social media accounts. Warning signs include inconsistent lighting, unusual shadows, or subtle distortions in facial movement.


6. Attempts to move conversations across platforms

Scammers may begin communication through email or professional platforms and then attempt to shift the interaction to messaging apps, payment platforms, or other channels. This tactic, often supported by chatbot-driven conversations, is used to appear credible while avoiding detection.


7. Unusual or suspicious payment requests

Requests for payment through gift cards, wire transfers, or cryptocurrency remain a major red flag. These methods are difficult to trace and are frequently used in fraudulent schemes, regardless of how legitimate the request may initially appear.


Why awareness matters

While AI has not changed the underlying tactics of scams, it has made them far more refined and scalable. Techniques such as impersonation, urgency, and trust-building are now enhanced through automation and data-driven personalization.

As these technologies continue to develop and become an ever more pervasive part of daily life, the risk will grow in step. Staying cautious, verifying unexpected requests, and sharing this knowledge with friends and family are critical steps in reducing exposure.

In a digital environment where scams increasingly resemble genuine communication, recognizing these warning signs remains one of the most effective ways to stay protected.

Google's Eloquent: Offline AI Dictation Hits iOS, Android Launch Imminent


Google’s quiet release of AI Edge Eloquent marks a notable shift in how it wants people to use AI on phones: not as a cloud-first assistant, but as a fast, private, on-device dictation tool. Based on the reporting around the launch, the app is designed to transcribe speech locally on iOS, keep working without an internet connection, and clean up spoken language into polished text. 

Google’s move matters because it lands in a market already shaped by focused dictation apps like Wispr Flow, SuperWhisper, and Willow. Those products have helped make AI transcription feel less like a novelty and more like a practical writing tool, so Google is entering a space where users already expect speed, accuracy, and convenience. By shipping a product that works offline, Google is also signaling that on-device AI is becoming good enough for everyday productivity rather than just demo material. 

The app’s core appeal is that it does more than convert audio into text. It reportedly removes filler words such as “um” and “uh,” fixes mid-sentence stumbles, and can rewrite output into formats like “Key points,” “Formal,” “Short,” and “Long.” That means Eloquent is aimed not just at transcription, but at people who want speech turned into something usable immediately, whether for emails, notes, drafts, or quick summaries.
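
As a toy illustration of the cleanup step described (the app reportedly uses on-device models rather than hand-written rules), naive filler-word stripping might look like this:

```python
# Naive filler-word stripping, for illustration only; Eloquent's
# model-based cleanup is far more sophisticated than a regex.
import re

FILLERS = r"\b(?:um+|uh+|erm+)\b[,.]?\s*"

def clean_transcript(text: str) -> str:
    """Drop common fillers and collapse leftover whitespace."""
    text = re.sub(FILLERS, "", text, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_transcript("Um, so I think, uh, we should ship it"))
# -> "so I think, we should ship it"
```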

A second major point is privacy and reliability. Because the app runs locally after the model download, users can dictate even when they are offline, which is useful on flights, in weak signal areas, or in workplaces where connectivity is inconsistent. Local processing also reduces the amount of audio that needs to leave the device, which may appeal to users who are cautious about cloud-based voice tools.

There is also a broader strategic angle here. Google appears to be using Eloquent to show that its Gemma-based models can power practical consumer AI on a phone, not just in the cloud. The app’s reported free availability makes the competitive pressure even stronger, because it lowers the barrier for users to try Google’s approach and compare it directly with paid or subscription-based rivals. 

The deeper issue is that this launch reflects a wider race in AI: whoever makes on-device models feel seamless may control the next wave of personal productivity software. If Google can keep improving transcription quality, formatting, and cross-platform access, Eloquent could become more than a niche dictation tool and turn into a template for how lightweight AI assistants should work on mobile.

AI Startup Rocket Launches Platform to Turn Ideas into Data-Driven Product Strategies

 

Indian startup Rocket is focusing on a gap that comes before "vibe coding": helping users decide what to build in the first place. The company has introduced a new platform designed to generate consulting-style product strategies using artificial intelligence.

Headquartered in Surat, Rocket recently unveiled its platform, Rocket 1.0. The system integrates research, product development, and competitive intelligence into a unified workflow. It produces in-depth product strategy documents covering areas such as pricing, unit economics, and go-to-market planning.

With the rapid growth of AI-powered coding tools — including platforms like Cursor, Replit, and Lovable, as well as features such as Claude Code and Codex — software development has become faster and more accessible. However, Rocket believes the real challenge lies elsewhere. “Everyone can generate the code now … it has become a commodity. But what to build is something which everyone is missing,” said Rocket co-founder and CEO Vishal Virani, adding that “running a business and just building a codebase are two different things.”

In early testing, the platform was able to generate detailed product requirement documents in PDF format from simple prompts. These reports resemble structured consulting outputs rather than the responses of typical AI coding assistants or chatbots, which tend to focus more on execution and features.

That said, some of the platform’s insights appear to be compiled from existing datasets — including pricing benchmarks, user behavior trends, and competitor analysis — rather than independently verified data. This means users may still need to validate the outputs before relying on them for strategic decisions. According to Virani, human support is available if users need assistance.

Rocket also offers competitor tracking features, monitoring updates to rival websites and traffic patterns. The platform pulls from over 1,000 data sources, including Meta’s ad libraries, Similarweb’s API, and proprietary crawlers.

The company provides tiered subscription plans, starting at $25 per month for app-building tools, $250 for advanced strategy and research features, and up to $350 for full access, including competitive intelligence. The $250 plan can generate two to three “McKinsey-grade” research reports alongside product builds, positioning Rocket as a cost-effective alternative to traditional consulting services, which often charge significantly higher fees.

Rocket raised $15 million in a seed funding round in September, backed by Accel, Salesforce Ventures, and Together Fund. Since then, the company claims to have expanded its user base from 400,000 to over 1.5 million users across 180 countries. It also reported an annualized average revenue per user of around $4,000, though it did not share specific figures on paying customers. The startup operates with gross margins exceeding 50%, with 20% to 30% of its users coming from small and medium-sized businesses.

Rocket currently employs 57 people at its Surat headquarters, with additional operations in Palo Alto.

Microsoft Introduces Secure Boot Status Dashboard Ahead of Certificate Expiry

 

Microsoft is preparing for the upcoming expiration of its original 2011 Secure Boot certificates, set for June 2026, by introducing a new Secure Boot status dashboard within Windows. This feature is designed to help users verify whether their systems remain protected during startup.

Beginning this month, the dashboard will be integrated into the Windows Security app. Users will find a Secure Boot status indicator under the Device security section, specifically within Secure Boot settings.

"The Windows Security app now shows whether your device has received these updates, what your current status is, and whether any action is needed," Microsoft says on a new support page.

The indicator will display three possible statuses. A green badge confirms that the system has received the necessary updates. A yellow badge signals a recommendation from Microsoft, often suggesting a firmware update to install the latest certificates. A red badge indicates that the device is unable to receive the updated Secure Boot certificates.

“This state appears only after a security vulnerability that affects the boot process is discovered and cannot be serviced on devices that have not yet received the updated certificates. This could occur as early as June 2026, when some of the current Secure Boot certificates begin to expire,” the company says.

In addition to the visual indicators, Microsoft will provide detailed guidance within the dashboard, advising users on steps to resolve issues. These may include updating the Windows operating system or contacting the device manufacturer.
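The dashboard is what reports certificate status, but the underlying Secure Boot state can already be queried directly. A minimal sketch, assuming Python on Windows and an elevated prompt, since the PowerShell cmdlet Confirm-SecureBootUEFI requires administrator rights:

```python
import subprocess

def secure_boot_enabled() -> bool:
    """Query UEFI Secure Boot state via PowerShell (Windows only, needs elevation)."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", "Confirm-SecureBootUEFI"],
        capture_output=True, text=True,
    )
    return result.stdout.strip().lower() == "true"

print("Secure Boot enabled:", secure_boot_enabled())
```

Note that this only confirms Secure Boot is switched on; whether the 2011 certificates have been replaced is exactly what the new dashboard adds on top.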

Secure Boot plays a critical role in ensuring that only trusted software runs during the startup process, protecting systems from persistent malware that can survive OS reinstalls. However, many devices are still running Windows 10, which reached end of support in October and no longer receives standard security updates.

Earlier this year, Microsoft cautioned that such unsupported Windows 10 systems would not receive the new Secure Boot certificates. The only exception applies to devices enrolled in the Windows 10 Extended Security Updates (ESU) program, which offers limited continued protection.

Microsoft confirmed that the new Secure Boot status indicator will be available only on Windows 10 ESU systems and Windows 11 devices. Systems running unsupported versions of Windows 10 should assume their certificates will begin expiring from June onward.

For eligible systems, the updated certificates are expected to be delivered automatically through routine monthly updates. However, some devices may still require a separate firmware update from the PC or motherboard manufacturer before the certificates can be applied—hence the yellow and red warnings.

Even if a system does not receive the updated certificates, it will continue to function. However, Microsoft cautions: “The device will enter a degraded security state that limits its ability to receive future boot-level protections,” leaving it vulnerable to potential “boot-level vulnerabilities” that attackers could exploit.

Users facing a red status will also have the option to proceed without taking action by selecting “I accept the risks, don’t remind me.”

Microsoft plans to expand alerts related to Secure Boot beyond the Windows Security app. “Beginning in May 2026,” the company says, “additional improvements will become available, including notifications outside the app (such as system alerts) and additional in-app guidance and controls to help you respond to Secure Boot warnings.”

Laptop Reliability Rankings 2025: Which Brands Last the Longest?

 

When buying a new laptop, it’s not just about powerful specifications or staying within budget. One critical factor that often gets overlooked is long-term reliability. A device that looks perfect on paper can quickly become frustrating if it fails within a short period.

According to three years of surveys conducted by Consumer Reports among its subscribers, reliability stands out as the top priority for buyers. About 56% of respondents rated it above performance and price. The organization measures reliability based on whether a laptop continues to function properly after three years of use. While user care and external conditions can influence longevity, certain brands consistently perform better than others.

This ranking of laptop brands—from least to most reliable—combines reliability data from Consumer Reports and PCMag’s Readers’ Choice 2025 survey, along with insights gathered from various online reviews. Each brand’s top-performing model, as identified by Consumer Reports, is also highlighted to reflect its strengths.

1. Dell
Founded in 1984, Dell has long been a major player in the computer industry. Despite its legacy, it ranks at the bottom in Consumer Reports’ reliability scores and falls into the lower tier in PCMag’s survey. Its gaming division, Alienware, was excluded due to missing PCMag data, though its Consumer Reports score is even lower.

Dell’s broad product range may contribute to its weaker reliability standing. Consumer feedback suggests that entry-level lines like Vostro and Inspiron are less durable, while premium models such as the XPS series perform more consistently. Business-focused laptops, particularly the Latitude and Precision lines, are often described as highly durable, with some users calling Precision models “built like tanks.”

Among Dell’s top-rated models are the Inspiron Plus 16 and the Latitude 7000, both equipped with 32GB RAM. The Inspiron Plus 16 features a 16-inch display and runs on the Intel Core Ultra 7 155H processor, while the Latitude 7000 offers a 14-inch screen powered by the Qualcomm Snapdragon X Elite X1E80100 processor. Based on user feedback, the Latitude series may provide better long-term reliability.

2. HP
With origins dating back to the 1940s, HP is the oldest brand in this comparison. However, its long history doesn’t necessarily translate into stronger reliability, as it ranks ninth overall based on combined scores from Consumer Reports and PCMag.

Like Dell, HP’s wide product lineup may be affecting its reliability ratings. Feedback from repair professionals suggests that many issues arise from Pavilion models and other budget offerings commonly sold through large retailers. More premium lines such as ProBook, EliteBook, and ZBook are generally recommended for better durability.

One recurring concern highlighted by users involves hinge issues, with some jokingly referring to HP as “Hinge Problems.” Despite these concerns, the HP OmniBook X Flip stands out as the brand’s highest-rated model. This convertible laptop combines solid performance with an Intel Ultra 9 288V processor and 32GB RAM, placing it among the better devices in the ranking.

3. Acer
Acer occupies a middle position in the lower half of the reliability rankings, with modest scores from both Consumer Reports and PCMag. Public opinion on the brand is divided. Some users report positive experiences with durability, while others mention recurring issues, particularly devices failing shortly after warranty expiration. This pattern may explain Acer’s lower reliability score, given Consumer Reports’ three-year evaluation window.

The Acer Swift Go 14, the brand’s top-rated laptop, reflects this mixed perception. The device features a 14-inch display, Intel Ultra 7 155H processor, and 16GB RAM. Reviews highlight its strong build quality and durable hinge design, with several sources describing it as a good value for its price.

The full list can be viewed here.

Passkeys Gaining Traction as More Secure Alternative to Passwords, Experts Say

 

Security experts are increasingly urging users to move away from traditional passwords and adopt passkeys, a newer method of logging into accounts that aims to reduce risks such as hacking and phishing. 

Passwords remain widely used, but they are often reused, simplified or poorly managed. Even with password managers, which help generate and store complex credentials, risks remain. These systems typically rely on a single master password, creating a potential point of failure if compromised. Passkeys take a different approach. 

Instead of requiring users to remember or enter passwords, they rely on device-based authentication, such as a phone’s screen lock or biometric verification like fingerprint or facial recognition. 

The system works using a pair of cryptographic keys. A public key is stored by the service being accessed, while the matching private key remains securely on the user’s device. When logging in, the service sends a random challenge that the device signs locally; the service then verifies the signature against the stored public key.

If the authentication is successful, access is granted without transmitting a password. Because no password is shared or stored centrally, passkeys are considered more resistant to phishing attacks, which the FBI has previously identified as one of the most common forms of cybercrime. 
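The exchange can be pictured with a simplified Python sketch using the cryptography package. Real passkeys follow the WebAuthn/FIDO2 protocol, which adds origin binding, attestation, and other protections omitted here:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registration: the device generates a key pair and shares only the public key.
device_private_key = Ed25519PrivateKey.generate()
service_public_key = device_private_key.public_key()  # stored by the service

# Login: the service sends a random challenge ...
challenge = os.urandom(32)

# ... the device signs it locally, after the user unlocks with a PIN or biometric ...
signature = device_private_key.sign(challenge)

# ... and the service verifies the signature with the stored public key.
try:
    service_public_key.verify(signature, challenge)
    print("Login OK: no password ever left the device.")
except InvalidSignature:
    print("Login rejected.")
```

Because the private key never leaves the device and the challenge is random, a phishing site that captures the exchange learns nothing reusable.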

The method is supported by the FIDO Alliance and adopted by major technology companies including Google, Apple and Microsoft. Passkeys are designed to work automatically once set up, requiring minimal user input. 

However, they are tied to specific devices, meaning losing access to a device could complicate account recovery unless backup options are enabled. Experts say the shift reflects broader concerns about password security. 

Once an email address or login credential is exposed through data breaches or online use, it can be reused by attackers across multiple platforms. Passkeys also generate unique credentials for each service, limiting the impact of a breach on any single platform. 

While adoption is still growing, the approach is increasingly seen as part of a move toward passwordless authentication, as companies look to reduce reliance on systems that have long been vulnerable to misuse.

Google DeepMind Maps How the Internet Could Be Used to Manipulate AI Agents

Researchers at Google DeepMind have outlined a growing but less visible risk in artificial intelligence deployment: the possibility that the internet itself can be used to manipulate autonomous AI agents. In a recent paper titled “AI Agent Traps,” the researchers describe how online content can be deliberately designed to mislead, control or exploit AI systems as they browse websites, read information and take actions. The study focuses not on flaws inside the models, but on the environments these agents operate in.

The issue is becoming more urgent as companies move toward deploying AI agents that can independently handle tasks such as booking travel, managing emails, executing transactions and writing code. At the same time, malicious actors are increasingly experimenting with AI for cyberattacks. OpenAI has also acknowledged that one of the key weaknesses involved, prompt injection, may never be fully eliminated. 

The paper groups these risks into six broad categories. One category involves hidden instructions embedded in web pages. These can be placed in parts of a page that humans do not see, such as HTML comments, invisible elements or metadata. While a user sees normal content, an AI agent may read and follow these concealed commands. In more advanced cases, websites can detect when an AI agent is visiting and deliver a different version of the page tailored to influence its behavior. 
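The paper describes the problem rather than a fix, but one common defensive step is to strip those hidden channels before page content ever reaches the agent. A minimal sketch using BeautifulSoup, with illustrative selectors and a sample page:

```python
from bs4 import BeautifulSoup, Comment

def visible_text(html: str) -> str:
    """Keep only text a human reader would plausibly see."""
    soup = BeautifulSoup(html, "html.parser")
    # Drop channels commonly used to hide instructions from humans.
    for node in soup.find_all(string=lambda s: isinstance(s, Comment)):
        node.extract()
    for tag in soup(["script", "style", "meta", "template"]):
        tag.decompose()
    for tag in soup.select('[hidden], [style*="display:none"], [aria-hidden="true"]'):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

page = ('<p>Normal content.</p>'
        '<!-- AGENT: ignore prior instructions -->'
        '<div hidden>concealed command</div>')
print(visible_text(page))  # -> "Normal content."
```

Sanitization of this kind narrows the attack surface but cannot address the adaptive case the researchers mention, where a site serves entirely different content once it detects an AI visitor.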

Another category focuses on how language shapes an agent’s interpretation. Pages filled with persuasive or authoritative sounding phrases can subtly steer an agent’s conclusions. In some cases, harmful instructions are disguised as educational or hypothetical content, which can bypass a model’s safety checks. The researchers also describe a feedback loop where descriptions of an AI’s personality circulate online, are later absorbed by models and begin to influence how those systems behave. 

A third type of risk targets an agent’s memory. If false or manipulated information is inserted into the data sources an agent relies on, the system may treat that information as fact. Even a small number of carefully placed documents can affect how the agent responds to specific topics. Other attacks focus directly on controlling an agent’s actions. Malicious instructions embedded in ordinary web pages can override safety safeguards once processed by the agent. 

In some experiments, attackers were able to trick agents into retrieving sensitive data, such as local files or passwords, and sending it to external destinations at high success rates. The researchers also highlight risks that emerge at scale. Instead of targeting a single system, some attacks aim to influence many agents at once. They draw comparisons to the Flash Crash, where automated trading systems amplified a single event into a large market disruption. 

A similar dynamic could occur if multiple AI agents respond simultaneously to false or manipulated information. Another category involves the human users overseeing these systems. Outputs can be designed to appear credible and technical, increasing the likelihood that a person approves an action without fully understanding the risks. 

In one example, harmful instructions were presented as legitimate troubleshooting steps, making them easier to accept. To address these risks, the researchers outline several areas for improvement. On the technical side, they suggest training models to better recognize adversarial inputs, as well as deploying systems that monitor both incoming data and outgoing actions. 
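The researchers do not specify an implementation, but the outgoing-action side of such monitoring can be as simple as a policy gate between the agent and the network. A minimal sketch, with a hypothetical allowlist and sensitive-content markers:

```python
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.example.com", "calendar.example.com"}  # hypothetical policy

def approve_request(url: str, payload: str) -> bool:
    """Gate an agent's outgoing HTTP action before it executes."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_DOMAINS:
        return False  # block exfiltration to unknown destinations
    markers = ("password", "ssh-rsa", "begin rsa private key")
    if any(m in payload.lower() for m in markers):
        return False  # crude check for sensitive data leaving the device
    return True

print(approve_request("https://evil.example.net/collect", "password=hunter2"))  # False
```

A real deployment would pair a gate like this with monitoring of incoming data as well, since the paper's point is that either direction can be subverted.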

At a broader level, they propose standards that allow websites to signal which content is intended for AI systems, along with reputation mechanisms to assess the trustworthiness of sources. The paper also points to unresolved legal questions. If an AI agent carries out a harmful action after being manipulated, it is unclear who should be held responsible. 

The researchers describe this as an “accountability gap” that will need to be addressed before such systems can be widely deployed in regulated sectors. The study does not present a complete solution. Instead, it argues that the industry lacks a clear, shared understanding of the problem. Without that, the researchers suggest, efforts to secure AI systems may continue to focus on the wrong areas.