
Maryland’s New Grocery Pricing Rules Leave Critics Unconvinced


 

Even as algorithmic pricing systems gain acceptance across today's retail ecosystem, Maryland has moved to establish the first statewide legal ban on grocery pricing that incorporates consumer surveillance data.

By signing House Bill 895 into law on April 28, 2026, Governor Wes Moore established a regulatory framework restricting how food retailers and third-party delivery platforms may use personal data to influence what consumers pay.

Formally titled the Protection From Predatory Pricing Act, the legislation specifically addresses artificial intelligence-driven pricing engines and behavioral analytics that adjust prices according to factors such as purchase history, browser activity, geographic location, and demographic traits.

The law, framed by state officials as a consumer protection measure against data-driven profit optimization, prohibits large food retailers, qualified delivery service providers, and others operating stores over 15,000 square feet from imposing higher prices on consumers based on individual data signals. Supporters see the measure as a significant step in responding to the increasing commercialization of consumer data, but critics claim that its limited scope and enforcement structure may significantly erode its practical significance.

Policymakers and industry stakeholders across the United States are closely examining the Maryland approach as a possible template for future pricing regulation. The debate centers on the growing use of surveillance-based dynamic pricing systems that continuously adjust product costs by analyzing a consumer's digital footprint, purchasing patterns, geographic location, and demographics. Under these models, two shoppers buying the same grocery item minutes apart may see entirely different prices, with the outcome determined by algorithms that estimate each shopper's perceived willingness to pay.

Consumer advocates and competition analysts contend that such practices shift pricing strategy away from traditional market factors and toward individualized revenue extraction, enabling businesses to identify and charge the highest amount a specific customer is statistically likely to accept.
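
To make the mechanism concrete, the sketch below shows the general shape of such a pricing engine in toy form. Every signal name, weight, and threshold here is an invented assumption for illustration; it does not describe any actual retailer's system.

```python
# Hypothetical illustration of surveillance-based price personalization.
# The signals, weights, and thresholds below are invented for clarity;
# they do not describe any real retailer's pricing engine.

BASE_PRICE = 4.99  # shelf price of a grocery item, in dollars

def personalized_price(shopper: dict) -> float:
    """Adjust the base price using behavioral signals tied to one shopper."""
    multiplier = 1.0
    # Frequent repeat purchases suggest low price sensitivity.
    if shopper.get("repeat_purchases", 0) > 10:
        multiplier += 0.08
    # Browsing from a high-income ZIP code (a demographic proxy).
    if shopper.get("zip_income_percentile", 50) > 80:
        multiplier += 0.05
    # Shopping late at night, when comparison shopping is less likely.
    if shopper.get("hour_of_day", 12) >= 22:
        multiplier += 0.03
    return round(BASE_PRICE * multiplier, 2)

# Two shoppers, minutes apart, can see different prices for the same item.
print(personalized_price({"repeat_purchases": 14, "zip_income_percentile": 85}))  # 5.64
print(personalized_price({"repeat_purchases": 2, "zip_income_percentile": 40}))   # 4.99
```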

Although Maryland's legislation is tailored specifically to the grocery sector, federal regulators such as the Federal Trade Commission have previously identified similar pricing mechanisms across retail categories including apparel, cosmetics, home improvement products, and consumer goods.

Several advocacy groups argue that the stakes are even higher in food retail, where pricing volatility directly affects household affordability and access to essentials. Following committee-level debates over enforcement language and consumer protection standards, the legislation gained momentum quickly, winning Senate approval on March 23, 2026, and final House concurrence after several weeks of sustained industry lobbying.

With the signing of HB 895 on April 28, Maryland became the first state to prohibit discriminatory surveillance-driven grocery pricing practices. As the state's Attorney General prepares interpretive guidance later this summer, retailers and third-party delivery platforms have a five-month window to come into compliance with the statute, which takes effect on October 1, 2026.

While the legislation received broad bipartisan support, the accelerated legislative process left unresolved compliance and evidentiary questions that industry stakeholders are now seeking to clarify. Enforcement authority is delegated primarily to the Maryland Consumer Protection Division and the Attorney General, and violations can be prosecuted as unfair and deceptive trade practices subject to civil penalties of up to $10,000 per violation, with fines doubled for repeat offenses.

The law also exposes individuals to misdemeanor penalties, including imprisonment for up to a year and a fine of up to $1,000. It further grants businesses accused of violations 45 days to remedy the alleged misconduct before formal enforcement, a cure period critics claim could substantially weaken its deterrent effect.

Because private rights of action are limited to narrow labor-related circumstances, early legal interpretation is expected to be shaped primarily by state-led enforcement actions that determine whether algorithmic pricing decisions draw on protected categories of personal information.

Regulatory specialists anticipate that the forthcoming guidance will clarify the evidentiary standards needed to establish data-driven pricing manipulation, particularly where opaque artificial intelligence systems and automated pricing engines are involved. For retailers with mature compliance programs, financial penalties are likely to remain manageable. However, legal observers note that reputational damage, regulatory scrutiny, and the erosion of consumer trust may ultimately prove more consequential than statutory fines.

Labor unions, consumer advocacy organizations, and digital rights analysts have intensified the debate over Maryland's surveillance pricing law, arguing that the legislation contains significant operational gaps retailers could exploit through sophisticated pricing strategies. The United Food and Commercial Workers International Union has already launched public awareness campaigns, including a 30-second advertisement depicting algorithmic pricing systems as a means of reshaping grocery shopping around predictions of consumer behavior.

Advocacy groups maintain that although the statute sets significant legal precedent, its exemptions and enforcement structure may ultimately permit many forms of data-driven price discrimination to continue. Before the bill was enacted, Consumer Reports researchers warned lawmakers about its weaknesses, arguing that it lacks a clear baseline price standard against which discriminatory pricing could be measured.

Policy analysts suggest this omission means nearly any fluctuating price could be characterized as a promotional discount rather than a targeted surcharge. Criticism has also focused on the law's narrow restriction on individualized pricing, which leaves room for hyper-segmented models that sort consumers into highly specific groups based on demographics or behavioral characteristics. Consumer advocates increasingly agree that pricing strategies targeting narrowly defined groups, such as elderly individuals living alone in limited retail markets, can produce outcomes similar to directly targeting individual consumers.

The broad exemptions granted to loyalty programs, membership pricing structures, subscription-based purchases, and recurring service models have also drawn criticism for giving retailers alternative mechanisms to deploy surveillance-based pricing without technically violating the law.

Maryland's legislation has sparked widespread national interest, with at least a dozen states, including New York, New Jersey, and Illinois, considering similar restrictions on algorithmic price personalization. Consumer rights advocates describe the Maryland experience as an early regulatory stress test that may guide how future state legislatures address the intersection of artificial intelligence, behavioral analytics, and retail pricing governance.

Some critics of the current framework, such as consumer advocate Oyefeso, contend that it risks legitimizing more extensive surveillance-based pricing by signaling to retailers that some forms of algorithmic personalization remain legal. Supporters of stronger reforms, however, believe the legislation may be revisited in subsequent sessions as lawmakers grapple with the practical realities of enforcing transparency and accountability in increasingly opaque AI-driven pricing environments.

Maryland's move to regulate surveillance pricing marks a significant shift in the broader debate over how artificial intelligence, consumer data, and algorithmic commerce should be governed in essential retail markets. Critics argue that the law's exemptions, cure periods, and enforcement limitations may blunt its immediate effectiveness; even so, the legislation has already set a national benchmark by forcing policymakers, retailers, and technology companies to confront the ethical and regulatory implications of data-driven price personalization.

As more states consider similar measures and public scrutiny of algorithmic pricing increases, Maryland's framework may serve as both a cautionary example and a foundation for future consumer protection policy.

For a growing number of grocery retailers and delivery platforms, the message is that pricing systems built on behavioral analytics and artificial intelligence will no longer escape regulatory oversight, particularly where affordability, transparency, and public trust are at stake.

Experts Say ‘Ghost Tapping’ Payment Scams Are Uncommon, But Consumers Should Still Stay Alert

As contactless payment systems become increasingly common at stores, public events, and seasonal markets, cybersecurity and payment security experts are reminding consumers to remain aware of how digital transactions work and to regularly monitor their financial activity. The warning follows growing discussions around so-called “ghost tapping” scams, a term used to describe situations where a payment could allegedly be processed through a smartphone’s tap-to-pay feature without the owner intentionally authorizing the transaction.

Despite online concern surrounding the issue, consumer protection specialists say incidents involving “ghost tapping” remain highly uncommon. Erin McGovern, a consumer protection official who has been monitoring complaints linked to the scam, said her organization has received fewer than 10 reports connected to these cases so far. However, she cautioned that risks associated with payment fraud may become more noticeable during busy shopping periods such as holiday markets, craft fairs, and seasonal events where large numbers of people rely on mobile payment systems for convenience.

At these public events, many vendors use portable payment terminals that allow customers to quickly complete purchases using smartphones or digital wallets instead of physical cash or bank cards. McGovern explained that while the speed and convenience of tap-to-pay technology make shopping easier, consumers should still remain careful about confirming the exact amount being charged before approving any transaction. She noted that shoppers sometimes become distracted in crowded environments, making it easier to overlook suspicious activity or incorrect payment totals.

The discussion around “ghost tapping” has raised concerns online because many consumers are unfamiliar with the technical limitations of contactless payment systems. Security specialists explain that tap-to-pay technology operates through Near Field Communication, commonly known as NFC. This wireless communication technology allows devices such as smartphones, smartwatches, and payment terminals to exchange encrypted payment information when placed extremely close together.

According to payment security experts, NFC technology only functions across a very short range, typically four centimeters or less. Michael Jabbara, Senior Vice President and Head of Payment Ecosystem Risk and Control at Visa, explained that the required distance is approximately the size of a small paper clip. Because of this limitation, an individual attempting to secretly trigger a payment would need to move unusually close to another person’s phone or pocket.

Jabbara stated that most people would naturally notice if someone entered their personal space to that extent. For that reason, experts say it would be highly difficult for a scammer to perform an unauthorized tap-to-pay transaction without drawing attention. While researchers acknowledge that such activity may be technically possible under certain conditions, they emphasize that it would be extremely unusual for it to happen without the victim becoming aware of suspicious behavior.

Still, cybersecurity professionals say the conversation surrounding “ghost tapping” highlights a broader and more realistic concern: many consumers fail to regularly review their banking activity or payment notifications. According to Jabbara, fraudsters often depend on victims ignoring account activity until the end of the month or waiting several weeks before reviewing statements. This delay can allow unauthorized purchases to remain undetected long enough for scammers to continue exploiting stolen payment information.

Financial security experts recommend reviewing banking applications, credit card activity, and digital wallet transactions frequently instead of waiting until a dispute becomes necessary. Early detection of suspicious purchases significantly increases the chances of stopping additional fraudulent activity and recovering lost funds.

Consumer protection authorities also note that individuals who believe they were targeted by payment fraud can dispute unauthorized charges directly with their bank or credit card provider. In some cases, victims may also submit formal complaints to their local attorney general’s office or consumer protection agencies for further investigation.

However, specialists say prevention remains the most effective defense against digital payment scams. One of the strongest recommendations from payment security experts is enabling instant transaction alerts through banking and credit card applications. Many financial institutions already use automated fraud-detection systems that analyze unusual spending behavior and risk patterns before approving transactions. Even so, transaction alerts provide another important layer of protection by notifying users immediately whenever money is spent through their account.

These notifications can help consumers quickly identify purchases linked to unfamiliar merchant names, unexpected locations, or payment amounts they did not approve. Experts say immediate awareness often prevents fraud from escalating into larger financial losses.
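
As a rough illustration of what a consumer-side alert rule can look like, the toy sketch below flags any charge far outside a user's recent spending pattern. The threshold and statistics are illustrative assumptions, not how any particular bank's fraud model works.

```python
import statistics

# Illustrative only: a toy alert rule, not any bank's actual fraud model.
recent_charges = [12.50, 48.20, 9.99, 31.75, 22.00, 15.40]  # sample history

def should_alert(amount: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a charge that sits far outside the user's recent spending."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return stdev > 0 and abs(amount - mean) / stdev > z_threshold

print(should_alert(19.99, recent_charges))   # False: in line with habits
print(should_alert(480.00, recent_charges))  # True: worth an instant alert
```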

Another important safety measure is always requesting a receipt after making a purchase. Receipts serve as proof of payment and can become important evidence if consumers later need to challenge suspicious charges with their bank or payment provider. McGovern warned that vendors refusing to provide receipts or claiming that their payment system is suddenly malfunctioning could represent a potential warning sign of fraudulent behavior.

Cybersecurity analysts additionally point out that modern digital wallet systems, including services such as Apple Pay and Google Pay, already contain multiple layers of security protection. These systems rely on technologies such as tokenization and encryption, which help prevent actual card numbers from being directly exposed during transactions. Instead of transmitting sensitive banking details, digital wallets generate encrypted payment tokens designed to reduce the likelihood of financial data theft.
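
The toy sketch below illustrates the general idea of tokenization in heavily simplified form: the merchant only ever sees a random token, while the mapping back to the real card number stays inside the payment network's vault. It is a conceptual sketch, not how Apple Pay, Google Pay, or EMV token services are actually implemented.

```python
import secrets

# Conceptual toy model of payment tokenization; real EMV token services
# (and wallets like Apple Pay) are far more involved than this sketch.

class TokenVault:
    """Maps one-time tokens to real card numbers, held by the payment network."""
    def __init__(self):
        self._vault = {}

    def issue_token(self, card_number: str) -> str:
        token = secrets.token_hex(8)      # random value with no card data in it
        self._vault[token] = card_number  # mapping never leaves the vault
        return token

    def redeem(self, token: str) -> str | None:
        return self._vault.pop(token, None)  # one-time use

vault = TokenVault()
token = vault.issue_token("4111111111111111")
print(token)                # the merchant terminal only ever sees this
print(vault.redeem(token))  # only the network can recover the real number
```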

Although security protections built into modern payment platforms have substantially reduced many traditional forms of card fraud, experts caution that scammers continuously adapt their tactics as digital payment technology evolves. For that reason, cybersecurity professionals stress that awareness, regular account monitoring, transaction alerts, and cautious payment habits remain essential safeguards for consumers using contactless payment systems.

AI Deepfake Scam Changes Aadhaar Mobile Without OTP

 

AI-enabled fraudsters are now using deepfake tools to change Aadhaar details, such as the mobile number linked to an account, without victims noticing, enabling identity theft and loan fraud.

In Ahmedabad, cybercrime investigators uncovered a racket that quietly replaced victims’ Aadhaar-linked mobile numbers and then used those new numbers to intercept OTPs and take control of digital services, including DigiLocker and banking apps. The gang reportedly collected Aadhaar numbers, photographs and other personal data from leaks and social media, then used AI software to turn still photos into short “blink” videos that mimic liveness checks and fool verification systems. 

Once the fraudsters changed the registered mobile number, they could receive OTPs and update KYC details, effectively hijacking victims’ digital identities and applying for loans or accessing accounts in their names. Police say the operation was organised with distinct roles: some members sourced data and photos, others used Aadhaar update kits—often through Common Service Centres (CSCs)—to make unauthorised changes, and specialists created deepfake clips to pass biometric checks.

Authorities arrested several suspects after a businessman reported that his Aadhaar-linked number was altered without any OTP or call alerts, revealing how smoothly the criminals combined social engineering, physical update kits, and AI manipulation to bypass safeguards. Reports indicate the attackers exploited weaknesses in offline update workflows and gaps in liveness-detection systems that still accept AI-generated motion as genuine.

Safety recommendations 

To protect yourself, regularly verify the mobile number linked to your Aadhaar and lock your biometrics using official mAadhaar or UIDAI services when not in use. Monitor DigiLocker and bank accounts for unexpected changes and set up transaction alerts with your bank; if you spot unusual activity, report it immediately to local cybercrime units or UIDAI’s helplines. Avoid uploading Aadhaar photos or documents on unfamiliar platforms and be cautious about sharing personal information on social media, which criminals can reuse to create realistic deepfakes. 

Longer-term fixes will require stricter controls around Aadhaar update kits at CSCs, better audit trails for demographic changes, and improved liveness-detection algorithms that can distinguish AI-generated clips from real facial movement. Experts and regulators also urge faster data-breach notification rules and tighter controls on access to identity databases so criminals cannot easily assemble the building blocks for such attacks. Until these systemic changes arrive, vigilance, biometric locks, and immediate reporting remain the best defenses for citizens.
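
One hardening approach often discussed for this kind of attack is challenge-response liveness, where the verifier asks for a randomly chosen action at check time so a pre-generated blink clip cannot anticipate the prompt. The sketch below shows only that control flow, with the video analysis left as a stub; it is an illustrative assumption, not UIDAI's actual verification pipeline.

```python
import secrets

# Illustrative control flow for challenge-response liveness checking.
# The video-analysis step is a stub; this is not UIDAI's actual pipeline.

CHALLENGES = ["turn head left", "turn head right", "open mouth", "look up"]

def detected_action(video_frames, expected: str) -> bool:
    """Stub: a real system would analyze frames for the requested motion."""
    raise NotImplementedError("plug in a face/motion analysis model here")

def liveness_check(capture_video) -> bool:
    # Pick the challenge only at verification time, so an attacker cannot
    # pre-generate a matching deepfake clip from a stolen photo.
    challenge = secrets.choice(CHALLENGES)
    frames = capture_video(prompt=challenge)  # record the response live
    return detected_action(frames, challenge)
```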

AI Chatbot Training Raises Growing Privacy and Data Security Concerns

 

Most conversations with AI chatbots carry more behind a simple reply than users realize. While providing answers, some firms quietly collect those exchanges to refine their machine learning models, so personal thoughts, job-related facts, or private topics can slip into the data pools shaping tomorrow's algorithms. Digital privacy experts note that people rarely notice how freely they share in routine chatbot conversations. Most chatbots are built on what experts call a large language model.

Through exposure to massive volumes of text, pulled from websites, online discussions, video transcripts, published works, and similar open resources, these models learn to spot patterns, suggest fitting answers, and produce dialogue resembling natural speech. As their training material expands, so does their ability to handle complex questions and produce thorough outputs. Wider input often means smoother interactions.

Still, curated training data is not the only thing that fills these models. Input from people using the apps now feeds just as much raw material to the tech firms building artificial intelligence. Each message entered into a conversational program may later be saved, studied, and applied to sharpen how future versions respond. Often that process runs by default, pausing only if someone actively adjusts their preferences or opts out when given the chance. Worries about digital privacy keep rising.

Talking to artificial intelligence systems often means sharing intimate details: medical issues, money problems, mental health, job conflicts, legal questions, or relationship secrets. Even though firms say data is stripped of identifying information before being used in machine learning, skeptics point out that users must rely on assurances they cannot personally verify.

Data considered anonymous today might not stay that way. Researchers who study system safety point out that new tools or pattern-matching techniques could later link disguised inputs back to real people, so conversations about personal topics held inside AI platforms can pose exposure risks years after they happen. Most jobs now involve some form of digital tool interaction.

As staff turn to AI assistants for tasks like interpreting files, generating scripts, organizing data tables, composing summaries, or troubleshooting, risks grow quietly. Information meant to stay internal, such as sensitive project notes, client histories, budget figures, proprietary program logic, compliance paperwork, or strategic plans, can slip out without warning. Once typed into an assistant interface, those fragments may linger on remote servers and later shape how the system responds to other users.

One concern among privacy experts involves legal exposure for firms in tightly regulated sectors. When companies send sensitive details, such as internal strategies or customer records, to artificial intelligence tools without caution, problems may follow: breached confidentiality duties, or unwanted attention from oversight authorities. These exposures stem not from malice but from routine actions taken too quickly.

As reliance on AI helpers keeps rising, people and companies must reconsider what details they hand over to chatbots. The speed of a helpful automated answer tends to crowd out careful thinking. Still, specialists insist that understanding how these models are trained matters greatly, especially for shielding private data and corporate secrets amid expanding artificial intelligence use.
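
One practical mitigation some teams adopt is stripping obvious identifiers from text before it ever reaches an external assistant. Below is a minimal, regex-based sketch of that idea; the patterns are illustrative assumptions and would miss many real-world identifier formats.

```python
import re

# Minimal illustrative redactor; these patterns are examples only and
# will miss many real-world identifier formats.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)\+?\d[\d\s().-]{8,}\d\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers before text is sent to an external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email client jane.doe@example.com at +1 415 555 0133 about the invoice."
print(redact(prompt))
# Email client [EMAIL] at [PHONE] about the invoice.
```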

22-Year-Old Developer Reverse Engineered Claude Mythos Code, Tech Industry Shocked

 


Earlier this year, AI giant Anthropic unveiled a powerful new model called Claude Mythos, creating a storm in Silicon Valley and the wider tech industry. The general-purpose model could find software bugs that no human knew existed.

About Claude Mythos


But Anthropic did not release Mythos to the world; it offered the model only to cybersecurity experts at large organizations that build or operate critical software infrastructure, asking them to find and patch flaws before Anthropic released it commercially to the public.

Within just two weeks, however, a 22-year-old developer named Kye Gomez inferred the core design choices that made Claude Mythos advanced and published OpenMythos, an open project that anticipates Anthropic's breakthrough. Gomez's code sent shockwaves through the AI research community.

If genuine, the incident has serious implications. If a self-taught developer can reverse engineer a billion-dollar AI firm's core infrastructure innovation in a matter of days, what could well-resourced threat actors with malicious intent accomplish? The case for keeping AI architectures proprietary would quickly lose its force.

About OpenMythos


OpenMythos lets developers run and train effective variants of these models on laptops, which also calls into question the long-term dependence on enormous data centers that strain surrounding environments and communities.

Boon or curse?


For now, most organizations cannot simply obtain the AI secrets that only big tech companies such as OpenAI, Anthropic, or Google control.

But what if users and small teams around the world can also reverse engineer the systems of the biggest AI companies? Maintaining a safe technology order would become difficult, as advanced capabilities would sprout everywhere and be hard to contain.

As for the developer: Gomez is not your typical ML engineer. He started coding as a kid, left school early, never attended college, and built his reputation through his code.

Why OpenMythos


OpenMythos is built on Gomez's hypothesis that Claude Mythos uses a unique large language model (LLM) design that has been under development since 2022 and proved reliable in at-scale training early this year. How is OpenMythos different from Claude Mythos?

Instead of putting neural network layers to give models more depth, experts advised looping data repetitively via smaller packets. This gave the model depth in due time.

Workplace Apps May Be Selling Employee Data Without Consent, Study Warns

 

A growing number of workplace applications are collecting vast amounts of employee data and, in many cases, sharing or selling that information to third-party companies without workers’ knowledge or permission, according to a recent analysis by privacy-focused tech company Incogni.

The company, which specializes in helping users locate and remove personal information from online databases, examined several employer-provided tools and widely used workplace communication platforms. The findings revealed how deeply integrated data collection has become in modern work environments, raising fresh concerns about employee privacy and cybersecurity.

“Collectively, these apps account for over 12.5 billion downloads on Google Play alone,” the Incogni post on the findings said. “On average, workplace apps collect around 19 data points and share approximately 2 data types [per user]. The three Google and Microsoft apps (Gmail, Google Meet, and Microsoft Teams) cluster at the top of the collection spectrum, each gathering 21–26 data types.”

The report highlighted that common communication platforms such as Gmail, Zoom, and Microsoft Teams often gather extensive user information. However, unlike consumer-focused platforms that sometimes provide opt-out settings, many workplace-mandated tools do not offer employees the ability to refuse data collection.

According to Incogni, productivity tracking and monitoring applications are especially aggressive in sharing information with outside organizations. Beyond standard details such as email addresses, location data, contacts, and app activity, some applications may also collect sensitive financial or health-related information.

The report identified Notion as one of the most data-sharing-intensive platforms reviewed. Using the app as an example, Incogni stated that it “shares the most data with third parties, distributing 8 distinct data types to third parties—including email addresses, names, user IDs, device or other IDs, and app interactions.”

Privacy experts warn that this growing exchange of employee data creates significant risks. Once personal information is transferred to multiple external entities, workers may lose visibility and control over how their data is being used. In addition, broader distribution increases exposure to cyberattacks and data breaches, incidents that platforms like Slack and Zoom have previously experienced.

“People tend to think of workplace apps as safe tools, but they don’t exist in isolation,” Incogni CEO Darius Belejevas told enterprise technology publication No Jitter. “A lot of them are part of much larger data ecosystems. Once information is collected, especially if it’s shared with third parties, it can travel much further than users expect.”

Experts suggest employees can lower some of these risks by limiting personal activity on workplace communication platforms and avoiding the use of personal devices for professional work whenever possible.

At the same time, businesses are being encouraged to prioritize stricter privacy protections when selecting workplace software. Organizations may benefit from requiring vendors to reduce unnecessary data collection and restrict third-party sharing practices before adopting enterprise tools.

“Workplace applications that access and share employee information can pose significant security and privacy risks for organizations,” Sarah McBride told No Jitter. “These risks arise from the sensitive nature of the data involved, the potential for misuse, and vulnerabilities in the applications themselves.”
