Modern workplaces are beginning to track more than just employee hours or tasks. Today, many employers are collecting very personal information about workers' bodies and behaviors. This includes data like fingerprints, eye scans, heart rates, sleeping patterns, and even the way someone walks or types. All of this is made possible by tools like wearable devices, security cameras, and AI-powered monitoring systems.
The reasons companies use these methods vary. Some want to increase workplace safety. Others hope to improve employee health or earn discounts from insurance providers. Many believe that collecting this kind of data boosts productivity and efficiency. At first glance, these goals might sound useful. But there are real risks to both workers and companies that many overlook.
New research shows that being watched in such personal ways can lead to fear and discomfort. Employees may feel anxious or unsure about their future at the company. They worry their job might be at risk if the data is misunderstood or misused. This sense of insecurity can impact mental health, lower job satisfaction, and make people less motivated to perform well.
There have already been legal consequences. In one major case, a railway company had to pay millions to settle a lawsuit after workers claimed their fingerprints were collected without consent. Other large companies have also faced similar claims. The common issue in these cases is the lack of clear communication and proper approval from employees.
Even when health programs are framed as helpful, they can backfire. For example, some workers are offered lower health insurance costs if they participate in screenings or share fitness data. But not everyone feels comfortable handing over private health details. Some feel pressured to agree just to avoid being judged or left out. In certain cases, those who chose not to participate were penalized. One university faced a lawsuit for this and later agreed to stop the program after public backlash.
Monitoring employees’ behavior can also affect how they work. For instance, in one warehouse, cameras were installed to track walking routes and improve safety. However, workers felt watched and lost the freedom to help each other or move around in ways that felt natural. Instead of making the workplace better, the system made workers feel less trusted.
Laws are slowly catching up, but in many places, current rules don’t fully protect workers from this level of personal tracking. Just because something is technically legal does not mean it is ethical or wise.
Before collecting sensitive data, companies must ask a simple but powerful question: is this really necessary? If the benefits only go to the employer, while workers feel stressed or powerless, the program might do more harm than good. In many cases, choosing not to collect such data is the better and more respectful option.
Serco Leisure, a prominent leisure firm based in the UK, finds itself at the centre of a regulatory storm as the Information Commissioner's Office (ICO) intensifies its scrutiny. The ICO has raised serious concerns over the alleged illegal processing of biometric data, affecting more than 2,000 employees across 38 leisure facilities operated by the company. At the heart of the matter is the contentious implementation of facial scanning and fingerprint technology, ostensibly deployed to track staff attendance. The move has drawn sharp criticism from the ICO, which contends that the company's actions are not only ethically questionable but also fall short of the principles of fairness and proportionality.
Despite Serco Leisure claiming it sought legal advice before installing the cameras, and asserting that employees did not complain during the five years the technology was in use, the ICO found the firm had failed to provide a clear alternative to collecting biometric data. The company's staff, who also undergo fingerprint scanning, were not offered less intrusive methods, such as ID cards or fobs.
The ICO, led by UK Information Commissioner John Edwards, argued that Serco Leisure's actions created a power imbalance in the workplace, leaving employees feeling compelled to surrender their biometric data. Edwards emphasised that the company neglected to fully assess the risks associated with biometric technology, prioritising business interests over employee privacy.
According to the ICO, biometric data, being unique to an individual, poses greater risks in the event of inaccuracies or security breaches. Unlike passwords, faces and fingerprints cannot be reset, heightening concerns regarding data security.
Serco Leisure, while committing to comply with the enforcement notice, insisted that the facial scanning technology aimed to simplify clocking in and out for workers. The company claimed that it consulted with team members before the technology's implementation and received positive feedback.
Following this case, the ICO is releasing new guidance for organisations considering the use of employees' biometric data. This guidance aims to help such organisations comply with data protection laws. The controversial nature of biometric technology has sparked debate, with privacy advocates asserting that it infringes on individuals' rights, especially as artificial intelligence enhances the capabilities of these systems. On the other hand, law enforcement and some businesses argue that it is a precise and efficient method for ensuring safety and catching criminals.
Serco Leisure's use of facial scanning technology to monitor staff attendance has raised legal concerns, leading to an enforcement notice from the ICO. The incident underscores the need for organisations to carefully consider the privacy implications of biometric data usage and to explore less intrusive alternatives that protect employee privacy while maintaining operational efficiency. The ICO's upcoming guidance will serve as a crucial resource for organisations navigating the complexities of using biometric data in the workplace.
In a recent report by FICO on Fraud, Identity, and Digital Banking, it was revealed that nearly two million Brits may have fallen victim to identity theft last year. The analytics firm found that 4.3% of respondents experienced fraudsters using their identity to open financial accounts. This percentage, when extrapolated to the adult UK population, equates to approximately 1.9 million people. While this marks a decrease from 2022 when 7.7% reported such incidents, there's a concern that the actual numbers could be higher.
According to Sarah Rutherford, senior director of fraud marketing at FICO, the data only represents those who are aware of their stolen identity being used for financial fraud. Many individuals might not immediately discover such fraudulent activities, and perpetrators often exploit stolen identities multiple times, amplifying the overall impact.
The report identifies this type of fraud as the most worrisome financial crime for UK citizens, with 30% expressing concern. Following closely are fears of credit card theft and bank account takeovers by fraudsters, at 24% and 20%, respectively.
Consumer Preferences and Concerns Drive Financial Organisations' Strategies
FICO's research emphasises the significant impact that robust fraud protection measures can have on financial organisations. Approximately 34% of respondents prioritise good fraud protection when selecting a new account provider, and an overwhelming 73% include it in their top three considerations. However, 18% stated they would abandon opening a bank account if identity checks were too challenging or time-consuming, highlighting the importance of achieving a balance between security and user convenience.
Biometric authentication emerged as a favoured choice among respondents, with 87% acknowledging its excellent security features. Fingerprint scanning ranked highest among biometric methods, preferred by 38% of participants, followed by face scans (34%) and iris scans (25%). In contrast, only 17% believed that the traditional combination of username and password provides excellent protection.
Sarah Rutherford expressed optimism about the shift in attitudes towards new verification tools such as iris, face, and fingerprint scans, as individuals increasingly recognise the benefits they offer in enhancing security.
Commercial Impact
The study suggests that financial institutions incorporating strong fraud protection measures may reap significant commercial benefits. With consumer preferences indicating a growing emphasis on security, financial organisations must navigate the challenge of implementing effective identity checks without compromising the ease of service. Striking this balance becomes crucial, especially as 20% of respondents indicated they would abandon the account opening process if identity checks were deemed too cumbersome.
Amidst growing concerns surrounding identity fraud affecting a significant portion of the British population, there is a discernible shift towards the acceptance of advanced biometric authentication methods. Financial organisations are urged to prioritise robust fraud protection measures, not only to enhance consumer appeal but also to reinforce security protocols for sensitive information. This imperative reflects the industry's transformation, highlighting the growing importance of heightened security measures in addressing the increasing challenges of identity theft.
Thankfully, even on the best low-cost Android phones, biometric authentication is becoming mainstream and easily accessible. This has led to the adoption of passkeys for user authentication by a number of well-known social networking platforms and password manager apps. WhatsApp is the newest application to offer passkey support for all of its users after a month of beta testing.
Passkeys replace conventional passwords with a unique cryptographic key pair, so that only the legitimate user can log in. The private key is unlocked for use only after successful biometric authentication, removing the need for two-factor techniques such as one-time passwords delivered by SMS or email. Passkeys also shield users from the risks associated with password reuse and phishing attacks. After adding passkey storage to its password manager, Google noted that the technology also enables faster user authentication.
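To make the mechanism concrete, the browser-side sketch below shows how a passkey is created with the standard WebAuthn API. This is an illustrative example, not WhatsApp's actual implementation: the relying-party name, user details, and locally generated challenge are placeholders, and in a real deployment the server would issue the challenge and store the resulting public key.

```typescript
// Minimal sketch of passkey registration via the WebAuthn API (assumed setup,
// not WhatsApp's real flow). The private key is generated and kept on-device;
// only the public key and credential ID would ever be sent to a server.

async function registerPasskey(): Promise<void> {
  // In practice the challenge comes from the server; generated locally here
  // purely for illustration.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = (await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Messenger" }, // hypothetical relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // opaque user handle
        name: "user@example.com",                        // placeholder account
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // on-device authenticator
        userVerification: "required",        // fingerprint, face, PIN, or pattern
        residentKey: "required",             // discoverable credential, i.e. a passkey
      },
    },
  })) as PublicKeyCredential | null;

  if (!credential) {
    throw new Error("Passkey creation was cancelled or failed");
  }

  // A real app would now send credential.id and the attested public key
  // to its server for later verification.
  console.log("Created passkey credential:", credential.id);
}
```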
WhatsApp’s work on adopting passkey technology first came to light in early August, and beta testing began in late September.
Now, around a month later, WhatsApp has announced on X (formerly Twitter) that passkey support is rolling out in the stable channel. The feature makes the login process significantly more secure by replacing the one-time password (OTP) sent via SMS. The app lets users authenticate with their device's screen lock options, including on-device fingerprint, face unlock, PIN, or swipe pattern, while Google Password Manager automatically stores the cryptographic key.
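The login side of the flow can be sketched the same way, again as an assumption-laden illustration rather than WhatsApp's own code: the device prompts for the screen lock, signs a server-issued challenge with the stored private key, and returns the signed assertion in place of an OTP.

```typescript
// Minimal sketch of a passkey login (WebAuthn assertion). The challenge is
// generated locally only for illustration; a real server issues it and
// verifies the returned signature against the registered public key.

async function loginWithPasskey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice

  // The browser/OS prompts for the device screen lock (fingerprint, face,
  // PIN, or pattern) before the authenticator signs the challenge.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge,
      userVerification: "required",
      // No allowCredentials list: the authenticator offers stored passkeys.
    },
  })) as PublicKeyCredential | null;

  if (!assertion) {
    throw new Error("Passkey login was cancelled or failed");
  }

  // The signed assertion is sent to the server for verification; no OTP,
  // SMS code, or password is exchanged at any point.
  console.log("Received assertion from credential:", assertion.id);
}
```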
The passwordless login also proves time-efficient when users set up WhatsApp on a new phone. Commendably, WhatsApp is also explaining to users how passkeys work so they can keep their accounts secure.
It is also worth noting the difference between passkeys, which are used to log in to WhatsApp, and in-app features such as chat lock, which still requires biometric authentication. Importantly, both passkeys and traditional password-based authentication will remain available on WhatsApp.
However, WhatsApp has not yet clarified whether the feature will be immediately available everywhere. Like other major WhatsApp features, passkey support is expected to roll out gradually in the stable channel. Still, it is good to see WhatsApp reaffirm its commitment to user security and privacy with features like this.