An app that tracks employee productivity by logging keystrokes and capturing screenshots has suffered a major privacy breach, with more than 21 million images of employee activity left in an unsecured Amazon S3 bucket.
Experts at Cybernews discovered the breach at WorkComposer, a workplace surveillance tool that monitors employee activity by tracking their digital presence. Although the company secured access after being informed by Cybernews, the data had already been streaming in real time to anyone with an internet connection, exposing the sensitive work information of thousands of employees and companies.
WorkComposer is an application used by more than 200,000 users across various organizations. It is designed to help those organizations monitor employee productivity by logging keystrokes, tracking how much time employees spend in each app, and capturing desktop screenshots every few minutes.
With millions of these screenshots leaked to the open web, vast amounts of sensitive data are exposed: captured emails, confidential business documents, internal chats, usernames, passwords, and API keys. Attackers could misuse this information to target companies, launch identity theft scams, hijack employee accounts, and commit further breaches.
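Breaches like this typically come down to a bucket policy that grants read access to everyone. As a minimal sketch (the policy document and bucket name below are hypothetical, not WorkComposer's actual configuration), a company can scan its own bucket policies for statements that allow the wildcard principal `*`, which is what makes objects world-readable:

```python
import json

def is_publicly_readable(policy_json: str) -> bool:
    """Return True if any Allow statement grants access to everyone ('*')."""
    policy = json.loads(policy_json)
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # Both forms mean "anyone on the internet":
        if principal == "*" or (isinstance(principal, dict)
                                and principal.get("AWS") == "*"):
            return True
    return False

# A hypothetical policy resembling a misconfigured screenshot bucket:
open_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-screenshots-bucket/*"
    }]
})
print(is_publicly_readable(open_policy))  # True
```

In practice, enabling S3's Block Public Access setting at the account level prevents this entire class of misconfiguration regardless of what individual bucket policies say.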
In addition, businesses that have been using WorkComposer could now face liability under the E.U.'s GDPR (General Data Protection Regulation) or the U.S. CCPA (California Consumer Privacy Act), among other legal consequences.
Since workers have no control over what tracking tools capture during their workday, be it private chats, confidential projects, or even medical information, such surveillance already sits in ethically murky territory, and the exposure of these screenshots amounts to a serious privacy violation.
The WorkComposer incident is not the first of its kind. Cybernews has previously reported a leak from WebWork, another workplace tracking tool, which exposed 13 million screenshots.
Switzerland is considering changes to its digital surveillance laws, and privacy experts are worried. The proposed rules could force VPN companies and secure messaging services to track their users and hand over private information on request.
At the center of the issue is a proposed change that would expand government powers over online services like email platforms, messaging apps, VPNs, and even social media sites. These services could soon be required to collect and store personal details about their users and hand over encrypted data when asked.
This move has sparked concern among privacy-focused companies that operate out of Switzerland. If the law is approved, it could prevent them from offering the same level of privacy they are known for.
What Could the New Rules Mean?
The proposed law states that any digital service with more than 5,000 users must collect and verify users' identities and retain that information for six months after they stop using the service. This would affect many platforms, even small ones run by individuals or non-profits.
Another part of the law would give authorities the power to access encrypted messages, but only if the company has the key needed to unlock them. This could break the trust users have in these services, especially those who rely on privacy for safety or security.
Why VPN Providers Are Speaking Out
VPN services are designed to hide user activity and protect data from being tracked. They usually don’t keep any records that could identify a user. But if Swiss law requires them to log personal data, that goes against the very idea of privacy that VPNs are built on.
Swiss companies like Proton VPN, Threema, and NymVPN are all worried. They say the law could damage Switzerland’s reputation as a country that supports privacy and secure digital tools.
NymVPN’s Warning
NymVPN, a newer VPN service backed by privacy activist Chelsea Manning, has raised strong objections. Alexis Roussel, the company's Chief Operating Officer, explained that the new rules would not only hurt businesses but could also put users in danger, especially people in sensitive roles such as journalists and activists.
Roussel added that this law may try to go around earlier court rulings that protected privacy rights, which could hurt Switzerland’s fast-growing privacy tech industry.
What People Can Do
Swiss citizens have time to give feedback on the proposal until May 6, 2025. NymVPN is encouraging people to spread the word, take part in the consultation process, and contact government officials to share their concerns. They’re also warning people in other countries to stay alert in case similar ideas start appearing elsewhere.
Google's Gmail is now offering two new upgrades, but there's a catch: they don't work well together. Gmail's billions of users are effectively being asked to pick a side, better privacy or smarter features, and that decision could affect how their emails are handled in the future.
Let's break it down. One upgrade focuses on stronger protection of your emails, working like advanced encryption: it keeps your messages private, so even Google cannot read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.
But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.
This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.
Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.
Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.
This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.
In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.