
Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk's AI chatbot, following claims it was misused to create fabricated adult imagery. Reports of the manipulated visuals prompted authorities to act, and Reuters notes this as a world-first restriction on the tool. The episode has amplified unease about AI-enabled harm well beyond Indonesia, with reaction spreading less through policy papers than through real-time consequences visible online.

A growing number of reports have linked Grok to incidents where users created explicit imagery of women - sometimes involving minors - without consent. Not long after these concerns surfaced, Indonesia’s digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms. 

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts fall under grave cyber offenses demanding urgent regulatory attention. Temporary restrictions appeared in Indonesia after Antara News highlighted risks tied to AI-made explicit material.

Protection of women, children, and communities drove the move, which aims to reduce mental and societal damage. Officials pointed out that fake but realistic intimate imagery counts as digital abuse, according to statements by Hafid. Such fabricated visuals, though synthetic, still trigger real consequences for victims; the state insists artificial does not mean harmless, and that impact matters more than origin. Following the concerns over Grok's functionality, the company was sent official notices demanding explanations of its development process and the harms observed.

Because of potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. Whether the service remains accessible locally hinges on adoption of rigorous filtering systems, according to Hafid. Compliance with national regulations and adherence to responsible artificial intelligence practices now shape the outcome. 

Only after these steps are demonstrated will operation be permitted to continue. Last week saw Musk and xAI issue a warning: improper use of the chatbot for unlawful acts might lead to legal action. On X, he stated clearly - individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly. 

A re-shared post from one follower implied fault rests more with people creating fakes than with the system hosting them. The debate spread beyond borders, reaching American lawmakers. A group of three Senate members reached out to both Google and Apple, pushing for the removal of Grok and X applications from digital marketplaces due to breaches involving explicit material. Their correspondence framed the request around existing rules prohibiting sexually charged imagery produced without consent. 

What concerned them most was an automated flood of inappropriate depictions of women and minors - content they labeled damaging and possibly unlawful. When tied to misuse such as nonconsensual deepfakes, AI tools now face sharper government reactions, and Indonesia's move is part of this rising trend. Though once slow to act, officials increasingly treat such technology as a risk requiring strong intervention.

A shift is visible: responses that were once hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from the incidents themselves, but from how widely they spread before being challenged.

Grok AI Faces Global Backlash Over Nonconsensual Image Manipulation on X

 

A dispute over X's built-in AI assistant, Grok, is gaining attention, with questions swirling around consent, online safety measures, and how easily synthetic media tools can be misused. The tension surfaced when Julie Yukari, a 31-year-old musician living in Rio de Janeiro, posted a picture of herself unwinding with her cat during New Year's Eve celebrations. Shortly afterward, users on the network began instructing Grok to modify the photograph, digitally swapping her outfit for skimpy beach attire.

What started as skepticism soon gave way to shock. Yukari had thought the system wouldn’t act on those inputs - yet it did. Images surfaced, altered, showing her with minimal clothing, spreading fast across the app. She called the episode painful, a moment that exposed quiet vulnerabilities. Consent vanished quietly, replaced by algorithms working inside familiar online spaces. 

A Reuters investigation found that Yukari's case was far from isolated. The organization uncovered multiple examples where Grok produced suggestive pictures of real people, some of whom appeared to be underage. X did not reply to inquiries about the report's findings. Earlier, xAI - the team developing Grok - had quickly dismissed similar claims, calling traditional outlets sources of false information.

Across the globe, unease is growing over sexually explicit images created by artificial intelligence. Officials in France have sent complaints about X to legal authorities, calling such content unlawful and deeply offensive to women. A similar move came from India’s technology ministry, which warned X it did not stop indecent material from being made or shared online. Meanwhile, agencies in the United States, like the FCC and FTC, chose silence instead of public statements. 

A sudden rise in demands for Grok to modify pictures into suggestive clothing showed up in Reuters' review. Within just ten minutes, more than 100 such requests appeared, mostly focused on younger women. Often, the system produced overtly sexual visual content without hesitation; at times, only part of the request was carried out. A large share of the images vanished quickly from public view, limiting how much could be measured afterward.

Some time ago, image-editing tools driven by artificial intelligence could already strip clothes off photos, though they mostly stayed on obscure websites or required payment. Now, because Grok is built right into a well-known social network, creating such fake visuals takes almost no work at all. Warnings had been issued earlier to X about launching these kinds of features without tight controls. 

Researchers studying technology's impacts and advocacy groups argue this situation followed directly from those ignored warnings. From a legal standpoint, some specialists say the episode highlights deep flaws in how platforms handle harmful content and manage artificial intelligence. Rather than addressing risks early, observers note, X failed to block offensive prompts during model development and lacked strong safeguards against unauthorized image creation.

In cases such as Yukari’s, consequences run far beyond digital space - emotions like embarrassment linger long after deletion. Although aware the depictions were fake, she still pulled away socially, weighed down by stigma. Though X hasn’t outlined specific fixes, pressure is rising for tighter rules on generative AI - especially around responsibility when companies release these tools widely. What stands out now is how little clarity exists on who answers for the outcomes.

AI Emotional Monitoring in the Workplace Raises New Privacy and Ethical Concerns

 

As artificial intelligence becomes more deeply woven into daily life, tools like ChatGPT have already demonstrated how appealing digital emotional support can be. While public discussions have largely focused on the risks of using AI for therapy—particularly for younger or vulnerable users—a quieter trend is unfolding inside workplaces. Increasingly, companies are deploying generative AI systems not just for productivity but to monitor emotional well-being and provide psychological support to employees. 

This shift accelerated after the pandemic reshaped workplaces and normalized remote communication. Now, industries including healthcare, corporate services and HR are turning to software that can identify stress, assess psychological health and respond to emotional distress. Unlike consumer-facing mental wellness apps, these systems sit inside corporate environments, raising questions about power dynamics, privacy boundaries and accountability. 

Some companies initially introduced AI-based counseling tools that mimic therapeutic conversation. Early research suggests people sometimes feel more validated by AI responses than by human interaction. One study found chatbot replies were perceived as equally or more empathetic than responses from licensed therapists. This is largely attributed to predictably supportive responses, lack of judgment and uninterrupted listening—qualities users say make it easier to discuss sensitive topics. 

Yet the workplace context changes everything. Studies show many employees hesitate to use employer-provided mental health tools due to fear that personal disclosures could resurface in performance reviews or influence job security. The concern is not irrational: some AI-powered platforms now go beyond conversation, analyzing emails, Slack messages and virtual meeting behavior to generate emotional profiles. These systems can detect tone shifts, estimate personal stress levels and map emotional trends across departments. 
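To illustrate how little machinery such tone analysis requires, the sketch below scores a few invented workplace messages with an off-the-shelf sentiment model from the Hugging Face transformers library. The messages and the crude "negative share" metric are illustrative assumptions only; commercial monitoring platforms use their own proprietary models and far richer signals.

```python
# Illustrative only: scoring message tone with an off-the-shelf sentiment model.
# The messages below are invented; real workplace tools use different, proprietary systems.
from transformers import pipeline

messages = [
    "Happy to help with the release, just ping me.",
    "I can't keep up with these deadlines anymore.",
    "Another late-night meeting? Fine.",
]

classifier = pipeline("sentiment-analysis")  # downloads a default English model
results = classifier(messages)

for text, result in zip(messages, results):
    # Each result is a dict like {"label": "NEGATIVE", "score": 0.99}
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")

# A crude "stress signal": the share of messages scored negative.
negative_share = sum(r["label"] == "NEGATIVE" for r in results) / len(results)
print(f"Negative share: {negative_share:.0%}")
```

The point of the sketch is not accuracy but accessibility: once messages are machine-readable, turning them into an "emotional profile" takes only a few lines, which is exactly why governance and consent matter.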

One example involves workplace platforms using facial analytics to categorize emotional expression and assign wellness scores. While advocates claim this data can help organizations spot burnout and intervene early, critics warn it blurs the line between support and surveillance. The same system designed to offer empathy can simultaneously collect insights that may be used to evaluate morale, predict resignations or inform management decisions. 

Research indicates that constant monitoring can heighten stress rather than reduce it. Workers who know they are being analyzed tend to modulate behavior, speak differently or avoid emotional honesty altogether. The risk of misinterpretation is another concern: existing emotion-tracking models have demonstrated bias against marginalized groups, potentially leading to misread emotional cues and unfair conclusions. 

The growing use of AI-mediated emotional support raises broader organizational questions. If employees trust AI more than managers, what does that imply about leadership? And if AI becomes the primary emotional outlet, what happens to the human relationships workplaces rely on? 

Experts argue that AI can play a positive role, but only when paired with transparent data use policies, strict privacy protections and ethical limits. Ultimately, technology may help supplement emotional care—but it cannot replace the trust, nuance and accountability required to sustain healthy workplace relationships.

Think Twice Before Uploading Personal Photos to AI Chatbots

 

Artificial intelligence chatbots are increasingly being used for fun, from generating quirky captions to transforming personal photos into cartoon characters. While the appeal of uploading images to see creative outputs is undeniable, the risks tied to sharing private photos with AI platforms are often overlooked. A recent incident at a family gathering highlighted just how easy it is for these photos to be exposed without much thought. What might seem like harmless fun could actually open the door to serious privacy concerns. 

The central issue is unawareness. Most users do not stop to consider where their photos are going once uploaded to a chatbot, whether those images could be stored for AI training, or if they contain personal details such as house numbers, street signs, or other identifying information. Even more concerning is the lack of consent—especially when it comes to children. Uploading photos of kids to chatbots, without their ability to approve or refuse, creates ethical and security challenges that should not be ignored.  

Photos contain far more than just the visible image. Hidden metadata, including timestamps, location details, and device information, can be embedded within every upload. This information, if mishandled, could become a goldmine for malicious actors. Worse still, once a photo is uploaded, users lose control over its journey. It may be stored on servers, used for moderation, or even retained for training AI models without the user’s explicit knowledge. Just because an image disappears from the chat interface does not mean it is gone from the system.  
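To see how much a single upload can reveal, the short sketch below reads the EXIF block of a local photo with the Pillow library. The filename is a placeholder, and the tags actually present vary by camera and phone.

```python
# Inspect the hidden EXIF metadata embedded in a photo (requires Pillow).
# "holiday.jpg" is a placeholder filename; available tags vary by device.
from PIL import Image, ExifTags

img = Image.open("holiday.jpg")
exif = img.getexif()

for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
    print(f"{name}: {value}")

# GPS coordinates, if present, live in a nested IFD block (tag 0x8825 = GPSInfo).
gps = exif.get_ifd(0x8825)
if gps:
    print("Warning: this photo contains GPS location data.")
```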

One of the most troubling risks is the possibility of misuse, including deepfakes. A simple selfie, once in the wrong hands, can be manipulated to create highly convincing fake content, which could lead to reputational damage or exploitation. 

There are steps individuals can take to minimize exposure. Reviewing a platform’s privacy policy is a strong starting point, as it provides clarity on how data is collected, stored, and used. Some platforms, including OpenAI, allow users to disable chat history to limit training data collection. Additionally, photos can be stripped of metadata using tools like ExifTool or by taking a screenshot before uploading. 
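As a rough illustration of the stripping step mentioned above, this sketch re-saves a copy of an image that keeps only the pixel data and discards EXIF. Filenames are placeholders, and dedicated tools such as ExifTool remove a wider range of metadata than this minimal approach.

```python
# Minimal metadata strip: copy only the pixel data into a fresh image and
# save it without EXIF. Filenames are placeholders.
from PIL import Image

with Image.open("original.jpg") as img:
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))   # pixels only; EXIF/IPTC/XMP are left behind
    clean.save("stripped.jpg", quality=95)
```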

Consent should also remain central to responsible AI use. Children cannot give informed permission, making it inappropriate to share their images. Beyond privacy, AI-altered photos can distort self-image, particularly among younger users, leading to long-term effects on confidence and mental health. 

Safer alternatives include experimenting with stock images or synthetic faces generated by tools like This Person Does Not Exist. These provide the creative fun of AI tools without compromising personal data. 

Ultimately, while AI chatbots can be entertaining and useful, users must remain cautious. They are not friends, and their cheerful tone should not distract from the risks. Practicing restraint, verifying privacy settings, and thinking critically before uploading personal photos is essential for protecting both privacy and security in the digital age.

Digital Safety 101: Essential Cybersecurity Tips for Everyday Internet Users



1. Use a Password Manager

The old advice to create strong, unique passwords for each website still holds true—but is only realistic if you use a password manager. Fortunately, Apple’s built-in Passwords app makes this easy, and there are many third-party options too. Use these tools to generate and save complex passwords every time you sign up for a new service.
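For readers curious what a manager-grade password looks like under the hood, here is a minimal sketch using Python's secrets module. The length and character set are illustrative choices; in practice, the password manager generates and stores this for you.

```python
# Generate a strong random password using a cryptographically secure RNG.
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Letters, digits, and a conservative set of symbols (illustrative choice).
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'q7K!fR2v_mX9-bL4tZ1p'
```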

2. Update Old Passwords

Accounts created years ago may still have weak or repeated passwords. This makes you vulnerable to credential stuffing attacks—where hackers use stolen logins from one site to access others. Prioritize updating your passwords for financial services, Apple, Google, Amazon, and any accounts that have already been compromised. To check this, enter your email on Have I Been Pwned.
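Have I Been Pwned also offers a password-checking API built on k-anonymity: only the first five characters of the password's SHA-1 hash are sent, and the comparison happens locally. The sketch below, which assumes the requests library is installed, shows that flow.

```python
# Check a password against Pwned Passwords using the k-anonymity range API:
# only the first 5 hex characters of the SHA-1 hash ever leave your machine.
import hashlib
import requests

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, count = line.split(":")
        if candidate == suffix:
            return int(count)
    return 0

count = pwned_count("password123")  # example only; never hard-code real passwords
print(f"Seen in breaches {count} times" if count else "Not found in known breaches")
```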

3. Enable Passkeys Where Available

Passkeys are becoming the modern alternative to passwords. Instead of storing a traditional password, your device uses Face ID or Touch ID to verify your identity, and only sends confirmation of that identity to the site—never the actual password. This reduces the risk of your credentials being hacked or stolen.

4. Use Two-Factor Authentication (2FA)

2FA provides an added layer of security by requiring a rolling code each time you log in. Avoid SMS-based 2FA—it's prone to SIM-swap attacks. Instead, opt for an authenticator app like Google Authenticator or use the built-in support in Apple’s Passwords app. Set this up using the QR code provided by the service.
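Those rolling codes are time-based one-time passwords (TOTP, RFC 6238) derived from the shared secret encoded in the QR code you scan. A minimal sketch with the pyotp library, using a freshly generated secret purely for illustration:

```python
# Time-based one-time passwords (TOTP): the rolling 6-digit codes behind most
# authenticator apps. The secret here is generated for illustration only; real
# secrets come from the QR code or setup key the service provides.
import pyotp

secret = pyotp.random_base32()       # e.g. 'JBSWY3DPEHPK3PXP'
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())   # changes every 30 seconds
print("Valid right now?", totp.verify(totp.now()))
```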

5. Monitor Last Login Activity

Some platforms, especially banking apps, show the date and time of your last login. Get into the habit of checking this regularly. Unexpected logins are an immediate red flag and could signal that your account has been compromised.

6. Use a VPN on Public Wi-Fi

Public Wi-Fi networks can be unsafe and vulnerable to “Man-in-the-Middle” (MitM) attacks, in which a rogue device impersonates a Wi-Fi hotspot to intercept your internet traffic. While HTTPS reduces the risk, using a VPN is still the best protection. Choose a trusted provider that maintains a no-logs policy and undergoes third-party audits; NordVPN is one example of such a provider.

7. Don’t Share Personal Info With AI Chatbots

Conversations with AI chatbots may be stored or used as training data. Avoid typing anything sensitive, such as passwords, addresses, or identification numbers—just as you wouldn’t post them publicly online.

8. Consider Data Removal Services

Your personal information may already be listed with data brokers, exposing you to spam and scams. Manually removing this data can be tedious, but services like Incogni can automate the process and reduce your digital footprint efficiently.

9. Verify Any Request for Money

If someone asks for money—even if it looks like a friend, family member, or colleague—double-check their identity using a separate communication method.

If they emailed you, phone them; if they phoned you, email or message them.

Also, if you're asked to send gift cards or wire money, it's almost always a scam. Be especially cautious if you're told a bank account has changed—confirm directly before transferring funds.

Global Data Breach Uncovers 23 Billion Stolen Credentials

 


Because a single set of login credentials can unlock much of an individual's financial, professional, and personal life, the exposure of billions of passwords is more than a routine cybersecurity concern; it signals a global crisis of trust in digital systems and data security.

Cybernews recently reported that roughly 19 billion passwords are currently circulating on underground criminal forums. According to experts, this massive trove of compromised credentials, one of the most extensive ever recorded, is fueling cyberattacks of growing scale and sophistication around the globe.

Unlike the isolated breaches of the past, this latest leak appears to draw on years of accumulated data breaches, reassembled and repurposed in ways that let threat actors launch highly automated, targeted attacks. The leaked data is being used not only to break into individual accounts but also to power large-scale credential stuffing campaigns, in which automated login attempts using the stolen credentials are directed at banks, corporations, and government systems.

Given how quickly the threat landscape is evolving, cybersecurity professionals warn that attacks will become more personal, more frequent, and harder to detect. The sheer number of compromised passwords makes comprehensive digital hygiene essential: multi-factor authentication, regular password updates, and public education about the dangers of reused or weak credentials. In today's hyperconnected world, cybersecurity is not optional.

As infostealer malware continues to spread, it feeds a thriving underground economy of stolen digital identities. These malicious programs silently harvest sensitive information from infected devices, including login credentials, browser-stored data, and session cookies, which are then sold or traded among cybercriminals. With billions of compromised records circulating within these illicit networks, the scale of the ongoing theft is alarming.

One prominent case study came when cybersecurity expert Troy Hunt ingested a massive dataset, referred to as "ALIEN TXTBASE", into the widely trusted breach monitoring service Have I Been Pwned. The dataset consists of 1.5 terabytes of stealer logs containing approximately 23 billion individual data rows. According to the researchers, the breaches affected more than 284 million distinct email accounts worldwide and yielded 493 million unique combinations of websites and email addresses. The trove underscores how widespread and indiscriminate these breaches have become.

Infostealer malware does not target specific individuals; it casts a wide net, infecting systems en masse and stealing personal information without the user's knowledge. The result is an ever-growing pool of compromised digital identities, a significant contributor to the global rise in account takeovers, fraud, phishing attacks, and long-term privacy violations.

Individuals commonly believe they are unlikely targets for cybercriminals simply because they do not feel "important enough." That belief is mistaken. Modern cyberattacks are rarely orchestrated by hackers selecting a specific victim; they are driven by automated tools that scan for and exploit vulnerabilities at scale. Anyone with an online presence, professional or personal, can be at risk, regardless of profession, profile, or perceived importance.

Worse still, recent data suggests that about 94% of the 19 billion leaked passwords had been reused across multiple accounts. Once one account is compromised, cybercriminals can use the same credentials to break into others, greatly increasing the odds of a successful attack. The consequences of a successful password breach can be extremely difficult for an individual to cope with.

Victims may lose access to their email accounts, social media profiles, cloud storage, financial applications, and more. Once inside, attackers may use those accounts to commit identity theft, open fraudulent credit lines, or conduct unauthorised financial transactions. The exposure of sensitive personal and professional information can also lead to public humiliation, blackmail, or reputational damage, particularly if malicious actors misuse compromised accounts to spread misinformation or carry out illicit activities.

As cybercrime grows more sophisticated and more automated, anyone without proper cybersecurity measures is vulnerable, regardless of their digital literacy. These risks are no longer theoretical; they materialise daily. Leaked records also reveal the inner workings of infostealer malware, offering a sobering insight into how precisely and stealthily these threats operate.

Whereas traditional data breaches target large corporate databases, infostealers take a more insidious approach, typically infecting individual devices without the user's knowledge. Once embedded, these tools extract a wide range of sensitive data, including saved passwords, session cookies, autofill entries, and browser history.

The stolen data is then trafficked through cybercriminal circles, fuelling a vicious cycle of account takeovers, financial fraud, and identity theft. The ALIEN TXTBASE dataset, which has drawn attention because of its scope and structure, is a notable example of this trend. Despite a common misconception that it stems from a single incident, the dataset is in fact a compilation of stealer logs drawn from 744 separate files.

It was originally shared through a Telegram channel, an unregulated and open environment where threat actors often spread such material. Each entry in the dataset follows the same format: URL, login, and password, providing a detailed look at the credentials compromised. Cybersecurity researcher Troy Hunt gathered these fragments, compiled them into a single unified dataset for analysis, and incorporated it into Have I Been Pwned, a platform users can consult to check whether their accounts appear in known breaches.

Only two sample files were reviewed initially; once it became clear how immense the leak was, the whole collection was merged and analysed to gain a clearer picture of the damage. By aggregating data so methodically, cybercriminals have shown that they are not merely exploiting isolated incidents; they are assembling vast, cumulative archives of stolen credentials and cultivating them over time. The widespread sharing and organisation of this data accelerates the reach and effectiveness of infostealer campaigns, posing a threat to both personal privacy and organisational security for years to come.

Act Without Delay 


Despite the scale of these password breaches, individuals can still protect themselves and their devices by acting quickly. Procrastination increases vulnerability, because the threats are evolving rapidly.

Strengthen Passwords


Creating strong, unique passwords is essential. Users should avoid common patterns and instead build long passphrases that mix uppercase and lowercase letters, numbers, and symbols. Password managers can generate and store complex passwords securely.

Replace Compromised Credentials


Passwords that are reused across different websites, or that have remained unchanged for an extended period, should be changed immediately, especially for sensitive accounts such as email, banking, and social media. Tools like Have I Been Pwned can help identify which accounts have already been exposed.

Enable Multi-Factor Authentication 


Multi-factor authentication (MFA) reduces the risk of account takeover by requiring a second verification step beyond the password. App-based authenticators such as Google Authenticator are preferable to SMS-based codes, which are vulnerable to SIM-swap attacks.

Use Privacy Tools

Platforms such as Cloaked provide disposable email addresses and masked phone numbers, reducing how much personal information is exposed in the first place.

Stay Vigilant and Informed

It is critical to monitor account activity regularly, revoke access for unrecognised devices, and enable alerts for logins from new devices. Staying informed through trusted cybersecurity sources and educating others will further strengthen collective security. Awareness, timely action, and strong security habits are the best defence against the growing threat of credential theft.

Protecting a person's digital identity is an ongoing responsibility that requires vigilance, proactive measures, and continuous awareness. Credential leaks of this scale and sophistication make it imperative for individuals and organisations alike to harden their cybersecurity posture. Proactive, continuous vigilance must become an integral part of cybersecurity practice, encompassing not just robust password management and multi-factor authentication but also regular security audits and real-time monitoring of digital assets.

Companies should implement comprehensive cybersecurity frameworks that include employee training, threat intelligence sharing, and incident response planning. It is equally important that users adopt privacy-enhancing tools and stay informed about emerging threats, so they can keep ahead of adversaries who continually change their tactics.

Ultimately, protecting digital identities is a continuous commitment that requires both awareness and action; neglecting it exposes business and personal data to relentless cybercriminals. Stakeholders need to cultivate a culture of security mindfulness and leverage advanced protective measures. Doing so reduces their vulnerability in today's increasingly interconnected digital ecosystems and preserves the trust and resilience needed to withstand cybersecurity threats.

New WhatsApp Feature Allows Users to Control Media Auto-Saving

 


As part of WhatsApp's ongoing efforts to keep its users safe, a new feature will strengthen the confidentiality of chat histories. The enhancement belongs to the platform's broader initiative to expand privacy safeguards and give users more control over their messaging experience. The upcoming feature offers advanced settings that let individuals control how their conversations are stored, accessed, and used, providing deeper protection against unauthorized access to their communications.

In refining its privacy architecture, WhatsApp aims to meet users' evolving expectations about data security while strengthening their trust in the platform. The development reflects a user-centric approach intended to keep communication both seamless and secure in an increasingly digital world, and it forms part of the company's continued effort to improve digital safety.

With this initiative, the platform is highlighting its evolving approach to data security and its goal of creating a user-friendly, secure messaging environment. Users will be able to customize how their chat data is handled within the app through a set of refined privacy controls, tailoring their privacy preferences to their communication needs rather than relying solely on default settings.

This approach reduces the risk of unauthorized access and improves transparency about how data is managed on the platform. The improvements align with a broader industry shift toward giving users more autonomy in protecting their digital interactions. By balancing usability with robust privacy standards, WhatsApp continues to position itself as a leader in secure communication.

As messaging becomes an increasingly integral part of daily life, the platform continues to prioritize tools that build user trust and resilience. In the coming months, WhatsApp plans to introduce a new feature that will let users control how recipients handle the content they share.

There was a time when media files sent through the platform were automatically saved to the recipient's device, but now with this upcoming change, users will have the option of preventing others from automatically saving the media that they send—which will make it easier to maintain their privacy, whether it be in one-to-one or group conversations. This new functionality extends similar privacy protections to regular chats and their associated media, as well as disappearing messages. 

Users who activate the feature gain additional protections, such as a restriction on exporting complete chat histories from conversations where the setting is enabled. While the feature does not prevent individuals from forwarding individual messages, it does set stronger limits on sharing and archiving entire conversations.

This privacy setting lets users limit the reach of their content while keeping the messaging experience as flexible as possible. Another notable aspect of the update is how it interacts with artificial intelligence: when the advanced privacy setting is enabled, participants in that conversation will not be able to use Meta AI features within the chat.

This inclusion signals an underlying commitment to data protection and ethical AI integration. The feature is still in development, and WhatsApp is expected to refine and expand its capabilities before the official release; once released, it will remain optional, with users able to enable or disable it based on personal preference.

Alongside its ongoing improvements to calling features, WhatsApp is reported to be preparing this privacy-focused tool to give users more control over how their media is shared. Traditionally, the platform has defaulted to saving pictures and videos sent to users on their devices, and that default behaviour has fuelled ongoing concerns about data privacy, data protection, and device security.

WhatsApp has responded to this problem by allowing senders to decide whether the media they share can be saved by the recipient. Using this feature, WhatsApp introduces a new level of content ownership by giving the sender the ability to decide whether or not their message should be saved. The setting is presented in the chat interface as a toggle option, and functions similarly to the existing Disappearing Messages feature. 

In addition, WhatsApp has developed a system to limit the automatic storage of files shared during typical conversations, reducing the risk of sensitive content being stored on unsecured devices or shared further without consent. In an era when data is increasingly vulnerable, this additional control will be particularly useful for anyone handling confidential, personal, or time-sensitive information.

Currently in beta testing, the update is part of WhatsApp's broader strategy of rolling out user-centred privacy improvements in phases. Users enrolled in the beta program are the first to gain access, and availability is expected to expand to a wider audience within the next few weeks. WhatsApp encourages users to keep the app up to date to get early access to new privacy tools.

To push privacy further, WhatsApp has also developed an advanced chat protection tool that goes beyond controlling media downloads. This upcoming functionality is intended to give users a greater sense of control over data retention and third-party access when managing their conversations.

By focusing on features that restrict how chats can be saved and exported, the platform aims to create an environment that is both safe and respectful for its users. A key part of this update is the restriction on exporting entire chat histories, which takes effect when users enable the feature.

Once the setting is active, recipients will not be able to export conversations that include messages from users who have enabled it, a restriction intended to prevent the wholesale sharing of private information and address concerns over unauthorized data transfers. Forwarding individual messages will still be allowed, but blocking full-conversation exports ensures that long-form chats remain confidential, particularly those containing sensitive or personal material.

The feature also introduces an important limitation on artificial intelligence tools: as long as advanced chat privacy is enabled, neither the sender nor the recipient can interact with Meta AI within that conversation. The restriction reflects a larger shift toward cautious, intentional AI integration, keeping private interactions out of automated processing and analysis.

The feature is still under development and may require further refinement, but once widely available it will be offered as an opt-in setting, letting users decide for themselves whether to enable this extra layer of privacy.

Orion Brings Fully Homomorphic Encryption to Deep Learning for AI Privacy

 

As data privacy becomes an increasing concern, a new artificial intelligence (AI) encryption breakthrough could transform how sensitive information is handled. Researchers Austin Ebel, Karthik Garimella, and Assistant Professor Brandon Reagen have developed Orion, a framework that integrates fully homomorphic encryption (FHE) into deep learning. 

This advancement allows AI systems to analyze encrypted data without decrypting it, ensuring privacy throughout the process. FHE has long been considered a major breakthrough in cryptography because it enables computations on encrypted information while keeping it secure. However, applying this method to deep learning has been challenging due to the heavy computational requirements and technical constraints. Orion addresses these challenges by automating the conversion of deep learning models into FHE-compatible formats. 

The researchers’ study, recently published on arXiv and set to be presented at the 2025 ACM International Conference on Architectural Support for Programming Languages and Operating Systems, highlights Orion’s ability to make privacy-focused AI more practical. One of the biggest concerns in AI today is that machine learning models require direct access to user data, raising serious privacy risks. Orion eliminates this issue by allowing AI to function without exposing sensitive information. The framework is built to work with PyTorch, a widely used machine learning library, making it easier for developers to integrate FHE into existing models. 
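Because this post does not document Orion's actual API, the sketch below only defines an ordinary PyTorch model of the kind the framework is described as converting, and marks the hypothetical encrypt-compile-infer steps as comments; the function names in those comments are placeholders, not Orion's real interface.

```python
# A small, standard PyTorch model of the kind Orion is described as converting
# to run under fully homomorphic encryption. The Orion-specific steps appear
# only as comments because its actual API is not covered in this post.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),                      # non-linearities are the costly part under FHE
            nn.Flatten(),
            nn.Linear(8 * 28 * 28, 10),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier().eval()
plain_input = torch.randn(1, 1, 28, 28)
plain_output = model(plain_input)           # ordinary (unencrypted) inference

# Hypothetical Orion-style workflow (placeholder names, not the real API):
#   fhe_model = orion.compile(model)        # convert layers to FHE-friendly operations
#   ciphertext = client.encrypt(plain_input)
#   enc_output = fhe_model(ciphertext)      # server computes on encrypted data only
#   prediction = client.decrypt(enc_output)
print(plain_output.shape)                   # torch.Size([1, 10])
```

The appeal of the approach described in the paper is that the model definition above stays plain PyTorch; the FHE conversion and the heavy cryptographic bookkeeping are handled by the framework rather than by the developer.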

Orion also introduces optimization techniques that reduce computational burdens, making privacy-preserving AI more efficient and scalable. Orion has demonstrated notable performance improvements, achieving speeds 2.38 times faster than previous FHE deep learning methods. The researchers successfully implemented high-resolution object detection using the YOLO-v1 model, which contains 139 million parameters—a scale previously considered impractical for FHE. This progress suggests Orion could enable encrypted AI applications in sectors like healthcare, finance, and cybersecurity, where protecting user data is essential. 

A key advantage of Orion is its accessibility. Traditional FHE implementations require specialized knowledge, making them difficult to adopt. Orion simplifies the process, allowing more developers to use the technology without extensive training. By open-sourcing the framework, the research team hopes to encourage further innovation and adoption. As AI continues to expand into everyday life, advancements like Orion could help ensure that technological progress does not come at the cost of privacy and security.