Over the last quarter-century, privacy policies—the lengthy, complex legal language you quickly scan through before mindlessly clicking "agree"—have grown both longer and denser. The average length of a privacy policy quadrupled between 1996 and 2021, according to a study published last year, and they also got much harder to understand.
While examining the content of privacy policies, Isabel Wagner, an associate professor at De Montfort University, found several worrying trends: increasing use of location data, growing reliance on implicitly collected data, an absence of meaningful choice, ineffective notification of privacy policy changes, more data sharing with unnamed third parties, and a lack of specific information about security and privacy measures.
While machine learning can be an effective tool for making sense of the world of privacy policies, the mere mention of it within a user agreement can provoke a backlash. Zoom is a case in point.
As reported in a recent article by the technology news site Stack Diary, a clause in Zoom's terms of service stating that the company could use customer data to train artificial intelligence drew harsh criticism from users and privacy advocates. Zoom is a well-known web conferencing service that became ubiquitous when pandemic lockdowns forced many in-person meetings onto laptop screens.
According to the user agreement, Zoom users granted the company "a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable licence" to use "customer content" for a variety of purposes, including "machine learning, artificial intelligence, training, [and] testing." The clause did not require the company to obtain users' explicit permission first.
The internet's criticism of Zoom was swift and fierce, and some organizations, including the news outlet Bellingcat, declared their intention to stop using Zoom for video conferences.
Unsurprisingly, Zoom was forced to respond. Within hours of the story going viral on Monday, Zoom Chief Product Officer Smita Hashim published a blog post to allay fears that when users virtually wish their grandma a happy birthday from thousands of miles away, their likeness and mannerisms will be fed into artificial intelligence models.
“As part of our commitment to transparency and user control, we are providing clarity on our approach to two essential aspects of our services: Zoom’s AI features and customer content sharing for product improvement purposes,” Hashim explained. “Our goal is to enable Zoom account owners and administrators to have control over these features and decisions, and we’re here to shed light on how we do that and how that affects certain customer groups.”
Hashim further stated that Zoom revised its terms of service to clarify the company's data usage guidelines. The clause stating that Zoom has "a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable licence" to use customer data for "machine learning, artificial intelligence, training, [and] testing" was left intact, but a new clause was added below it: “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent.”
Although he can understand why Zoom's terms of service struck a nerve, data engineer Jesse Woo pointed out that the sentiment they express, namely that customers permit the company to copy and use their content, is actually very common in these kinds of user agreements. The issue with Zoom's policy is that it was structured so that each of the rights granted to the company is expressly listed, which can seem like a lot. However, that's also kind of what you might expect to experience while using goods or services in 2023. Sorry, welcome to the future!