
Microsoft Revises AI Feature After Privacy Concerns

 

Microsoft is making changes to a controversial feature announced for its new range of AI-powered PCs after it was flagged as a potential "privacy nightmare." The "Recall" feature for Copilot+ was initially introduced as a way to enhance user experience by capturing and storing screenshots of desktop activity. However, following concerns that hackers could misuse this tool and its saved screenshots, Microsoft has decided to make the feature opt-in. 

"We have heard a clear signal that we can make it easier for people to choose to enable Recall on their Copilot+ PC and improve privacy and security safeguards," said Pavan Davuluri, corporate vice president of Windows and Devices, in a blog post on Friday. The company is banking on artificial intelligence (AI) to drive demand for its devices. Executive vice president Yusuf Medhi, during the event's keynote speech, likened the feature to having photographic memory, saying it used AI "to make it possible to access virtually anything you have ever seen on your PC." 

The feature can search through a user's past activity, including files, photos, emails, and browsing history. While many devices offer similar functionalities, Recall's unique aspect was its ability to take screenshots every few seconds and search these too. Microsoft claimed it "built privacy into Recall’s design" from the beginning, allowing users control over what was captured—such as opting out of capturing certain websites or not capturing private browsing on Microsoft’s browser, Edge. Despite these assurances, the company has now adjusted the feature to address privacy concerns. 

Changes will include making Recall an opt-in feature during the PC setup process, meaning it will be turned off by default. Users will also need to use Windows' "Hello" authentication process to enable the tool, ensuring that only authorized individuals can view or search their timeline of saved activity. Additionally, "proof of presence" will be required to access or search through the saved activity in Recall. These updates are set to be implemented before the launch of Copilot+ PCs on June 18. The adjustments aim to provide users with a clearer choice and enhanced control over their data, addressing the potential privacy risks associated with the feature. 

Microsoft's decision to revise the Recall feature underscores the importance of user feedback and the company's commitment to privacy and security. By making Recall opt-in and incorporating robust authentication measures, Microsoft seeks to balance innovation with the protection of user data, ensuring that AI enhancements do not compromise privacy. As AI continues to evolve, these safeguards are crucial in maintaining user trust and mitigating the risks associated with advanced data collection technologies.

Seattle Public Library Hit by Ransomware Attack, Online Services Disrupted

 

The Seattle Public Library (SPL) has faced a significant cybersecurity incident, with its online services disrupted by a ransomware attack. The attack, detected over the weekend, prompted the library to take its online catalog offline on Tuesday as a precaution. By Wednesday morning, while some services had been restored, many critical functionalities remained unavailable, affecting the numerous patrons who rely on the library's digital resources. 

The ransomware attack has caused extensive service interruptions. The library's main website is back online, and some digital services, such as Hoopla, are accessible. Hoopla allows library cardholders to remotely borrow audiobooks, movies, music, and other media. However, several essential services are still offline, including e-book access, the loaning system for physical items, Wi-Fi connectivity within library branches, printing services, and public computer usage. 

The library has reverted to manual processes to continue serving its patrons. Librarians are using paper forms to check out physical books, CDs, and DVDs, ensuring that patrons can still access these materials despite the digital outage. The specific details of the attack, including how the library's systems were compromised and whether any data was stolen or accessed, have not been disclosed. The library has prioritized investigating the extent of the breach and restoring services, and has reassured patrons that the privacy and security of their information are top priorities. 

In a public statement, the library acknowledged the inconvenience caused by the service disruptions and emphasized its commitment to resolving the issue swiftly. "Privacy and security of patron and employee information are top priorities," the library stated. "We are an organization that prides itself on providing you answers, and we are sorry that the information we can share is limited." The incident underscores the growing threat that ransomware poses to public institutions. Libraries, like many other organizations, handle vast amounts of personal data and provide critical services that can be attractive targets for cybercriminals. 

The ransomware attack on the Seattle Public Library is a stark reminder of the vulnerabilities that public institutions face in the digital age. As the library works to restore full functionality, it will likely implement enhanced security measures to prevent future incidents. This incident may also prompt other libraries and public institutions to re-evaluate their cybersecurity protocols and invest in more robust defenses against such attacks. In the broader context, the attack on SPL highlights the importance of cybersecurity awareness and preparedness. Public institutions must continually adapt to the evolving threat landscape to protect their digital assets and ensure uninterrupted service to their communities.

Microsoft's Windows 11 Recall Feature Sparks Major Privacy Concerns

 

Microsoft's introduction of the AI-driven Windows 11 Recall feature has raised significant privacy concerns, with many fearing it could create new vulnerabilities for data theft.

Unveiled during a Monday AI event, the Recall feature is intended to help users easily access past information through a simple search. Currently, it's available on Copilot+ PCs with Snapdragon X ARM processors, but Microsoft is collaborating with Intel and AMD for broader compatibility. 

Recall works by capturing screenshots of the active window every few seconds, recording user activity for up to three months. These snapshots are analyzed by an on-device Neural Processing Unit (NPU) and AI models to extract and index data, which users can search through using natural language queries. Microsoft assures that this data is encrypted with BitLocker and stored locally, not shared with other users on the device.
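To make the capture-index-search pipeline concrete, here is a minimal sketch in Python with SQLite. Recall's actual schema, storage format, and query engine are not public, so the table name, fields, and plain substring search below are all assumptions for illustration only.

```python
import sqlite3
import time

# Toy stand-in for a Recall-style local index: text extracted from
# periodic screenshots is stored with a timestamp, then searched later.
# Schema and field names are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE snapshots (
        captured_at REAL,      -- when the screenshot was taken
        window_title TEXT,     -- active window at capture time
        extracted_text TEXT    -- OCR/AI-extracted screen text
    )
""")

def record_snapshot(title, text):
    conn.execute(
        "INSERT INTO snapshots VALUES (?, ?, ?)",
        (time.time(), title, text),
    )

def search(query):
    # Crude substring match; Recall reportedly supports natural-language
    # queries, which a plain LIKE clause does not capture.
    rows = conn.execute(
        "SELECT window_title, extracted_text FROM snapshots "
        "WHERE extracted_text LIKE ?",
        (f"%{query}%",),
    )
    return rows.fetchall()

record_snapshot("Edge - Travel", "Flight to Oslo departs June 18")
record_snapshot("Mail", "Invoice #4521 due Friday")

hits = search("Oslo")
```

The sketch also makes the security concern tangible: any process running as the user could open such a database file and query months of on-screen activity, which is why on-disk encryption and access gating matter so much for a feature like this.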

Despite Microsoft's assurances, the Recall feature has sparked immediate concerns about privacy and data security. Critics worry about the extensive data collection, as the feature records everything on the screen, potentially including sensitive information like passwords and private documents. Although Microsoft claims all data remains on the user’s device and is encrypted, the possibility of misuse remains a significant concern.

Microsoft emphasizes user control over the Recall feature, allowing users to decide what apps can be screenshotted and to pause or delete snapshots as needed. The company also stated that the feature would not capture content from Microsoft Edge’s InPrivate windows or other DRM-protected content. However, it remains unclear if similar protections will apply to other browsers' private modes, such as Firefox.

Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer at Microsoft, assured journalists that the Recall index remains private, local, and secure. He reiterated that the data would not be used to train AI models and that users have complete control over editing and deleting captured data. Furthermore, Microsoft confirmed that Recall data would not be stored in the cloud, addressing concerns about remote data access.

Despite these reassurances, cybersecurity experts and users remain skeptical. Past instances of data exploitation by large companies have eroded trust, making users wary of Microsoft’s claims. The UK’s Information Commissioner's Office (ICO) has also sought clarification from Microsoft to ensure user data protection.

Microsoft admits that Recall does not perform content moderation, raising significant security concerns. Anything visible on the screen, including sensitive information, could be recorded and indexed. If a device is compromised, this data could be accessible to threat actors, potentially leading to extortion or further breaches.

Cybersecurity expert Kevin Beaumont likened the feature to a keylogger integrated into Windows, expressing concern about the expanded attack surface. Infostealer malware already targets locally stored databases, and Recall's trove of indexed screen activity could become a prime target for it.

Given Microsoft’s role in handling consumer data and computing security, introducing a feature that could increase risk seems irresponsible to some experts. While Microsoft claims to prioritize security, the introduction of Recall could complicate this commitment.

In a pledge to prioritize security, Microsoft CEO Satya Nadella stated, "If you're faced with the tradeoff between security and another priority, your answer is clear: Do security." This statement underscores the importance of security over new features, emphasizing the need to protect customers' digital estates and build a safer digital world.

While the Recall feature aims to enhance user experience, its potential privacy risks and security implications necessitate careful consideration and robust safeguards to ensure user data protection.

Navigating Data Protection: What Car Shoppers Need to Know as Vehicles Turn Tech

 

Contemporary automobiles are brimming with cutting-edge technological features catering to the preferences of potential car buyers, ranging from proprietary operating systems to navigation aids and remote unlocking capabilities.

However, these technological strides raise concerns about driver privacy, according to Ivan Drury, the insights director at Edmunds, a prominent car website. Drury highlighted that many of these advancements rely on data, whether sourced from the car's built-in computer or through GPS services connected to the vehicle.

A September report by Mozilla, a data privacy advocate, sheds light on the data practices of various car brands. It reveals that most new vehicles collect diverse sets of user data, which they often share and sell. Approximately 84% of the assessed brands share personal data with undisclosed third parties, while 76% admit to selling customer data.

Only two brands, Renault and Dacia, currently offer users the option to delete their personal data, as per Mozilla's findings. Theresa Payton, founder and CEO of Fortalice Solutions, a cybersecurity advisory firm, likened the current scenario to the "Wild, Wild West" of data collection, emphasizing the challenges faced by consumers in balancing budgetary constraints with privacy concerns.

Tom McParland, a contributor to automotive website Jalopnik, pointed out that data collected by cars may not differ significantly from that shared by smartphones. He noted that users often unknowingly relinquish vast amounts of personal data through their mobile devices.

Despite the challenges, experts suggest three steps for consumers to navigate the complexities of data privacy when considering new car purchases. Firstly, they recommend inquiring about data privacy policies at the dealership. Potential buyers should seek clarification on a manufacturer's data collection practices and inquire about options to opt in or out of data collection, aggregation, and monetization.

Furthermore, consumers should explore the possibility of anonymizing their data to prevent personal identification. Drury advised consulting with service managers at the dealership for deeper insights, as they are often more familiar with technical aspects than salespersons.

Attempts to remove a car's internet connectivity device, as demonstrated in a recent episode of The New York Times' podcast "The Daily," may not effectively safeguard privacy. McParland cautioned against such actions, emphasizing the integration of modern car systems, which could compromise safety features and functionality.

While older, used cars offer an alternative without high-tech features, McParland warned of potential risks associated with aging vehicles. Payton highlighted the importance of finding a balance between risk and reward, as disabling the onboard computer could lead to missing out on crucial safety features.

Facebook's Two Decades: Four Transformative Impacts on the World

 

As Facebook celebrates its 20th anniversary, it's a moment to reflect on the profound impact the platform has had on society. From revolutionizing social media to sparking privacy debates and reshaping political landscapes, Facebook, now under the umbrella of Meta, has left an indelible mark on the digital world. Here are four key ways in which Facebook has transformed our lives:

1. Revolutionizing Social Media Landscape:
Before Facebook, platforms like MySpace existed, but Mark Zuckerberg's creation quickly outshone them upon its 2004 launch. Within a year, it amassed a million users, surpassing MySpace within four years, propelled by innovations like photo tagging. Despite fluctuations, Facebook steadily grew, reaching over a billion monthly users by 2012 and 2.11 billion daily users by 2023. Despite waning popularity among youth, Facebook remains the world's foremost social network, reshaping online social interaction.

2. Monetization and Privacy Concerns:
Facebook demonstrated the value of user data, becoming a powerhouse in advertising alongside Google. However, its data handling has been contentious, facing fines for breaches like the Cambridge Analytica scandal. Despite generating over $40 billion in revenue in the last quarter of 2023, Meta, Facebook's parent company, has faced legal scrutiny and fines for mishandling personal data.

3. Politicization of the Internet:
Facebook's targeted advertising made it a pivotal tool in political campaigning worldwide, with significant spending observed, such as in the lead-up to the 2020 US presidential election. It also facilitated grassroots movements like the Arab Spring. However, its role in exacerbating human rights abuses, as seen in Myanmar, has drawn criticism.

4. Meta's Dominance:
Facebook's success enabled Meta, previously Facebook, to acquire and amplify companies like WhatsApp, Instagram, and Oculus. Meta boasts over three billion daily users across its platforms. When unable to acquire rivals, Meta has been accused of replicating their features, facing regulatory challenges and accusations of market dominance. The company is shifting focus to AI and the Metaverse, indicating a departure from its Facebook-centric origins.

Looking ahead, Facebook's enduring popularity poses a challenge amidst rapid industry evolution and Meta's strategic shifts. As Meta ventures into the Metaverse and AI, the future of Facebook's dominance remains uncertain, despite its monumental impact over the past two decades.

Surge in Police Adoption of Private Cameras for Video Evidence Raises Privacy Concerns

 

Major cities like Los Angeles and Washington, D.C., are gearing up to establish Real-Time Crime Centers, positioned as pivotal hubs for the seamless integration of various police technologies and data. Described as a "nerve center," these facilities typically amalgamate public surveillance video with diverse police technologies such as license plate readers, facial recognition, drone cameras, body camera footage, and gunshot detection software. The proliferation of these centers has become widespread, with at least 135 currently operational across the country, according to reports.

Advocates assert that these centers enhance law enforcement's ability to solve crimes and apprehend suspects efficiently. However, critics express concerns about privacy infringement and fear that the heightened surveillance might disproportionately target marginalized communities, including Black individuals.

These crime centers increasingly blur the boundaries between private and public surveillance sources. In certain cities like Atlanta and Albuquerque, the number of private cameras supplying data to law enforcement significantly surpasses public ones. The Electronic Frontier Foundation, a digital rights advocacy non-profit, highlights the changing landscape, pointing out the surge in camera-equipped devices and the shift from tape to cloud storage for footage. This shift allows police to directly access images from storage companies rather than relying on residents or business owners who control the recording devices.

Notably, companies like Ring, owned by Amazon, have faced scrutiny for sharing audio and video from customer doorbells with police without explicit user consent. The increased availability of camera footage, often accessible through police programs or after specific requests, can introduce novel surveillance methods. For instance, in San Francisco, investigators, while reviewing doorbell camera footage for a hit-and-run case, discovered footage from a nearby Waymo self-driving vehicle. This trend may expand as self-driving cars become more prevalent.

Autonomous machines, beyond cars, are also becoming potential tools for surveillance. In Los Angeles, the robot food delivery company Serve Robotics provided LAPD with footage in a criminal case where the robot itself was the target of an attempted "bot-napping." The ambiguity in company policies raises questions about the sharing of footage in cases where the robots incidentally capture relevant information.

While some private cameras may unintentionally capture pertinent information, others actively seek it. A recent instance involves the city of St. Louis issuing a cease-and-desist letter to an entrepreneur planning to operate a private drone security program marketed as a crime deterrent.

Researchers Claim Apple Was Aware of AirDrop User Identification and Tracking Risks Since 2019

Security researchers had reportedly alerted Apple about vulnerabilities in its AirDrop wireless sharing feature back in 2019. According to these researchers, Chinese authorities recently exploited these vulnerabilities to track users of the AirDrop function. This case has raised concerns about global privacy implications.

The Chinese government allegedly used the compromised AirDrop feature to identify users on the Beijing subway accused of sharing "inappropriate information." The exploit has prompted internet freedom advocates to urge Apple to address the issue promptly and transparently. Pro-democracy activists in Hong Kong have previously used AirDrop, leading to Chinese authorities cracking down on the feature.

Beijing-based Wangshendongjian Technology claimed to have compromised AirDrop, collecting basic identifying information such as device names, email addresses, and phone numbers. Despite Chinese officials presenting this as an effective law enforcement technique, there are calls for Apple to take swift action.

US lawmakers, including Florida Sen. Marco Rubio, have expressed concern about the security of Apple's AirDrop function, calling on the tech giant to act promptly. However, Apple has not responded to requests for comments on the matter.

Researchers from Germany's Technical University of Darmstadt, who identified the flaws in 2019, stated that Apple received their report but did not act on the findings. The researchers proposed a fix in 2021, which Apple has allegedly not implemented.

The Chinese claims have alarmed US lawmakers, who are pressing Apple to address security issues promptly. Critics argue that Apple's inaction can be exploited by authoritarian regimes, highlighting the broader implications of tech companies' relationships with such governments.

The Chinese tech firm's exploitation of AirDrop apparently utilized techniques identified by the German researchers in 2019. Experts point out that Apple's failure to add an extra layer of security, known as "salting," allowed the unauthorized access of device-identifying information.

Security experts emphasize that while AirDrop's device-to-device communication is generally secure, users may be vulnerable if they connect with a stranger or accept unsolicited connection requests. The lack of salting in the encryption process makes it easier for unauthorized parties to decipher the exchanged data.
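A simplified illustration of why the missing salt matters. The exact identifiers and encoding AirDrop exchanges are not reproduced here, so the hash construction below is an assumption for demonstration, but the underlying point holds: phone numbers occupy a tiny keyspace, so an unsalted hash can be reversed by simple enumeration or precomputed rainbow tables.

```python
import hashlib

def hash_number(number: str, salt: str = "") -> str:
    # Stand-in for an AirDrop-style hashed contact identifier.
    # The real wire format differs; this is illustrative only.
    return hashlib.sha256((salt + number).encode()).hexdigest()

# An eavesdropper captures this unsalted hash from the exchange.
captured = hash_number("15550003333")

# Phone numbers are a small, structured keyspace, so the attacker
# simply enumerates candidates until one matches. (Only a 4-digit
# suffix is searched here; a real attack scales the same way with
# rainbow tables or GPUs over the full number space.)
recovered = None
for suffix in range(10000):
    candidate = f"1555000{suffix:04d}"
    if hash_number(candidate) == captured:
        recovered = candidate
        break

# With a random per-exchange salt, precomputed tables are useless and
# every captured hash forces a fresh, full search.
salted = hash_number("15550003333", salt="9f3a1c")
```

This is the gap the Darmstadt researchers' proposed fix was meant to close: salting (or replacing the hash scheme entirely, as their PrivateDrop proposal did) makes bulk offline recovery of identifiers impractical.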

Following the Chinese claim, Senator Ron Wyden criticized Apple for a "blatant failure" to protect users, emphasizing the four-year delay in addressing the security hole in AirDrop. The tech firm behind the AirDrop exploit has a history of collaboration with Chinese law enforcement and security authorities.

The intentional disclosure of the exploit by Chinese officials may serve various motives, including discouraging dissidents from using AirDrop. Experts suggest that Apple may now face challenges in fixing the issue due to potential retaliation from Chinese authorities, given the company's significant presence in the Chinese market. The hack revelation could also provide China with leverage to compel Apple's cooperation with security or intelligence demands.

Secure Messaging Apps: A Safer Alternative to SMS for Enhanced Privacy and Cybersecurity

 

The Short Messaging Service (SMS) has been a fundamental part of mobile communication since the 1990s when it was introduced on cellular networks globally. 

Despite the rise of Internet Protocol-based messaging services with the advent of smartphones, SMS continues to see widespread use. However, this persistence raises concerns about its safety and privacy implications.

Reasons Why SMS Is Not Secure

1. Lack of End-to-End Encryption

SMS lacks end-to-end encryption, with messages typically transmitted in plain text. This leaves them vulnerable to interception by anyone with the necessary expertise. Even if a mobile carrier employs encryption, it's often a weak and outdated algorithm applied only during transit.
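To make the contrast concrete, here is a toy sketch in Python using a one-time pad. This is not production cryptography, and real messengers use the Signal protocol rather than shared pads, but it shows the structural difference: a carrier relaying SMS handles the readable message itself, while a relay in an end-to-end encrypted system handles only ciphertext.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # XOR one-time pad: secure only if the key is random, as long as
    # the message, and never reused. Used here purely to illustrate
    # the end-to-end principle, not as a real protocol.
    return bytes(m ^ k for m, k in zip(message, key))

plaintext = b"meet at 6pm, gate code 4417"

# SMS: the carrier relays (and may store) exactly these bytes.
sms_payload = plaintext

# End-to-end encrypted app: sender and recipient hold the key; the
# relay sees only ciphertext. (Real apps derive keys per-conversation
# with X3DH and the Double Ratchet, not a pre-shared pad.)
key = secrets.token_bytes(len(plaintext))
e2e_payload = otp_encrypt(plaintext, key)

decrypted = otp_encrypt(e2e_payload, key)  # XOR is its own inverse
```

The asymmetry is the whole argument of this section: with SMS, confidentiality depends entirely on the carrier; with end-to-end encryption, it depends only on the endpoints.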

2. Dependence on Outdated Technology

SMS relies on Signaling System No. 7 (SS7), a set of signaling protocols developed in the 1970s. This aging technology is highly insecure and susceptible to various cyberattacks, and instances of hackers exploiting SS7 vulnerabilities for malicious purposes have been recorded.

3. Government Access to SMS

SS7 security holes have not been adequately addressed, potentially due to government interest in monitoring citizens. This raises concerns about governments having the ability to read SMS messages. In the U.S., law enforcement can access messages older than 180 days without a warrant, despite efforts to change this.

4. Carrier Storage of Messages

Carriers retain SMS messages for a defined period, and metadata is stored even longer. While laws and policies aim to prevent unauthorized access, breaches can still occur, potentially compromising user privacy.

5. Irreversible Nature of SMS Messages

Once sent, SMS messages cannot be retracted. They persist on the recipient's device indefinitely, unless manually deleted. This lack of control raises concerns about the potential exposure of sensitive information in cases of phone compromise or hacking.

Several secure messaging apps provide safer alternatives to SMS:

1. Signal
 
Signal is a leading secure messaging app known for its robust end-to-end encryption, ensuring only intended recipients can access messages. Developed by the non-profit Signal Foundation, it prioritizes user privacy and does not collect personal data.

2. Telegram

Telegram offers a solid alternative to SMS. While messages are not end-to-end encrypted by default, users can enable Secret Chats for enhanced security. This feature prevents forwarding and limits access to messages, photos, videos, and documents.

3. WhatsApp

Despite its affiliation with Meta, WhatsApp is a popular alternative with billions of active users. It employs end-to-end encryption for message security, surpassing the safety provided by SMS. It's available on major platforms and is widely used among contacts.

In conclusion, SMS is not a recommended option for individuals concerned about personal cybersecurity and privacy. While it offers convenience, its security shortcomings are significant. 

Secure messaging apps with end-to-end encryption are superior alternatives, providing a higher level of protection for sensitive communications. If using SMS is unavoidable, caution and additional security measures are advised to safeguard information.

The Power and Pitfalls of AI-Driven Retail Security Systems


Theft is a major concern for retailers, and the pandemic has only made it worse. With U.S. merchants bracing for an estimated $100 billion in losses this year, innovative solutions are taking center stage. One such solution is the integration of artificial intelligence (AI) into security systems.


Traditional surveillance methods have fallen short in detecting and preventing retail theft; AI-driven security systems improve on them with real-time threat detection and response. At a recent Ace Hardware convention, Laura Freeman of Watcher Total Protection demonstrated AI-integrated security systems as a potential game-changer for retailers. 

These systems are adept at recognizing suspicious activities such as shoplifting, triggering instant alerts, and enabling swift responses. The use of smart tags on high-value items streamlines the review process, allowing merchants to focus on critical footage, and reducing the time spent on investigations.

Retail AI solutions in the battle against theft

In response to a surge in internal theft by retail employees, AI integration has expanded behind the counter as well. These systems monitor cashier interactions in real time and analyze streams of cash-register data to flag anomalies that suggest potential theft. Some retailers have gone further, issuing body cameras to clerks in high-crime locations and deploying facial recognition technology.

Privacy concerns and regulatory measures

While AI-driven security systems offer real-time threat detection and response, they raise ethical concerns about privacy and data misuse. Retailers are grappling with these implications, prompting the need for regulatory frameworks. 

The use of facial recognition technology has been particularly controversial due to its potential for misuse and abuse. In 2019, San Francisco became the first city in the United States to ban its use by law enforcement agencies, and the European Union has proposed restrictions on facial recognition in public spaces.

What's next?

AI-driven security systems have great potential to combat retail theft, but their benefits must be weighed against the ethical concerns they raise around privacy and data misuse. As these systems spread, regulatory frameworks will likely play a growing role in striking that balance.