
Meet Chameleon: An AI-Powered Privacy Solution for Face Recognition

An artificial intelligence (AI) system developed by a team of researchers can safeguard users from unauthorized facial scanning by malicious actors. The AI model, dubbed Chameleon, uses a novel masking technique that conceals faces in images from recognition algorithms while preserving the visual quality of the protected photo.

The researchers also state that the model is resource-optimized, meaning it can run even on machines with limited computing power. The Chameleon model has not been made public yet, but the team says it intends to release it soon.

Researchers at Georgia Tech described the AI model in a paper published on the preprint server arXiv. The tool adds an imperceptible mask to faces in an image, making them unrecognizable to facial recognition algorithms. This lets users shield their identities from malicious actors and AI data-scraping bots attempting to scan their faces.

“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” stated Ling Liu, professor of data and intelligence-powered computing at Georgia Tech's School of Computer Science and the lead author of the research paper.

Chameleon employs a masking approach the authors call the Personalized Privacy Protection (P-3) Mask. Once the mask is applied, facial recognition software can no longer identify the person in the photo, since the scans perceive the face "as being someone else."

While face-masking technologies have been available previously, the Chameleon AI model innovates in two key areas:

  1. Resource Optimization:
    Instead of creating a separate mask for every photo, the tool builds one mask per user from a few user-submitted facial images. This approach significantly reduces the computing power needed to generate the protective mask (a rough sketch of applying such a per-user mask follows this list).
  2. Image Quality Preservation:
    Preserving the image quality of protected photos proved challenging. To address this, the researchers use Chameleon's Perceptibility Optimization technique, which renders the mask automatically, without manual input or parameter tuning, so the visual quality of the image remains intact.
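
The paper's actual optimization procedure is not reproduced here, but the core idea of applying one precomputed, per-user mask to any photo can be illustrated with a minimal sketch. In the Python sketch below, the mask file name (`p3_mask.npy`), the blending factor `alpha`, and the resizing logic are illustrative assumptions rather than details from the published method; the mask itself would come from an optimization such as Chameleon's.

```python
# Minimal sketch: apply one precomputed, per-user perturbation mask to a
# photo. The mask would be produced by an optimization like Chameleon's;
# here it is simply loaded from disk. File name, alpha, and resizing
# behaviour are illustrative assumptions.
import numpy as np
from PIL import Image

def protect_photo(photo_path: str, mask_path: str = "p3_mask.npy",
                  alpha: float = 0.03) -> Image.Image:
    photo = np.asarray(Image.open(photo_path).convert("RGB"), dtype=np.float32) / 255.0
    mask = np.load(mask_path)  # expected shape (H, W, 3), values in [-1, 1]

    # Resize the mask to the photo if the shapes differ (kept simple for the sketch).
    if mask.shape[:2] != photo.shape[:2]:
        mask_img = Image.fromarray(((mask + 1.0) * 127.5).astype(np.uint8))
        mask_img = mask_img.resize((photo.shape[1], photo.shape[0]))
        mask = np.asarray(mask_img, dtype=np.float32) / 127.5 - 1.0

    # Add a small, bounded perturbation so the photo still looks normal to humans.
    protected = np.clip(photo + alpha * mask, 0.0, 1.0)
    return Image.fromarray((protected * 255).astype(np.uint8))

if __name__ == "__main__":
    protect_photo("selfie.jpg").save("selfie_protected.png")
```

Because the same mask is reused for every photo of the same user, the expensive optimization runs only once per person, which is the resource saving the researchers describe.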

The researchers announced their plans to make Chameleon's code publicly available on GitHub soon, calling it a significant breakthrough in privacy protection. Once released, developers will be able to integrate the open-source AI model into various applications.

Enhancing Home Security with Advanced Technology

 

With global tensions on the rise, ensuring your home security system is up to par is a wise decision. Advances in science and technology have provided a variety of effective options, with even more innovations on the horizon.

Smart Speakers

Smart speakers like Amazon Echo, Google Nest, and Apple HomePod use natural language processing (NLP) to understand spoken commands. Many also use machine learning-based voice recognition to distinguish household members from unfamiliar voices, which reduces the likelihood of system tampering and can flag potential intruders.

Smart Cameras
Smart cameras offer an even higher level of security. These devices use facial recognition to control access to your home and to flag suspicious activity on your property; in response to a perceived threat, they can automatically lock doors and alert the authorities. These capabilities are driven by ongoing research in neural networks and artificial intelligence, which continues to evolve.
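
As a rough illustration of that pipeline, the sketch below uses the open-source face_recognition and OpenCV packages to compare faces in each camera frame against encodings of known residents and to call stand-in handlers when an unknown face appears. The resident image files, the 0.5 tolerance, and the lock_doors/alert_authorities hooks are assumptions for the example, not any particular vendor's implementation.

```python
# Sketch of a smart-camera loop: match faces in each frame against known
# residents and react to unknown faces. lock_doors() and
# alert_authorities() are placeholder hooks for real integrations.
import cv2
import face_recognition

KNOWN_ENCODINGS = [
    face_recognition.face_encodings(face_recognition.load_image_file("resident1.jpg"))[0],
    face_recognition.face_encodings(face_recognition.load_image_file("resident2.jpg"))[0],
]

def lock_doors():
    print("Doors locked")          # placeholder: call your smart-lock integration here

def alert_authorities():
    print("Alert sent")            # placeholder: notify a monitoring service here

def watch(camera_index: int = 0):
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        for encoding in face_recognition.face_encodings(rgb):
            matches = face_recognition.compare_faces(KNOWN_ENCODINGS, encoding, tolerance=0.5)
            if not any(matches):   # face does not match any known resident
                lock_doors()
                alert_authorities()
    cap.release()

if __name__ == "__main__":
    watch()
```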

Smart Locks
Smart locks, such as those made by Schlage, use encryption to prevent unauthorized entry while adding convenience for homeowners. These locks can be operated from a smartphone and support multiple access codes for family members. Cryptography keeps the digital keys and the communication between lock and smartphone secure, and the field continues to advance rapidly.
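
To make the cryptography concrete, here is a minimal challenge-response sketch using Python's standard hmac and secrets modules: the lock issues a random nonce and only opens if the phone's response was computed with the shared secret. Real products rely on vetted protocols and hardware-backed key storage; the key handling below is a simplifying assumption for illustration only.

```python
# Conceptual sketch of a smartphone-to-lock challenge-response using an
# HMAC over a random nonce. Key distribution, replay protection, and
# transport security in real products are far more involved.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned to both phone and lock during pairing

def lock_issue_challenge() -> bytes:
    return secrets.token_bytes(16)     # fresh random nonce for each unlock attempt

def phone_respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def lock_verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time comparison

if __name__ == "__main__":
    challenge = lock_issue_challenge()
    assert lock_verify(challenge, phone_respond(challenge))   # genuine phone: door unlocks
    assert not lock_verify(challenge, b"\x00" * 32)           # forged response is rejected
```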

Future Trends in Smart Home Security Technology

Biometric Security
Biometric technologies, including facial recognition and fingerprint identification, are expected to gain popularity as their accuracy improves. These methods provide a higher level of security compared to traditional keys or passcodes.

Blockchain for Security
Blockchain technology is gaining traction for its potential to enhance the security of smart devices. By decentralizing control and creating immutable records of all interactions, blockchain can prevent unauthorized access and tampering.
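
The "immutable record" idea comes down to chaining each event to the hash of the previous one, so that altering any past record breaks every later hash. The toy log below illustrates only that tamper-evidence property; it is not a full blockchain, which would add distribution and consensus on top.

```python
# Toy hash-chained event log: each entry commits to the previous entry's
# hash, so editing any past record invalidates all later hashes.
import hashlib
import json
import time

def _hash(body: dict) -> str:
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(chain: list, event: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    chain.append({**body, "hash": _hash(body)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {"ts": entry["ts"], "event": entry["event"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["hash"] != _hash(body):
            return False
        prev = entry["hash"]
    return True

log: list = []
append(log, "front door unlocked by user A")
append(log, "camera detected motion in driveway")
assert verify(log)
log[0]["event"] = "tampered"   # any edit to history breaks verification
assert not verify(log)
```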

Edge Computing
Edge computing processes data locally, at the source, which improves responsiveness and keeps raw footage off remote servers. Because less sensitive data travels over the network, there is less for attackers to intercept, and the reduced data transfer can also lower bandwidth and energy use.

By integrating these advanced technologies, you can significantly enhance the security and convenience of your home, ensuring a safer environment amid uncertain times.

Ban the Scan - Is Facial Recognition a Risk to Civil Liberties?

 

There are numerous voices around the world opposing the use of facial recognition technology. Many people believe facial recognition poses a severe threat to individual privacy, free speech, racial equality, and data security. Those who oppose it have solid grounds for doing so: they have strong reservations about employing the technology in any form, citing its high false positive rate and its implications for civil and personal liberties, particularly individual privacy.

Critics argue that facial recognition is biased against people of color, women, and children. Surveillance cameras are more common in neighborhoods where immigrants live, which adds fuel to the fire; the usual justification is the higher crime rate in those areas. The technology has not matured sufficiently, and deploying it in such an environment worsens an already complex situation. Misidentifications compound existing flaws in the justice system, contributing to harsher sentences and higher bail for those affected.

Forced deployment

Despite its flaws, facial recognition technology is used by police and other law enforcement agencies across the world. Surveillance is its most widespread application, and it is also commonly used for passenger screening at airports and in housing and employment decisions. Starting in 2019, San Francisco, Boston, and a few other localities banned or restricted its use.

According to an article on the Harvard blog by Alex Najibi, “police use face recognition to compare suspects’ photos to mugshots and driver’s license images; it is estimated that almost half of American adults – over 117 million people, as of 2016 – have photos within a facial recognition network used by law enforcement. This participation occurs without consent, or even awareness, and is bolstered by a lack of legislative oversight.” 

Private companies are also attempting to capitalise on biometric scanning and are collecting user data for a variety of purposes. Blaming Google and Meta for collecting excessive amounts of user data is nothing new. The most recent clamour came when the Worldcoin initiative, co-founded by OpenAI CEO Sam Altman, made iris scanning a requirement for coin ownership. These private-sector initiatives are troubling.

Compared to other biometric systems such as fingerprints, iris scanning, and voice recognition, facial recognition has the highest error rate and is the most likely to cause privacy problems and bias against marginalised people and children.

The Electronic Frontier Foundation (EFF) and the Surveillance Technology Oversight Project (S.T.O.P.) oppose the use of facial recognition in any form. S.T.O.P. is based in New York, focuses on civil rights, and conducts research and advocacy on abuses of surveillance technology.

Regarding the Ban the Scan movement, S.T.O.P. says, "When we say scan, we mean the face scan feature of facial recognition technology. Surveillance, particularly facial recognition, is a threat to free speech, freedom of association, and other civil liberties. Ban the Scan is a campaign and coalition built around passing two packages of bills that would ban facial recognition in a variety of contexts in New York City and New York State."

Is Facial Biometrics the Future of Digital Security?

 
Within the dynamic sphere of digital technology, businesses are continually seeking innovative solutions to streamline operations and step up their security measures. One such innovation that has garnered widespread attention is facial biometrics, a cutting-edge technology encompassing face recognition and liveness detection. This technology, now available through platforms like Auth0 marketplace, is revolutionising digital processes and significantly enhancing security protocols.

What's Facial Biometrics?

Facial biometrics operates by analysing unique facial features to verify an individual's identity. Through face recognition, it compares facial characteristics from a provided image with stored templates for authentication purposes. Similarly, face liveness detection distinguishes live human faces from static images or videos, ensuring the authenticity of user interactions. This highlights the technology's versatility, applicable across various domains ranging from smartphone security to border control measures.
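
Under the hood, "comparing facial characteristics with stored templates" typically means comparing fixed-length embedding vectors against a distance threshold. The sketch below assumes the embeddings have already been produced by some face model and shows only the comparison step; the 128-dimensional vectors, the cosine metric, and the 0.6 threshold are illustrative assumptions rather than any vendor's actual parameters.

```python
# Sketch of the template-matching step in face verification: embeddings
# (fixed-length vectors from a face model) are compared by cosine
# distance against an enrolled template.
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Accept the probe if it is close enough to the enrolled template."""
    return cosine_distance(probe, enrolled) < threshold

# Example with stand-in 128-dimensional embeddings.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.05, size=128)   # small intra-person variation
impostor = rng.normal(size=128)

print(verify(same_person, enrolled))   # True: small distance
print(verify(impostor, enrolled))      # False: large distance
```

Liveness detection adds a separate check before this comparison is trusted, for example analysing texture, depth, or challenge responses such as blinking, so that a printed photo or replayed video cannot pass as a live user.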

Streamlining Digital Processes

One of the key benefits of facial biometrics is its ability to streamline digital processes, starting with digital onboarding procedures. For instance, banks can expedite the verification process for new customers by comparing a selfie with their provided identification documents, ensuring compliance with regulatory requirements such as Know Your Customer (KYC) norms. Moreover, facial biometrics eliminates the need for complex passwords, offering users a secure and user-friendly authentication method. This streamlined approach not only strengthens security but also improves the overall user experience.

A Step Up in Security Measures

Beyond simplifying processes, facial biometrics adds an additional layer of security to business operations. By verifying user identities at critical junctures, such as transaction confirmations, businesses can thwart unauthorised access attempts by fraudsters. This proactive stance against potential threats not only safeguards sensitive information but also mitigates financial risks associated with fraudulent activities.

Embracing the Future

As facial biometrics continues to gain momentum, businesses are presented with an array of opportunities to bolster security measures and upgrade user experiences. Organisations can not only mitigate risks but also explore new possibilities for growth in the digital age. With a focus on simplicity, security, and user-centric design, facial biometrics promises to redefine the future of digital authentication and identity verification.

All in all, facial biometrics represents an impactful milestone in the realm of digital security and user convenience. By embracing this technology, businesses can strike a balance between efficiency and security, staying ahead of the evolving threats posed by AI bots and malicious actors. It is imperative, however, to implement facial biometrics in a way that prioritises user privacy and data protection. As businesses navigate their digital transformation journeys, platforms like the Auth0 marketplace offer solutions tailored to diverse needs, enabling a seamless integration of facial biometrics into existing frameworks.


Canadian University Vending Machine Malfunction Discloses Use of Facial Recognition

 

A faulty vending machine at a Canadian university has unintentionally revealed that several machines on campus have been covertly using facial recognition technology.

Earlier this month, a snack dispenser at the University of Waterloo displayed the error message "Invenda.Vending.FacialRecognition.App.exe" on the screen. 

There was no prior notice that the machine was using facial recognition or that a camera was tracking students' whereabouts and purchases. Users' consent was not requested before their faces were scanned or processed.

"We wouldn’t have known if it weren’t for the application error. There’s no warning here,” stated River Stanley, who reported on the discovery for the university’s newspaper.

Invenda, the company that makes the machines, advertises its "demographic detection software," which it claims can assess customers' gender and age. It says the technology complies with the GDPR, the European Union's privacy regulation, although it is uncertain whether it fulfils the Canadian equivalents.

In April of last year, the national retailer Canadian Tire was found to have violated British Columbia privacy rules by using facial recognition technology without customer consent. The province's privacy commissioner stated that even if the retailer had obtained consent, the firm had failed to show an appropriate justification for collecting facial information.

In a statement, the University of Waterloo vowed to remove the Invenda machines "as soon as possible" and said it had "asked that the software be disabled" in the meantime.

Meanwhile, students at the Ontario university have responded by using gum and paper to cover the hole where they believe the camera is positioned.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

 

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
  1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
  2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups. Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
  3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
Biometric businesses must address these concerns and foster trust in their goods and services as facial recognition technology spreads. Vendors can help ease concerns about the technology by adopting the principles of privacy protection, transparency, and accountability outlined above. Adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that balance innovation with the defense of individual rights. Ultimately, the development of facial recognition technology will be strongly shaped by the moral and ethical standards upheld by the biometrics sector.

Clearview: Face Recognition Software Used by US Police


Clearview AI, a facial recognition company, has reportedly conducted nearly a million searches for US police. Hoan Ton-That, Clearview's CEO, told the BBC that the firm has now scraped as many as 30 billion images from platforms including Facebook, taken without users' consent.

The company has repeatedly been fined millions of dollars in Europe and Australia for privacy violations. Critics argue that police use of Clearview puts everyone into a "perpetual police line-up."

"Whenever they have a photo of a suspect, they will compare it to your face[…]It's far too invasive," says Matthew Guariglia from the Electronic Frontier Foundation. 

Police forces have not confirmed the figure of nearly a million searches. However, in a rare admission to the BBC, Miami Police acknowledged using the software for all types of crime.

How Clearview Works

Clearview's system lets a law enforcement customer upload a photo of a face and search for matches in a database of billions of images the company has collected. It then provides links to the web pages where the corresponding images appear. Clearview is regarded as one of the world's most powerful and effective facial recognition companies.
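
Conceptually, the matching step of such a system is a nearest-neighbour search over face embeddings, each stored alongside the URL where the image was found. The brute-force NumPy search and the stand-in gallery below are purely illustrative; Clearview's actual models, index, and thresholds are not public.

```python
# Sketch of the search step: given a probe face embedding, find the
# closest embeddings in a gallery and return the URLs they came from.
# Brute-force search over random stand-in data, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
gallery_embeddings = rng.normal(size=(100_000, 128)).astype(np.float32)
gallery_urls = [f"https://example.com/photo/{i}" for i in range(100_000)]

def search(probe: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
    # L2-normalise so the dot product equals cosine similarity.
    gallery = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    probe = probe / np.linalg.norm(probe)
    similarities = gallery @ probe
    best = np.argsort(similarities)[::-1][:top_k]
    return [(gallery_urls[i], float(similarities[i])) for i in best]

# A probe that is a slightly perturbed copy of gallery item 42 should rank it first.
probe = gallery_embeddings[42] + rng.normal(scale=0.01, size=128).astype(np.float32)
for url, score in search(probe):
    print(url, round(score, 3))
```

At real-world scale, such systems use approximate nearest-neighbour indexes rather than brute force, but the retrieval idea is the same.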

The firm is now banned from selling its services to most US companies after the American Civil Liberties Union (ACLU) sued Clearview AI for violating privacy law. There is an exemption for police, however, and Mr. Ton-That says his software is used by hundreds of police forces across the US.

Yet US police do not routinely disclose whether they use the software, and several US cities, including Portland, San Francisco, and Seattle, have banned it outright.

Police frequently portray the use of facial recognition technology to the public as being limited to serious or violent offenses. 

Moreover, when interviewed about Clearview's effectiveness, Miami Police admitted to having used the software for all types of crime, from murder to shoplifting. Assistant Chief of Police Armando Aguilar said his team uses the software around 450 times a year and that it has helped solve murder cases.

Yet, critics claim that there are hardly any rules governing the use of facial recognition by police.

Facial Recognition Technology is Transforming Texas

The Facial Recognition Act, a measure that places stringent restrictions on law enforcement's use of facial recognition surveillance, was introduced on September 28. 

The proposed legislation would establish regulations that address both the risks associated with the technology's failures, such as algorithmic bias and erroneous arrests, and those associated with its successes, such as the possibility of widespread surveillance and abuse.

Errors in facial recognition can have drastic consequences. In one case, a woman's application for unemployment benefits in Texas was rejected, leaving her unable to pay her rent; in another, a Black man was wrongfully arrested by police in New Jersey, which could have limited his options for housing and work.

Existing state laws have not shielded citizens from needless facial identification. In Texas, businesses may not collect your biometric data without your permission, but if you refuse, you often have no real alternative: a tenant has little practical choice but to grant an apartment manager's request for consent.

Researchers have already expended too much time and money to turn back now. In most of the U.S., there are even fewer limitations on the use of biometric data. Without regulation, businesses sell biometric information to advertisers and governments, and it can then be used by state, federal, and private entities to silence our speech, track our preferences, and prevent us from exercising our fundamental rights.

To gather evidence against renters, at least one city has even installed facial recognition-capable cameras outside a public housing complex. Facial recognition is becoming increasingly widespread despite its flaws and potentially harmful effects. Equifax, for instance, has introduced a facial recognition product aimed at leasing offices.

Socure and other companies market services that combine facial recognition with software designed to predict whether a customer will pay for their purchases. ODIN markets a facial recognition product that it says can identify people experiencing homelessness and supply police with personal information about them.

Such information includes any existing arrest warrants, which often serve only to criminalize poverty and make housing harder to obtain, as well as claims about prior behavior, which could put armed officers on edge and make effective outreach more difficult. Notwithstanding ODIN's assertion that its system can remotely check people into shelters using biometric identification and location tracking, there is no reason such capabilities are required for that work. Facial recognition does not function as intended, and we cannot rely on it to make crucial judgments about housing, credit, or law enforcement.

A lot has happened since the founding of America. Urbanization has brought us closer together, and technology has linked everyone on a scale that was previously unimaginable.