An artificial intelligence (AI) system developed by a team of researchers can safeguard users from unauthorized facial scanning by malicious actors. The AI model, dubbed Chameleon, uses a novel masking technique to generate a mask that conceals faces in images while preserving the visual quality of the protected photo.
Furthermore, the researchers state that the model is resource-optimized, meaning it can run even on hardware with limited computing power. While the Chameleon AI model has not been made public yet, the team says it intends to release it soon.
Researchers at the Georgia Institute of Technology (Georgia Tech) described the AI model in a paper published on arXiv, an online pre-print repository. The tool adds an invisible mask to faces in an image, making them unrecognizable to facial recognition algorithms. This lets users protect their identities from criminal actors and AI data-scraping bots attempting to scan their faces.
“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” stated Ling Liu, professor of data and intelligence-powered computing at Georgia Tech's School of Computer Science and the lead author of the research paper.
Chameleon employs a distinctive masking approach known as the Personalized Privacy Protection (P-3) Mask. Once the mask is applied, facial recognition software can no longer identify the person in the photo; according to the researchers, the scans depict the face "as being someone else."
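Chameleon's code has not yet been released, so the exact P-3 mask procedure is not public. The sketch below is only a rough illustration of the general idea behind this family of techniques: it optimizes a small additive mask so that a face-embedding model no longer matches the original identity, while a quality term keeps the change visually subtle. The embedding model, loss weights, and step count are placeholder assumptions, not the paper's actual method.

```python
# Hedged sketch, NOT the released Chameleon code: optimize a small mask so a
# face-embedding model stops matching the original identity, while an image-quality
# penalty keeps the perturbation hard to notice.
import torch
import torch.nn.functional as F


def make_privacy_mask(image, embed_model, steps=200, lr=0.01, quality_weight=10.0):
    """image: (1, 3, H, W) tensor in [0, 1]; embed_model: any face-embedding network."""
    original_embedding = embed_model(image).detach()
    mask = torch.zeros_like(image, requires_grad=True)   # learnable P-3-style mask
    optimizer = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        protected = (image + mask).clamp(0.0, 1.0)
        embedding = embed_model(protected)
        # Push the protected face away from its own identity in embedding space...
        identity_loss = F.cosine_similarity(embedding, original_embedding).mean()
        # ...while penalizing visible distortion to preserve image quality.
        quality_loss = F.mse_loss(protected, image)
        loss = identity_loss + quality_weight * quality_loss

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (image + mask).clamp(0.0, 1.0).detach()
```

In practice, the trade-off between how strongly the identity is hidden and how visible the mask becomes is controlled here by `quality_weight`; the actual Chameleon system reportedly handles this balance, and per-user mask reuse, with its own optimization scheme.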
While face-masking technologies have existed before, the Chameleon AI model innovates in two key areas: it preserves the visual quality of the protected image, and it is resource-efficient enough to run on devices with limited computing power.
The researchers announced their plans to make Chameleon's code publicly available on GitHub soon, calling it a significant breakthrough in privacy protection. Once released, developers will be able to integrate the open-source AI model into various applications.
Separately, Google has detailed Med-Gemini, a family of AI models fine-tuned for medicine. According to the company, the models outperform GPT-4 models in benchmark testing. One of their standout qualities is their long-context capability, which enables them to process and analyze research papers and health records.
The Med-Gemini paper is available on arXiv, an open-access repository for academic research, and is presently at the pre-print stage. In a post on X (formerly known as Twitter), Jeff Dean, Chief Scientist at Google DeepMind and Google Research, expressed his excitement about the models' potential to improve patient and physician understanding of medical issues, writing, “I believe that one of the most significant application areas for AI will be in the healthcare industry.”
The AI model has been fine-tuned to boost performance when processing long-context data. Higher-quality long-context processing lets the chatbot offer more precise, pinpointed answers even when questions are imperfectly phrased or when it is working through a large set of medical records.
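Med-Gemini itself is not exposed through a public API, so the snippet below uses Google's general-purpose Gemini API as a stand-in to show what a long-context query over a large record set could look like. The model name, file path, and prompt are illustrative assumptions, not part of the Med-Gemini release.

```python
# Illustrative only: Med-Gemini is not publicly available, so this uses the
# general-purpose Gemini API as a stand-in for a long-context medical query.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")          # assumption: you have an API key
model = genai.GenerativeModel("gemini-1.5-pro")  # long-context stand-in, not Med-Gemini

# Load an entire (de-identified, synthetic) patient history as one long document.
with open("synthetic_patient_history.txt", "r", encoding="utf-8") as f:
    records = f.read()

# A loosely phrased question; a long-context model can still ground the answer
# in the relevant parts of the full record.
question = "Has this patient ever had anything that might explain recurring chest pain?"

response = model.generate_content(
    f"Patient records:\n{records}\n\nQuestion: {question}"
)
print(response.text)
```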
Med-Gemini isn’t limited to text-based responses. It seamlessly integrates with medical images and videos, making it a versatile tool for clinicians.
Imagine a radiologist querying Med-Gemini about an X-ray image. The model can not only provide textual information but also highlight relevant areas in the image.
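As a rough sketch of what such a multimodal query might look like, the snippet below again uses the public Gemini API as a stand-in, since Med-Gemini is not publicly accessible. The image file, model name, and prompt are hypothetical, and the response here is text describing regions rather than an annotated image.

```python
# Illustrative only: sends a chest X-ray plus a question to the general-purpose
# Gemini multimodal API as a stand-in for a Med-Gemini-style radiology query.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")          # assumption: an API key is available
model = genai.GenerativeModel("gemini-1.5-pro")  # stand-in multimodal model

xray = Image.open("chest_xray.png")              # hypothetical local image file

response = model.generate_content([
    xray,
    "Describe any notable findings in this chest X-ray and point out the regions "
    "of the image where they appear.",
])
print(response.text)
```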
Med-Gemini’s forte lies in handling lengthy health records and research papers. It doesn’t shy away from complex queries or voluminous data.
Clinicians can now extract precise answers from extensive patient histories, aiding diagnosis and treatment decisions.
Med-Gemini builds upon the foundation of the Gemini 1.0 and Gemini 1.5 large language models (LLMs). These models are fine-tuned for medical contexts.
Google’s self-training approach teaches the model when to draw on web search. Med-Gemini delivers nuanced answers, fact-checking information against reliable sources.
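That self-training procedure itself is part of Google's training pipeline and is not something the snippet below reproduces. Instead, this is a much simpler inference-time pattern, with hypothetical `generate` and `web_search` placeholders, showing the general "answer, then check claims against retrieved sources" idea.

```python
# Hedged sketch of an inference-time "answer, then verify against search results" loop.
# generate() and web_search() are hypothetical placeholders, not Med-Gemini's actual
# self-training procedure described by Google.
from typing import List


def generate(prompt: str) -> str:
    """Placeholder for a call to a medical LLM (e.g. via an API client)."""
    return "Draft answer with one factual claim per line."


def web_search(query: str) -> List[str]:
    """Placeholder for a web / medical-literature search returning text snippets."""
    return ["Snippet from a reliable source relevant to the query."]


def answer_with_verification(question: str) -> str:
    # 1. Draft an answer first.
    draft = generate(f"Answer this medical question concisely:\n{question}")

    # 2. Retrieve evidence for each claim in the draft.
    evidence: List[str] = []
    for claim in draft.splitlines():
        if claim.strip():
            evidence.extend(web_search(claim))

    # 3. Revise the draft so it keeps only claims supported by the evidence.
    revision_prompt = (
        f"Question: {question}\n"
        f"Draft answer:\n{draft}\n"
        f"Evidence snippets:\n" + "\n".join(evidence) + "\n"
        "Rewrite the answer, keeping only claims supported by the evidence "
        "and citing the snippet each claim relies on."
    )
    return generate(revision_prompt)


print(answer_with_verification("What are first-line treatments for mild asthma?"))
```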
Imagine a physician researching a rare disease. Med-Gemini not only retrieves relevant papers but also synthesizes insights.
It’s like having an AI colleague who reads thousands of articles in seconds and distills the essential knowledge.
Med-Gemini empowers healthcare providers to offer better care. It aids in diagnosis, treatment planning, and patient education.
Patients benefit from accurate information, demystifying medical jargon and fostering informed discussions.
As with any AI, ethical use is crucial. Med-Gemini must respect patient privacy, avoid biases, and prioritize evidence-based medicine.
Google’s commitment to transparency and fairness will be critical in its adoption.