
Turned Into a Ghibli Character? So Did Your Private Info

 


A popular trend is taking over social media: users are sharing cartoon-like pictures of themselves inspired by the art style of Studio Ghibli. These fun, animated portraits are typically created with AI-powered tools such as ChatGPT-4o, and from Instagram to Facebook, users are posting them excitedly. Prominent entrepreneurs and celebrities have joined the global trend, Sam Altman and Elon Musk among them.

But behind the charm of these AI filters lies a serious concern: your face is being collected and stored, often without your full understanding or consent.


What’s Really Happening When You Upload Your Face?

Each time someone uploads a photo or gives camera access to an app, they may be unknowingly allowing tech companies to capture their facial features. These features become part of a digital profile that can be stored, analyzed, and even sold. Unlike a password that you can change, your facial data is permanent. Once it’s out there, it’s out for good.

Many people don't realize how often their face is scanned, whether it's to unlock their phone, tag friends in photos, or try out AI tools that turn selfies into artwork. Even images of children and family members are being uploaded, putting their privacy at risk too.


Real-World Cases Show the Dangers

In one well-known case, a company named Clearview AI was accused of collecting billions of images from social platforms and other websites without asking permission. These were then used to create a massive database for law enforcement and private use.

In another incident, an Australian tech company called Outabox suffered a breach in May 2024. Over a million people had their facial scans and identity documents leaked. The stolen data was used for fraud, impersonation, and other crimes.

Retail stores using facial recognition to prevent theft have also become targets of cyberattacks. Once stolen, this kind of data is often sold on hidden parts of the internet, where it can be used to create fake identities or manipulate videos.


The Market for Facial Recognition Is Booming

Experts estimate the facial recognition industry will be worth over $14 billion by 2031. As demand grows, so do concerns about companies using our faces to train AI tools without transparency. Some websites can even track down a person's online profiles from a single picture.


How to Protect Yourself

To keep your face and personal data safe, it’s best to avoid viral image trends that ask you to upload clear photos. Turn off unnecessary camera permissions, don’t share high-resolution selfies, and choose passwords or PINs over face unlock for your devices.

These simple steps can help you avoid giving away something as personal as your identity. Before sharing an AI-edited selfie, take a moment to think: are a few likes worth risking your privacy? Instead, respect art and the artists who spend years perfecting their craft, and consider commissioning a portrait if you're that enthusiastic about the style.


UIUC Researchers Expose Security Risks in OpenAI's Voice-Enabled ChatGPT-4o API, Revealing Potential for Financial Scams

 

Researchers recently revealed that OpenAI's ChatGPT-4o voice API could be exploited by cybercriminals for financial scams, and that such scams succeed at low to moderate rates despite the model's built-in safeguards. The finding has raised concerns about the misuse potential of this advanced language model.

ChatGPT-4o, OpenAI’s latest AI model, offers new capabilities, combining text, voice, and vision processing. These updates are supported by security features aimed at detecting and blocking malicious activity, including unauthorized voice replication.

Voice-based scams have become a significant threat, further exacerbated by deepfake technology and advanced text-to-speech tools. Despite OpenAI’s security measures, researchers from the University of Illinois Urbana-Champaign (UIUC) demonstrated how these protections could still be circumvented, highlighting risks of abuse by cybercriminals.

Researchers Richard Fang, Dylan Bowman, and Daniel Kang emphasized that current AI tools may lack sufficient restrictions to prevent misuse. They pointed out the risk of large-scale scams using automated voice generation, which reduces the need for human effort and keeps operational costs low.

Their study examined a variety of scams, including unauthorized bank transfers, gift card fraud, cryptocurrency theft, and social media credential theft. Using ChatGPT-4o’s voice capabilities, the researchers automated key actions like navigation, data input, two-factor authentication, and following specific scam instructions.

To bypass ChatGPT-4o’s data protection filters, the team used prompt “jailbreaking” techniques, allowing the AI to handle sensitive information. They simulated interactions with ChatGPT-4o by acting as gullible victims, testing the feasibility of different scams on real websites.

By manually verifying each transaction, such as those on Bank of America’s site, they found varying success rates. For example, Gmail credential theft was successful 60% of the time, while crypto-related scams succeeded in about 40% of attempts.

Cost analysis showed that carrying out these scams was relatively inexpensive, with successful cases averaging $0.75. More complex scams, like unauthorized bank transfers, cost around $2.51—still low compared to the potential profits such scams might yield.

OpenAI responded by pointing to its newer model, o1-preview, which includes advanced safeguards designed to prevent this type of misuse. OpenAI claims that this model significantly outperforms GPT-4o in resisting unsafe content generation and handling adversarial prompts.

OpenAI also highlighted the importance of studies like UIUC’s for enhancing ChatGPT’s defenses. They noted that GPT-4o already restricts voice replication to pre-approved voices and that newer models are undergoing stringent evaluations to increase robustness against malicious use.
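The preset-voice restriction OpenAI mentions is visible in its public speech API. As a purely illustrative sketch (using the official openai Python SDK and its standard text-to-speech endpoint, which is a simpler interface than the GPT-4o realtime voice API the UIUC team studied), the voice parameter only accepts OpenAI's built-in voice names, so a caller cannot supply an arbitrary or cloned voice:

```python
# Minimal, illustrative sketch; assumes the official `openai` Python SDK (v1.x)
# and an API key in the OPENAI_API_KEY environment variable. This is the plain
# text-to-speech endpoint, not the realtime GPT-4o voice interface studied by
# the UIUC researchers.
from openai import OpenAI

client = OpenAI()

# `voice` must be one of OpenAI's preset options (e.g. "alloy", "echo", "nova");
# there is no parameter for uploading or replicating a custom voice.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="This sentence is spoken in a preset voice; the API does not accept cloned voices.",
)

# Write the generated audio to a local MP3 file.
speech.write_to_file("preset_voice_demo.mp3")
```

A developer who wants a different voice must pick another preset; this API offers no supported path for mimicking a specific person's voice.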