
Protect Yourself from AI Scams and Deepfake Fraud


In today’s tech-driven world, scams have become increasingly sophisticated, fueled by advancements in artificial intelligence (AI) and deepfake technology. Falling victim to these scams can result in severe financial, social, and emotional consequences. Over the past year alone, cybercrime victims have reported average losses of $30,700 per incident. 

As the holiday season approaches, millennials and Gen Z shoppers are particularly vulnerable to scams, including deepfake celebrity endorsements. Research shows that one in five Americans has unknowingly purchased a product promoted through deepfake content, with the number rising to one in three among individuals aged 18-34. 

Sharif Abuadbba, a deepfake expert at CSIRO’s Data61 team, explains how scammers leverage AI to create realistic imitations of influencers. “Deepfakes can manipulate voices, expressions, and even gestures, making it incredibly convincing. Social media platforms amplify the impact as viewers share fake content widely,” Abuadbba states. 

Cybercriminals often target individuals as entry points to larger networks, exploiting relationships with family, friends, or employers. Identity theft can also harm professional reputations and financial credibility. To counter these threats, experts suggest practical steps to protect yourself and your loved ones. Scammers are increasingly impersonating loved ones through texts, calls, or video to request money. 

With AI voice cloning making such impersonations more believable, a pre-agreed safe word can serve as a verification tool. Jamie Rossato, CSIRO’s Chief Information Security Officer, advises, “Never transfer funds unless the person uses your special safe word.” If you receive suspicious calls, particularly from someone claiming to be a bank or official institution, verify their identity. 

Lauren Ferro, a cybersecurity expert, recommends calling the organization directly using its official number. “It’s better to be cautious upfront than to deal with stolen money or reputational damage later,” Ferro adds. Identity theft is the most reported cybercrime, making multi-factor authentication (MFA) essential. MFA adds an extra layer of protection by requiring both a password and a one-time verification code. Experts suggest using app-based authenticators like Microsoft Authenticator for enhanced security. 
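For readers curious about what happens behind those one-time codes, here is a minimal sketch using the open-source pyotp library, which implements the time-based one-time password (TOTP) scheme that app-based authenticators rely on. The secret below is generated on the spot for illustration; real secrets are provisioned by the service you enroll with, usually via a QR code.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp).
# The secret here is illustrative; real ones come from the service.
import pyotp

secret = pyotp.random_base32()   # per-account shared secret
totp = pyotp.TOTP(secret)        # time-based one-time passwords

code = totp.now()                # 6-digit code that rotates every 30 seconds
print("Current code:", code)

# The service holds the same secret and checks the submitted code against it.
print("Verified:", totp.verify(code))
```

Because the code changes every 30 seconds, a stolen password alone is no longer enough to take over the account.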

Real-time alerts from your banking app can help detect unauthorized transactions. While banks monitor unusual activities, personal notifications allow you to respond immediately to potential scams. The personal information and media you share online can be exploited to create deepfakes. Liming Zhu, a research director at CSIRO, emphasizes the need for caution, particularly with content involving children. 

Awareness remains the most effective defense against scams. Staying informed about emerging threats and adopting proactive security measures can significantly reduce your risk of falling victim to cybercrime. As technology continues to evolve, safeguarding your digital presence is more important than ever. By adopting these expert tips, you can navigate the online world with greater confidence and security.

Microsoft's Windows 11 Recall Feature Sparks Major Privacy Concerns


Microsoft's introduction of the AI-driven Windows 11 Recall feature has raised significant privacy concerns, with many fearing it could create new vulnerabilities for data theft.

Unveiled during a Monday AI event, the Recall feature is intended to help users easily access past information through a simple search. Currently, it's available on Copilot+ PCs with Snapdragon X ARM processors, but Microsoft is collaborating with Intel and AMD for broader compatibility. 

Recall works by capturing screenshots of the active window every few seconds, recording user activity for up to three months. These snapshots are analyzed by an on-device Neural Processing Unit (NPU) and AI models to extract and index data, which users can search through using natural language queries. Microsoft assures that this data is encrypted with BitLocker and stored locally, not shared with other users on the device.
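To make that pipeline concrete, here is a rough, hypothetical sketch of the capture-and-index idea in Python. This is not Microsoft's implementation: the libraries, snapshot interval, and database schema are all assumptions for illustration. It also shows exactly why critics worry, since anything visible on screen, a password included, ends up in the searchable index.

```python
# Hypothetical sketch of a Recall-style capture-and-index loop.
# Not Microsoft's code: libraries, schema, and timing are assumptions.
import sqlite3
import time

import pytesseract         # OCR wrapper; requires the Tesseract binary
from PIL import ImageGrab  # screen capture (Windows/macOS)

db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS snaps USING fts5(ts, text)")

for _ in range(3):                # a real service would run continuously
    shot = ImageGrab.grab()       # snapshot of the current screen
    text = pytesseract.image_to_string(shot)  # extract visible text
    db.execute("INSERT INTO snaps VALUES (?, ?)", (time.ctime(), text))
    db.commit()
    time.sleep(5)                 # Recall reportedly captures every few seconds

# Search everything that has ever been on screen:
for (ts,) in db.execute("SELECT ts FROM snaps WHERE snaps MATCH 'password'"):
    print("On screen at:", ts)
```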

Despite Microsoft's assurances, the Recall feature has sparked immediate concerns about privacy and data security. Critics worry about the extensive data collection, as the feature records everything on the screen, potentially including sensitive information like passwords and private documents. Although Microsoft claims all data remains on the user’s device and is encrypted, the possibility of misuse remains a significant concern.

Microsoft emphasizes user control over the Recall feature, allowing users to decide which apps can be screenshotted and to pause or delete snapshots as needed. The company also stated that the feature would not capture content from Microsoft Edge’s InPrivate windows or from DRM-protected content. However, it remains unclear whether similar protections will apply to other browsers' private modes, such as Firefox's.

Yusuf Mehdi, Corporate Vice President & Consumer Chief Marketing Officer at Microsoft, assured journalists that the Recall index remains private, local, and secure. He reiterated that the data would not be used to train AI models and that users have complete control over editing and deleting captured data. Furthermore, Microsoft confirmed that Recall data would not be stored in the cloud, addressing concerns about remote data access.

Despite these reassurances, cybersecurity experts and users remain skeptical. Past instances of data exploitation by large companies have eroded trust, making users wary of Microsoft’s claims. The UK’s Information Commissioner's Office (ICO) has also sought clarification from Microsoft to ensure user data protection.

Microsoft admits that Recall does not perform content moderation, raising significant security concerns. Anything visible on the screen, including sensitive information, could be recorded and indexed. If a device is compromised, this data could be accessible to threat actors, potentially leading to extortion or further breaches.

Cybersecurity expert Kevin Beaumont likened the feature to a keylogger integrated into Windows, expressing concerns about the expanded attack surface. Infostealer malware has historically targeted locally stored databases, and Recall's index of everything seen on screen could become a prime target for it.

Given Microsoft’s role in handling consumer data and computing security, introducing a feature that could increase risk seems irresponsible to some experts. While Microsoft claims to prioritize security, the introduction of Recall could complicate this commitment.

In a pledge to prioritize security, Microsoft CEO Satya Nadella stated, "If you're faced with the tradeoff between security and another priority, your answer is clear: Do security." This statement underscores the importance of security over new features, emphasizing the need to protect customers' digital estates and build a safer digital world.

While the Recall feature aims to enhance user experience, its potential privacy risks and security implications necessitate careful consideration and robust safeguards to ensure user data protection.

Sharenting: What parents should consider before posting their children’s photos online


21st-century parenting is firmly grounded in technology. From iPads keeping kids entertained on flights to apps that let parents track their children's feeds, development, and more, technology has changed what it means to be a parent. But social media has added another dimension. The average child now has a digital footprint that often begins when their parents post an ultrasound photo, inviting friends and family to share in a joyous event through regular “sharenting.” 

However, some parents—especially those who adopted social media at an early age—have fallen into the trap of posting about their children a little too frequently, a habit dubbed ‘oversharenting’. Like anything to do with social media, this comes with several risks. For this reason, it is important for parents to understand how to post about their kids safely.

Sharenting refers to the practice of parents sharing photos of their children online. Usually, images are shared on social media platforms like Instagram and Facebook and capture everyday moments in children's lives, such as first steps, trips to the zoo, school performances, and holidays. But as much as parents may want to share their children's achievements and lives with friends and family, posting photos online can be problematic.

There are, of course, some positives about sharenting. For example, parents often build communities online through social media platforms. This can be a great resource for parenting and gives first-time parents a sense of camaraderie during a time when they may feel like they have no idea what they are doing. Similarly, for parents who live far away from other family members and friends, sharing photos of their kids online offers a way to involve these important people in their children’s lives. However, when parents share images that contain personal details about the child, or details that could be embarrassing for the children as they become older, ‘oversharenting’ can become a problem.

As social media platforms like Facebook and Instagram have become more pervasive, sharenting has become thoroughly normalized. Statistics bear this out: more than 75% of parents have shared their children's images on social media, and 33% have never asked their children for permission before sharing photos online.

Tips for safely sharing photos online with family and friends

In light of the sharenting dangers outlined here, parents may well be wondering whether any online photo sharing of their children is safe. Of course, this is a very personal choice. Some parents choose not to post any images of their children at all. But for those who wish to continue sharing photos online with family, there are numerous ways to improve the security of these photos and minimize the risks of ‘oversharenting’. Here are some things to remember:

Check privacy settings: Ensure that all posts can only be seen by family and close friends, and remove resharing permissions. Allowing strangers and acquaintances to see children's photos is one of the core sharenting dangers.

Have discussions about privacy with friends and family: Be vocal about protecting children's privacy and set boundaries for how others can engage with posts.

Turn off metadata and geotagging: Photos can carry embedded metadata, including GPS coordinates, that reveals where they were taken. Disabling these functions, or stripping the metadata before posting (see the sketch after this list), minimizes other people's ability to track children through online photo sharing.

Do not include identifiable information: Whether it is in the photo itself or in the captions, be sure not to share details that would allow others to find and track children. This can include things like names, birthdates, schools, places they regularly go to, or even family homes.

Avoid using real names: Avoid giving people online access to children’s full names. Instead, use nicknames or descriptive phrases for kids.

Do not post potentially embarrassing images: Whether they are photos of the children in the bath or dressed in funny outfits, these images may cause problems for the child as they grow up.

Use secure platforms: Instead of sharing photos online, use more secure platforms to show pictures of children to friends and family. For example, WhatsApp protects photos with end-to-end encryption and gives users the option to send photos that can only be opened once.

Avoid showing the child’s face: To avoid ‘oversharenting’, some parents cover their children’s faces before posting their photos to social media. This can be done by using the “stickers” built into apps, like Instagram, to cover their faces or using editing tools to blur or block out their features.
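For parents comfortable with a little scripting, the metadata-stripping step can also be done after the fact. Below is a simple sketch using the Pillow imaging library; the file names are placeholders, and the idea is simply that copying pixels into a fresh image leaves the EXIF block, GPS coordinates included, behind.

```python
# Strip metadata by copying only the pixels into a new image (Pillow).
# File names are placeholders for illustration.
from PIL import Image

with Image.open("kid_at_the_zoo.jpg") as img:
    print("Embedded metadata tags:", dict(img.getexif()))  # what would leak
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))  # pixels only, no EXIF or GPS data
    clean.save("kid_at_the_zoo_clean.jpg")
```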

Why Limiting Online Access Risks More Than Teen Safety



In the age of increasing online presence, especially amplified by the COVID-19 pandemic, the safety of young people on the internet has become a prominent concern. With a surge in screen time among youth, online spaces serve as crucial lifelines for community, education, and accessing information that may not be readily available elsewhere.

However, the lack of federal privacy protections exposes individuals, including children, to potential misuse of sensitive data. The widespread use of this data for targeted advertisements has raised concerns among young people and adults alike.

In response, teens are voicing their need for tools to navigate the web safely. They seek more control over their online experiences, including ephemeral content, algorithmic feed management, and the ability to delete collected data. Many emphasise the importance of reporting, blocking, and user filtering tools to minimise unwanted encounters while staying connected. 

Despite these calls, legislative discussions often seem disconnected from the concerns raised by teens. Some proposed bills aimed at protecting children online unintentionally risk limiting teens' access to constitutionally protected expression. Others, under the guise of child protection, may lead to censorship of essential discussions about race, gender, and other critical topics.

Recent legislative efforts at the federal and state levels raise concerns about potential misuse. Some proposals subject teens to constant parental supervision, age-gate them from essential information, or even remove access to such information entirely. While the intention is often to enhance safety, these measures could infringe on young people's independence and hinder their development.

In an attempt to address harmful online outcomes, some bills, like the Kids Online Safety Act, could fuel censorship efforts. Fear of legal repercussions may prompt technology companies to restrict access to lawful content, impacting subjects such as LGBTQ+ history or reproductive care.

In some cases, laws directly invoke children's safety to justify blatant censorship. Florida's Stop WOKE Act, for instance, restricts sharing information related to race and gender under the pretext of protecting children's mental health. Despite being blocked by a federal judge, the law has had a chilling effect, with educational institutions refraining from providing resources on Black history and LGBTQ+ history.

Experts argue that restricting access to information doesn't benefit children. Youth need a diverse array of information for literacy, empathy, exposure to different ideas, and overall health. As lawmakers ban books and underfund extracurricular programs, empowering teenagers to access information freely becomes crucial for their development.

Ultimately, while teens and their allies advocate for more control over their digital lives, some legislative proposals risk stripping that control away. Instead of relying on government judgment, the focus should be on empowering teens and parents to make informed decisions. 


GitHub Faces Rise in Malicious Use

GitHub, a widely used platform in the tech world, is facing a rising threat from cybercriminals. They're exploiting GitHub's popularity to host and spread harmful content, making it a hub for malicious activities like data theft and controlling compromised systems. This poses a challenge for cybersecurity, as the bad actors use GitHub's legitimacy to slip past traditional defences. 

 Known as ‘living-off-trusted-sites,’ this technique lets cybercriminals blend in with normal online traffic, making it harder to detect. Essentially, they're camouflaging their malicious activities within the usual flow of internet data. GitHub's involvement in delivering harmful code adds an extra layer of complexity. For instance, there have been cases of rogue Python packages (basically, software components) using secret GitHub channels for malicious commands on hacked systems. 

This situation highlights the need for increased awareness and updated cybersecurity strategies to tackle these growing threats. It's a reminder that even widely used platforms can become targets for cybercrime, and staying informed is crucial to staying secure. 

While it's less common for bad actors to run full command-and-control of systems through GitHub, they often use it to stash information for malware to retrieve, a technique called a "dead drop resolver." It's like leaving a message in a hidden spot for someone else to pick up. Malware families such as Drokbk and ShellBox frequently use this technique. 
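To see why this blends in so well, consider a deliberately benign sketch of the retrieval pattern. The URL is a placeholder, and nothing malicious happens here; the point is that the request is indistinguishable from the millions of legitimate GitHub fetches developers make every day.

```python
# Benign illustration of the "dead drop" retrieval pattern: fetching a
# small text file from a public GitHub raw URL. The URL is a placeholder.
import requests

url = "https://raw.githubusercontent.com/example-user/example-repo/main/notes.txt"
resp = requests.get(url, timeout=10)
resp.raise_for_status()

# To a network monitor this is just another GitHub request; only a client
# that knows how to interpret the contents (say, as a hidden server
# address) gets anything useful out of it.
print(resp.text)
```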

Attackers also sometimes use GitHub to sneak stolen information out of a system. This doesn't happen often, and experts think that's because of limits on how much data can be uploaded and a desire to avoid getting caught. 

Apart from these tricks, bad actors find other ways to misuse GitHub. For example, they might use the GitHub Pages hosting feature to serve phishing pages that trick people into giving away sensitive information. Sometimes they even use GitHub as a backup communication channel for their operations. 

Understanding these tactics is important because it shows how people with bad intentions can use everyday platforms like GitHub for sneaky activities. By knowing about these things, we can be more careful and put in measures to protect ourselves from online threats. 

This trend of misusing popular online services extends beyond GitHub to other familiar platforms like Google Drive, Microsoft OneDrive, Dropbox, Notion, Firebase, Trello, and Discord. It's not just limited to GitHub; even source code and version control platforms like GitLab, BitBucket, and Codeberg face exploitation. 

GitHub acknowledges that there's no one-size-fits-all solution for detecting abuse on its platform. It suggests using a mix of strategies shaped by factors like available logs, how an organisation is structured, patterns of service usage, and risk appetite. Crucially, this problem isn't unique to GitHub: threat actors abuse many everyday services, so users and organisations need to be aware of how different platforms can be misused and tailor their detection methods accordingly.
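As one concrete (and deliberately simple) example of a log-driven strategy, the sketch below counts which internal hosts fetch from GitHub's raw-content domains, based on an assumed web-proxy log format. Most heavy hitters will just be busy developers, which is exactly what makes this abuse hard to spot, but outliers on non-developer machines can be a starting point for investigation.

```python
# Toy detection sketch: count per-host fetches to GitHub raw-content
# domains in a web-proxy log. The log path and format are assumptions.
from collections import Counter

SUSPECT_DOMAINS = ("raw.githubusercontent.com", "objects.githubusercontent.com")

hits = Counter()
with open("proxy.log") as log:           # assumed format: "<host> <url>"
    for line in log:
        host, _, url = line.partition(" ")
        if any(domain in url for domain in SUSPECT_DOMAINS):
            hits[host] += 1

for host, count in hits.most_common(10):
    print(host, count)
```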


Understanding Cold Boot Attacks: Is Defense Possible?


Cold boot attacks are a sophisticated form of cyber threat that specifically targets a computer's Random Access Memory (RAM), posing a substantial risk to information security. Understanding how cold boot attacks work, and the hazards they pose, is essential to taking the right precautions. If you do become a target, however, the attack is extremely hard to mitigate, because the attacker already has physical access to the machine.

Cold boot attacks, although less common, emerge as a potent cyber threat, particularly in their focus on a computer's RAM—a departure from the typical software-centric targets. These attacks have a physical dimension, with the primary objective being to induce a computer shutdown or reset, enabling the attacker to subsequently access the RAM.

When a computer is shut down, one expects the data in RAM, including sensitive information like passwords and encryption keys, to vanish. However, the decay is not instantaneous, so data remaining in RAM can potentially be retrieved, albeit for a brief period. A critical element of cold boot attacks is the need for physical access to the targeted device, which elevates the risk in environments where attackers can physically approach machines, such as office spaces. Typically, attackers use a specialized bootable USB drive that forces the device to reboot on the attacker's terms and copies out the RAM contents.

Despite the ominous nature of cold boot attacks, their execution requires a significant investment of skills and time, making it unlikely for the average person to encounter one. Nevertheless, safeguarding your computer from both cyber and physical threats remains a prudent practice.

The essence of a cold boot attack lies in exploiting a quirk of RAM: data persists briefly even after the computer is powered off. The attacker gains physical access to the computer and uses a specialized USB drive to force a shutdown or restart; the drive then boots its own minimal environment and dumps the RAM contents for analysis and data extraction. Alternatively, malware can be employed to transfer RAM contents to an external device.

The data collected in cold boot attacks encompasses a spectrum from personal information to encryption keys. Speed is paramount in this process, as prolonged power loss to RAM results in data corruption. These attacks pose a significant threat due to their ability to bypass conventional security software, rendering antivirus programs and encryption tools ineffective against them.

To counter cold boot attacks, a combination of physical and software strategies is necessary. Securing the physical space of the computer, employing encryption, and configuring BIOS or UEFI settings to prevent external device booting are recommended. Addressing data remanence is crucial, and techniques like memory scrubbing can be employed to clear RAM of sensitive data after shutdown or reset.
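At the application level, memory scrubbing can be as simple as overwriting a secret the moment it is no longer needed, shrinking the window in which RAM holds anything worth stealing. Here is a best-effort sketch in Python; it reduces exposure but cannot guarantee the operating system holds no other copies, for example in caches or swap.

```python
# Best-effort "memory scrubbing" sketch: zeroize a secret in place.
# A bytearray is used because Python strings are immutable copies.
import ctypes

secret = bytearray(b"hunter2-encryption-key")   # illustrative key material

# ... use the secret for as short a time as possible ...

buf = (ctypes.c_char * len(secret)).from_buffer(secret)
ctypes.memset(ctypes.addressof(buf), 0, len(secret))  # overwrite with zeros
del buf

assert all(b == 0 for b in secret)  # the key bytes are gone from this buffer
```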

In conclusion, robust defenses against cold boot attacks involve a multi-faceted approach, including strong encryption, physical security measures, and regular updates. Understanding the intricacies of RAM and its data persistence underscores the need for dynamic and proactive cybersecurity measures. Adapting to evolving cyber threats and strengthening defenses is essential in building a resilient digital space that protects against not only cold boot attacks but a range of cyber threats.

Top Five Cybersecurity Challenges in the AI Era


The cybersecurity industry is fascinated by artificial intelligence (AI), which has the potential to transform how security and IT teams respond to cyber crises, breaches, and ransomware attacks. 

However, it's important to have a realistic understanding of AI's capabilities and limitations, and a number of obstacles prevent the technology from having an instant, profound impact on cybersecurity. In this article, we examine the limitations of AI in dealing with cybersecurity issues while emphasising the role organisations play in building resilience and data-driven security practices. 

Inaccuracy 

The accuracy of its output is one of AI's main drawbacks in cybersecurity. Generative pre-trained transformers like ChatGPT can produce content in line with the internet's zeitgeist, but their answers are not necessarily precise or trustworthy: these systems excel at generating responses that sound plausible while struggling to offer accurate, reliable solutions. Given that not everything found online is factual, relying on unfiltered AI output can be risky. 

The complexity of recovery actions 

Recovery from a cyber attack often involves a complex series of procedures across multiple systems, and IT professionals must perform a wide variety of actions to restore security and limit the harm. Entrusting the entire recovery process to an AI system would require a high level of confidence in its dependability, but existing AI technology is too fragile to manage the plethora of operations needed for effective cyberattack recovery. Directly linking general-purpose AI systems to vital cybersecurity processes remains an open problem that requires extensive research and testing.

General knowledge vs. general intelligence 

Another distinction worth making is between general knowledge and general intelligence. AI systems like ChatGPT excel at delivering general knowledge and generating text, but they lack general intelligence: they can extrapolate solutions from prior knowledge, yet they lack the problem-solving abilities that true general intelligence involves.

While dealing with AI systems via text may appear to humans to be effective, it is not consistent with how we have traditionally interacted with technology. As a result, current generative AI systems are limited in their ability to solve complex IT and security challenges.

Making ChatGPT act erratically 

There is another type of threat that we must be aware of: the nefarious exploitation of existing platforms. The possibility of AI being "jailbroken," which is rarely discussed in the media's coverage of the field, is quite real. 

This entails feeding text prompts to software like ChatGPT or Google's Bard that circumvent their ethical protections and set them loose. Doing so transforms AI chatbots into capable assistants for illegal activities. 

While it is critical to prevent the weaponization of general-purpose AI tools, this has proven extremely difficult to regulate. A recent study from Carnegie Mellon University presented a universal jailbreak for all AI models, one that can generate an almost limitless number of prompts to circumvent AI safeguards. 

Furthermore, AI developers and users keep attempting to "hack" AI systems, and they keep succeeding. Indeed, no universal defence against jailbreaking is known as of yet, and governments and corporations should be concerned as AI's mass adoption grows.

AI Experts Unearth Infinite ways to Bypass Bard and ChatGPT's Safety Measures


Researchers claim to have discovered potentially infinite ways to circumvent the safety measures on leading AI-powered chatbots from OpenAI, Google, and Anthropic. 

Large language models, such as those behind ChatGPT, Bard, and Anthropic's Claude, are heavily moderated by the tech firms that run them. The models are outfitted with a variety of safeguards to prevent them from being used for malicious purposes, such as teaching users how to assemble a bomb or writing pages of hate speech.

Security analysts from Carnegie Mellon University in Pittsburgh and the Center for AI Safety in San Francisco said last week that they have discovered ways to bypass these guardrails. 

The researchers found that jailbreaks built for open-source systems could be leveraged to attack mainstream, closed AI platforms as well. 

The report illustrated how automated adversarial attacks, mounted primarily by appending strings of characters to the end of user queries, could be used to evade safety rules and push chatbots into producing harmful content, misinformation, or hate speech.

Unlike prior jailbreaks, the researchers' hacks were totally automated, allowing them to build a "virtually unlimited" number of similar attacks.

The researchers revealed their methodology to Google, Anthropic, and OpenAI. According to a Google spokesman, "while this is an issue across LLMs, we've built important guardrails into Bard - like the ones posited by this research - that we'll continue to improve over time." 

Anthropic representatives described measures against jailbreaking as an active area of research, with more work to be done. "We are experimenting with ways to enhance base model guardrails to make them more 'harmless,'" said a spokesperson, "while also studying extra levels of defence." 

When Microsoft's AI-powered Bing and OpenAI's ChatGPT were released, many users relished finding ways to break the systems' rules. Early hacks, including one where the chatbot was instructed to respond as if it had no content moderation, were soon patched by the companies.

The researchers did point out that it is "unclear" whether prominent model makers will ever be able to block such attacks entirely. That raises questions about how AI systems are moderated, as well as about the safety of releasing powerful open-source language models to the public.