
Meta Introduces AI Features For Ray-Ban Glasses in Europe

 

Meta has officially introduced certain AI functions for its Ray-Ban Meta smart glasses in France, Italy, and Spain, marking a significant step in the company's rollout of its wearable technology across Europe. 

Starting earlier this week, customers in these countries have been able to interact with Meta's AI assistant solely through their voice, allowing them to ask general questions and receive responses through the glasses. 

As part of Meta's larger initiative to make its AI assistant more widely available, this latest deployment covers French, Italian, and Spanish in addition to English. The announcement was made nearly a year after the Ray-Ban Meta spectacles were first released in September 2023.

In a blog post outlining the update, Meta stated, "We are thrilled to introduce Meta AI and its cutting-edge features to regions of the EU, and we look forward to expanding to more European countries soon." However, not all of the features accessible in other regions will be included in the European rollout. 

While customers in the United States, Canada, and Australia benefit from multimodal AI capabilities on their Ray-Ban Meta glasses, such as the ability to gain information about objects in view of the glasses' camera, these functions will not be included in the European update at present.

For example, users in the United States can ask their glasses to identify landmarks in their surroundings, such as "Tell me more about this landmark," but these functionalities are not available in Europe due to ongoing regulatory issues. 

Meta has stated its commitment to dealing with Europe's complicated legal environment, specifically the EU's AI Act and the General Data Protection Regulation (GDPR). The company indicated that it is aiming to offer multimodal capabilities to more countries in the future, but there is no set date. 

While the rollout in France, Italy, and Spain marks a significant milestone, Meta's journey in the European market is far from done. As the firm navigates the regulatory landscape and expands its AI solutions, users in Europe can expect more updates and new features for their Ray-Ban Meta glasses in the coming months. 

As Meta continues to grow its devices and expand its AI capabilities, all eyes will be on how the firm adjusts to Europe's legal system and how this will impact the future of AR technology worldwide.

PyPI Attack: Hackers Use AI Models to Deliver JarkaStealer via Python Libraries


Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. 

The supply-chain campaign highlights how cyber threats targeting developers are evolving and underscores the urgent need for caution when using open-source packages. 


The attack vector

Named gptplus and claudeai-eng, the packages were uploaded by a user called "Xeroline" last year and accumulated 1,748 and 1,826 downloads, respectively. The two libraries can no longer be downloaded from PyPI. According to Kaspersky, the malicious packages were uploaded to the repository by a single author and differed only in name and description. 

Experts believe the packages claimed to offer access to the GPT-4 Turbo and Claude AI APIs but contained malicious code that, upon installation, deployed malware. 

Specifically, the "__init__.py" file in these packages contained Base64-encoded data that decoded into code to download a Java archive ("JavaUpdater.jar") from a GitHub repository, fetch the Java Runtime Environment (JRE) from a Dropbox URL if Java is not already present on the host, and then execute the JAR file.
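
For defenders, this kind of behaviour is relatively easy to hunt for after the fact. Below is a minimal, purely illustrative Python sketch (not a production scanner; the site-packages path, the suspicious hosts, and the heuristics are assumptions) that flags installed packages whose __init__.py contains large Base64 blobs decoding to download URLs or JAR references, the pattern described above.

```python
import base64
import re
from pathlib import Path

# Hypothetical site-packages location; adjust for your environment or virtualenv.
SITE_PACKAGES = Path("venv/lib/python3.11/site-packages")

# Heuristics: long Base64-looking blobs and hard-coded download hosts are
# common traits of droppers hidden in package __init__.py files.
B64_BLOB = re.compile(r"[A-Za-z0-9+/=]{200,}")
SUSPICIOUS_HOSTS = ("githubusercontent.com", "dropbox.com", "dl.dropboxusercontent.com")

def scan_init_files(root: Path) -> None:
    for init_file in root.rglob("__init__.py"):
        text = init_file.read_text(errors="ignore")
        findings = []
        for blob in B64_BLOB.findall(text):
            try:
                decoded = base64.b64decode(blob).decode("utf-8", errors="ignore")
            except Exception:
                continue  # not valid Base64 after all
            if any(host in decoded for host in SUSPICIOUS_HOSTS) or ".jar" in decoded:
                findings.append("Base64 blob decoding to a download/JAR reference")
        if any(host in text for host in SUSPICIOUS_HOSTS):
            findings.append("hard-coded download URL")
        if findings:
            print(f"[!] {init_file}: {', '.join(findings)}")

if __name__ == "__main__":
    scan_init_files(SITE_PACKAGES)
```

A heuristic like this will produce false positives on legitimate packages, but it illustrates how little it takes to surface the dropper pattern Kaspersky describes.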

The impact

The JAR file contains JarkaStealer, an information stealer that can harvest a wide range of sensitive data, including web browser data, system information, session tokens, and screenshots, from applications such as Steam, Telegram, and Discord. 

In the final step, the stolen data is archived, sent to the attacker's server, and then deleted from the victim's machine. JarkaStealer is offered under a malware-as-a-service (MaaS) model through a Telegram channel for between $20 and $50, although its source code has also been leaked on GitHub. 

ClickPy statistics suggest the packages were downloaded over 3,500 times, primarily by users in China, the U.S., India, Russia, Germany, and France. The attack was part of a year-long supply chain campaign. 

How JarkaStealer steals

  • Steals web browser data: cookies, browsing history, and saved passwords. 
  • Collects system data, including OS details and user login details.
  • Steals session tokens from apps like Discord, Telegram, and Steam.
  • Captures real-time desktop activity through screenshots.

The stolen information is compressed, transmitted to a remote server controlled by the attacker, and then deleted from the victim's device.

India Faces Rising Ransomware Threat Amid Digital Growth

 


With its rapid digital growth and reliance on technology, India is on the hit list of cybercriminals. As one of the world's biggest economies, the country presents a distinct target that cyber-crooks can exploit through security holes in businesses, institutions, and personal devices.

India saw a 51 percent surge in ransomware attacks in 2023, according to the Indian Computer Emergency Response Team (CERT-In). Small and medium-sized businesses have been an especially vulnerable target, with more than 300 small banks forced to close briefly in July after falling prey to a ransomware attack. For millions of Indians using digital banking for daily purchases and payments, such disruptions underscore the need for stronger cybersecurity measures. A report from Kaspersky shows that 53% of SMBs operating in India have experienced ransomware incidents so far this year, with more than 559 million cases reported over just two months, April and May.

Cybercriminals are not only locking up business computers but also extending attacks to individuals and their personal devices, stealing sensitive and highly confidential information. Well-organised groups active in this wave include Mallox, RansomHub, LockBit, Kill Security, and ARCrypter. These groups exploit weaknesses in Indian infrastructure and rely on ransomware-as-a-service platforms that target Microsoft SQL databases. Recovery costs for affected organisations usually exceeded ₹11 crore and averaged ₹40 crore per incident in India, according to estimates for 2023. The financial sector, in particular the National Payments Corporation of India (NPCI), has been hit hard, making it clear that India's digital financial framework urgently needs strengthening.

Cyber Defence Through AI

Indian organisations are now employing AI to fortify their digital defences. AI-based tools process enormous volumes of data in real time and flag anomalies far faster than any manual system. In high-risk sectors from finance to healthcare, AI is becoming integral to cybersecurity strategy. Lenovo's recent AI-enabled security initiatives exemplify how the technology has gone mainstream, with 71% of retailers in India adopting or planning to adopt AI-powered security.

As India pushes forward on its digital agenda, the threat of ransomware cannot be taken lightly. Countering it will require close collaboration between government and private entities, investment in AI and cybersecurity education, and the creation of safer digital environments. The government's Cyber Commando initiative promises forward movement, but collective effort will be crucial to safeguarding India's burgeoning digital economy.


Embargo Ransomware Uses Custom Rust-Based Tools for Advanced Defense Evasion

 


Researchers at ESET report that Embargo ransomware is using custom Rust-based tools to overcome cybersecurity defences built by vendors such as Microsoft and IBM. An instance of this new toolkit was observed during a ransomware incident targeting US companies in July 2024 and was composed of a loader and an EDR killer, named MDeployer and MS4Killer respectively. 

Unlike much off-the-shelf malware, MS4Killer is customized for each victim's environment, targeting only the security solutions deployed there, which makes it particularly dangerous to defenders unaware of its existence. The tools appear to have been developed together, and some of their functionality overlaps. ESET's report notes that MDeployer, MS4Killer and the Embargo ransomware payload are all written in Rust, indicating that this is the group's preferred programming language. 

The Embargo gang was first identified in the summer of 2024. The group appears to be well resourced, able to develop custom tooling and to operate its own infrastructure for communicating with victims. It uses a double-extortion method: in addition to encrypting victims' data, it exfiltrates the data and threatens to publish it on a leak site. 

Moreover, ESET assesses Embargo to be a ransomware-as-a-service (RaaS) provider that supplies its ransomware to affiliates, and the group is able to adjust quickly during attacks. “The main purpose of the Embargo toolkit is to secure successful deployment of the ransomware payload by disabling the security solution in the victim’s infrastructure. Embargo puts a lot of effort into that, replicating the same functionality at different stages of the attack,” the researchers wrote. 

“We have also observed the attackers’ ability to adjust their tools on the fly, during an active intrusion, for a particular security solution,” they added. MDeployer is the main malicious loader Embargo attempts to deploy on victims’ machines in the compromised network. Its purpose is to facilitate ransomware execution and file encryption. It executes two payloads, MS4Killer and the Embargo ransomware, and decrypts two encrypted files, a.cache and b.cache, that were dropped by an unknown previous stage. 

When the ransomware finishes encrypting the system, MDeployer terminates the MS4Killer process, deletes the decrypted payloads and a driver file dropped by MS4Killer, and finally reboots the system. Another feature of MDeployer is that, when executed with admin privileges as a DLL file, it attempts to reboot the victim's system into Safe Mode to disable selected security solutions. Because most cybersecurity defences do not run in Safe Mode, this helps the threat actors avoid detection. 

MS4Killer is a defense evasion tool that terminates security product processes using a technique known as bring your own vulnerable driver (BYOVD). It terminates security products from the kernel by installing and abusing a vulnerable driver that it carries in a global variable. MS4Killer appears to be based on the open-source s4killer proof of concept, in which the identifier of the process to terminate is passed as a program argument. 

Embargo has extended the tool’s functionality with features such as running in an endless loop to constantly scan for running processes and hardcoding the list of process names to kill in the binary. After disabling the security tooling, Embargo affiliates can run the ransomware payload without worrying whether their payload gets detected. During attacks, the group can also adjust to the environment quickly, which is another advantage.
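
Because BYOVD attacks hinge on a known-vulnerable driver being present, defenders can get early warning simply by inventorying installed drivers against a reference list such as the community-maintained loldrivers.io dataset. Below is a minimal, illustrative Python sketch for Windows; the blocklist entries are placeholders rather than the driver Embargo actually abuses, and a real deployment would pull the list from a maintained source.

```python
import csv
import io
import subprocess

# Placeholder entries; in practice this set would come from a maintained
# source of known-vulnerable driver names such as the loldrivers.io dataset.
KNOWN_VULNERABLE = {"exampledriver1.sys", "exampledriver2.sys"}
BLOCKED_MODULES = {name.removesuffix(".sys") for name in KNOWN_VULNERABLE}

def list_installed_drivers() -> list[dict]:
    # driverquery is a built-in Windows command; /v adds detail, /fo csv emits CSV.
    proc = subprocess.run(["driverquery", "/v", "/fo", "csv"],
                          capture_output=True, text=True, check=True)
    return list(csv.DictReader(io.StringIO(proc.stdout)))

def flag_vulnerable_drivers() -> None:
    for drv in list_installed_drivers():
        module = drv.get("Module Name", "").strip().lower()
        if module in BLOCKED_MODULES or f"{module}.sys" in KNOWN_VULNERABLE:
            print(f"[!] Known-vulnerable driver present: {module} "
                  f"({drv.get('Display Name', '')})")

if __name__ == "__main__":
    flag_vulnerable_drivers()
```

An inventory check like this will not stop a determined intruder, but it can surface the tell-tale precondition that tools such as MS4Killer depend on before encryption ever begins.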


Microsoft Introduces AI Solution for Erasing Ex from Memories

 


Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally distressed woman who turns to artificial intelligence to erase her past. The movie clearly centres on data and privacy, but humans are social animals: they need someone to listen to them, guide them, or simply be there as they go through life. The CEO of Microsoft AI, Mustafa Suleyman, spoke about this recently in a CNBC interview. 

During the interview, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing," creating a close relationship between AI and humans. Microsoft, OpenAI, and Google have all announced AI assistants aimed at the workplace.  

Microsoft CEO Satya Nadella has announced that Windows will launch a new feature called Recall. Recall's semantic search goes beyond keyword search: it digs into a user's digital history to recreate moments from the past and trace them back to when they happened. Microsoft's AI CEO, Mustafa Suleyman, has also announced a redesign of Copilot, the company's artificial intelligence assistant. 

The revamped Copilot reflects the same vision of an AI companion that changes how users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he calls a "technological paradigm shift." 

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time. The Wall Street Journal reported that Microsoft CEO Satya Nadella explained in an interview that "Recall is not just about documents." 

A sophisticated AI model embedded directly on the device takes screenshots of the user's activity and feeds the collected data into an on-device database that analyzes those activities. Using neural processing, every image and interaction becomes searchable, down to searching within the images themselves. The feature has raised concerns, with Elon Musk warning in a characteristic post that this is akin to an episode of Black Mirror and saying he would be turning the "feature" off. 
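
To see why semantic search is more powerful than keyword matching, here is a minimal, purely illustrative Python sketch; it is not Microsoft's implementation, and the model name and the example screenshot snippets are assumptions. It embeds text that OCR might have pulled from captured screenshots and ranks it against a natural-language query.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# Text that OCR might have extracted from three captured screenshots (made-up examples).
screenshots = [
    "Flight confirmation: BLR -> DEL, 14 March, seat 12A",
    "Quarterly budget spreadsheet, marketing spend up 8%",
    "Chat with Priya about the birthday dinner on Saturday",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode(screenshots, convert_to_tensor=True)

# A semantic query: no keyword overlap with "Flight confirmation" is required.
query = "when am I travelling to Delhi?"
query_emb = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(f"Best match: {screenshots[best]!r} (score={float(scores[best]):.2f})")
```

The query shares almost no words with the best-matching screenshot, yet embedding similarity still surfaces it, which is the essential difference from a keyword index.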

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

Downside of Tech: Need for Upgraded Security Measures Amid AI-driven Cyberattacks


Technological advancements have brought about an unparalleled transformation in our lives. However, the flip side of this progress is the escalating threat posed by AI-driven cyberattacks.

Rising AI Threats

Artificial intelligence, once considered a tool for enhancing security measures, has become a threat. Cybercriminals are leveraging AI to orchestrate more sophisticated and pervasive attacks. AI’s capability to analyze vast amounts of data at lightning speed, identify vulnerabilities, and execute attacks autonomously has rendered traditional security measures obsolete. 

Sneha Katkar from Quick Heal notes, “The landscape of cybercrime has evolved significantly with AI automating and enhancing these attacks.”

The Cost of AI-Driven Cybercrime

From January to April 2024, Indians lost about Rs 1,750 crore to fraud, as reported by the Indian Cybercrime Coordination Centre. Cybercrime has led to major financial setbacks for both people and businesses, with phishing, ransomware, and online fraud becoming more common.

As AI technology advances rapidly, there are rising concerns about its ability to boost cyberattacks by generating more persuasive phishing emails, automating harmful activities, and creating new types of malware.

Cybercriminals employed AI-driven tools to bypass security protocols, resulting in the compromise of sensitive data. Such incidents underscore the urgent need for upgraded security frameworks to counter these advanced threats.

The rise of AI-powered malware and ransomware is particularly concerning. These malicious programs can adapt, learn, and evolve, making them harder to detect and neutralize. Traditional antivirus software, which relies on signature-based detection, is often ineffective against such threats. As Katkar pointed out, “AI-driven cyberattacks require an equally sophisticated response.”

Challenges in Addressing AI

One of the critical challenges in combating AI-driven cyberattacks is the speed at which these attacks can be executed. Automated attacks can be carried out in a matter of minutes, causing significant damage before any countermeasures can be deployed. This rapid execution leaves organizations with little time to react, highlighting the need for real-time threat detection and response systems.

Moreover, the use of AI in phishing attacks has added a new layer of complexity. Phishing emails generated by AI can mimic human writing styles, making them indistinguishable from legitimate communications. This sophistication increases the likelihood of unsuspecting individuals falling victim to these scams. Organizations must therefore invest in advanced AI-driven security solutions that can detect and mitigate such threats.
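
As a sketch of what "AI-driven" detection can mean in practice, the toy example below (illustrative only, with a made-up five-message dataset) trains a simple text classifier to separate phishing-style messages from legitimate ones; real products rely on far richer signals and far more data.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = phishing, 0 = legitimate.
messages = [
    "Urgent: your account is locked, verify your password at this link now",
    "Your invoice for March is attached, let me know if anything looks off",
    "You have won a prize! Confirm your bank details to claim it today",
    "Team meeting moved to 3pm tomorrow, same room",
    "Security alert: unusual sign-in, click here immediately to restore access",
]
labels = [1, 0, 1, 0, 1]

# TF-IDF features over unigrams and bigrams feed a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

test = "Please verify your password immediately using the secure link below"
print("phishing probability:", round(clf.predict_proba([test])[0][1], 2))
```

The point is not the toy model's accuracy but the workflow: detection becomes a statistical judgement over message content rather than a lookup against known bad signatures.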

AI-Powered Malware Targets Crypto Wallets with Image Scans

 



A new variant of the Rhadamanthys information stealer has been identified, and it poses a further threat to cryptocurrency users by adding AI-based seed phrase recognition. Not content with the malware's existing capabilities, its operators have added optical character recognition (OCR) that scans images for seed phrases, the key information needed to access cryptocurrency wallets.

According to Recorded Future's Insikt Group, Rhadamanthys can now scan infected devices for images containing seed phrases and extract that information for further exploitation.

In practice, this means that users who store their seed phrases as images rather than as text are now at risk of having their wallets compromised by this malware.


Evolution of Rhadamanthys

First discovered in 2022, Rhadamanthys has proven to be one of the most dangerous information stealers available today, operating under a MaaS model in which cybercriminals rent the malware to other cybercriminals for a subscription fee of around $250 per month. The malware lets attackers steal highly sensitive information, including system details, credentials, browser passwords, and cryptocurrency wallet data.

The malware's author, known as "kingcrete," continues to publish new versions through Telegram and Jabber despite being banned from underground forums like Exploit and XSS for targeting users within Russia and the former Soviet Union.

The latest version, Rhadamanthys 0.7.0, released in June 2024, is a major structural improvement. The malware is now equipped with AI-powered recognition of cryptocurrency wallet seed phrases in images, making it an even more effective tool in the hands of attackers. The client- and server-side frameworks were fully rewritten for speed and stability, and the malware now ships 30 wallet-cracking algorithms along with enhanced capabilities for extracting information from PDFs and saved seed phrases.

Rhadamanthys also has a plugin system that extends its operations with keylogging, cryptocurrency clipping (swapping wallet addresses copied to the clipboard), and reverse proxy setups. Together, these capabilities give attackers a flexible, stealthy way to harvest secrets.


Higher Security Risks for Crypto Users

Rhadamanthys is a serious threat to anyone involved with cryptocurrencies, as the attackers target wallet information stored in browsers, PDFs, and images. The use of AI to extract seed phrases from images shows that attackers are constantly inventing new ways to defeat security measures.

This evolution demands better security practices at both the individual and organisational level, particularly where cryptocurrencies are concerned. Even simple practices, such as never storing sensitive data in an image or other unprotected file, go a long way toward blunting this kind of attack.
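
As a hedged illustration of that advice, the sketch below assumes the Tesseract OCR engine and the pytesseract package are installed and that the standard BIP-39 English wordlist has been saved locally as english.txt; the folder path and threshold are assumptions too. It audits a folder of images and flags any that appear to contain a seed phrase, so the owner can move it to safer storage before malware finds it.

```python
# pip install pytesseract pillow   (also requires the Tesseract OCR binary)
from pathlib import Path

import pytesseract
from PIL import Image

# Assumed local copy of the BIP-39 English wordlist (2048 words, one per line).
BIP39_WORDS = set(Path("english.txt").read_text().split())
IMAGE_DIR = Path("~/Pictures").expanduser()   # folder to audit; adjust as needed
THRESHOLD = 12                                # seed phrases are 12-24 words long

def looks_like_seed_phrase(image_path: Path) -> bool:
    # OCR the image, then count how many extracted words are BIP-39 words.
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    hits = [w for w in text.split() if w in BIP39_WORDS]
    return len(hits) >= THRESHOLD

for img in IMAGE_DIR.rglob("*"):
    if img.suffix.lower() in {".png", ".jpg", ".jpeg"} and looks_like_seed_phrase(img):
        print(f"[!] Possible seed phrase stored as an image: {img}")
```

If this simple audit can find a seed phrase on your disk, so can an infostealer with OCR; the safe response is to move the phrase to offline or hardware-backed storage.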


Broader Implications and Related Threats

Rhadamanthys' continued development is part of a broader trend in stealer malware. Related families such as Lumma and WhiteSnake have also released updates recently that add further capabilities for extracting sensitive information. The Lumma stealer, for instance, bypasses new security features in recently redesigned browsers, while the WhiteSnake stealer has been updated to obtain credit card information stored in web browsers.

These persistent updates to stealer malware reflect how mature cyber threats have become. Other campaigns, such as ClickFix, are deceiving users into running malicious code masquerading as CAPTCHA verification systems.

With cybercrime operations growing more sophisticated and their tools being refined day by day, online security has never faced a greater challenge. Users need to stay alert and informed about emerging threats to protect their personal and financial data from misuse.


Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though it aims to redact or remove personal details from AI training data sets. Users can opt out by navigating to "Settings & Privacy" and then the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you’re informed and protected from unintended data usage by AI technologies.

Ethics and Tech: Data Privacy Concerns Around Generative AI


The tech industry is embracing generative AI, but the conversation around data privacy has become increasingly important. The recent “State of Ethics and Trust in Technology” report by Deloitte highlights the pressing ethical considerations that accompany the rapid adoption of these technologies: according to the report, 30% of organizations have adjusted new AI projects, and 25% have modified existing ones, in response to the EU AI Act.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse.
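
As one small, illustrative example of what anonymization can mean in practice (a minimal sketch, not a compliance solution; the regex patterns are simplistic assumptions), the snippet below strips obvious direct identifiers from text before it is added to a training corpus.

```python
import re

# Very simplistic patterns for direct identifiers; real pipelines use NER models
# and much broader rules (names, addresses, IDs) plus human review.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +91 98765 43210 about the refund."
print(redact(sample))
# -> "Contact Jane at [EMAIL] or [PHONE] about the refund."
```

Rule-based redaction of this kind is only a first pass; it does nothing about indirect identifiers, which is why the stricter anonymization standards in GDPR and CCPA require more than pattern matching.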

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.

Ethical Considerations and Trust

80% of respondents are required to complete mandatory technology ethics training, marking a 7% increase since 2022.  Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.

AI-powered Ray-Ban Meta Smart Glasses Raise Concerns About Data Privacy

AI-powered Ray-Ban Meta Smart Glasses Raise Concerns About Data Privacy

Ray-Ban Meta smart glasses are the new wearable tech on the market. First launched as Ray-Ban Stories in 2021 and relaunched as Ray-Ban Meta in 2023, these AI-powered smart glasses have sparked debate in the community. Though useful, the tech has raised concerns over data security and privacy among users.

Features of the Smart Glasses

The AI-powered glasses come with a range of advanced features that improve the user experience, including open-ear speakers, a touch panel, and a camera. The glasses can play music, capture photos and videos, and offer real-time information via the Meta AI assistant. These features hint at a future in which such technology is woven into our daily lives.

Data Privacy and Security: Concerns

Meta makes most of its money from advertising, which raises concerns about how images captured through the glasses will be used by the company. Given Meta's history of privacy and data security issues, users are skeptical about how their data will be handled if Meta captures images without consent.

Compounding this concern is the introduction of AI into the smart glasses. AI has already caused controversy over inaccurate information, ease of manipulation, and racial bias.

When users capture images or videos with the smart glasses, Meta's cloud processes them with AI. Meta's website states, "All photos processed with AI are stored and used to improve Meta products and will be used to train Meta’s AI with help from trained reviewers."

According to Meta, this processing analyses text, objects, and other contents of the image, and any information collected is used under Meta's Privacy Policy. In simple terms, images sent to the cloud can be used to train Meta's AI, which creates potential for misuse.

What do Users Think?

Evolving tech like smart glasses has had a major impact on how we document our lives, but it has also sparked debates around privacy and user surveillance.

For instance, people in Canada can be photographed in public without their consent, but if the purpose is commercial, restrictions apply to prevent harm or distress.

Meta has released guidelines encouraging users to exercise caution and respect the rights of others while wearing the glasses. The guidelines suggest making a clear announcement before using the camera for live streaming and turning off the device when entering a private place.

Meta's reliance on user behaviour to uphold privacy standards is not enough to address concerns around surveillance, consent, and data misuse. Meta's history of privacy battles and its data-driven business model raise questions about whether the current measures can protect privacy in an evolving digital landscape.

AI In Wrong Hands: The Underground Demand for Malicious LLMs


In recent times, artificial intelligence (AI) has delivered benefits across industries. But, as with any powerful tool, threat actors are trying to use it for malicious ends. Researchers report that the underground market for illicit large language models is thriving, highlighting the need for strong safeguards against AI misuse. 

The malicious large language models (LLMs) traded on these underground markets are known as Mallas. This post dives into the details of this dark industry and discusses the impact of these illicit LLMs on cybersecurity. 

The Rise of Malicious LLMs

LLMs like OpenAI's GPT-4 have shown impressive results in natural language processing, powering applications such as chatbots and content generation. However, the same technology that supports these useful applications can be misused for malicious activities. 

Recently, researchers from Indiana University Bloomington found 212 malicious LLMs on underground marketplaces between April and September of last year. One of the models, "WormGPT," made around $28,000 in just two months, revealing both the extent of AI misuse among threat actors and the rising demand for these harmful tools. 

How Uncensored Models Operate 

Many of the LLMs on these markets are uncensored models built on open-source foundations, while a few are jailbroken commercial models. Threat actors use Mallas to write phishing emails, build malware, and exploit zero-days. 

Tech giants in the AI industry have built safeguards to protect against jailbreaking and to detect malicious use. But threat actors have found ways around the guardrails, tricking AI models from Google, Meta, OpenAI, and Anthropic into providing malicious information. 

Underground Market for LLMs

Experts found two uncensored LLMs: DarkGPT, which costs 78 cents per 50 messages, and EscapeGPT, a subscription model that charges $64.98 a month. Both generate harmful code that antivirus tools fail to detect two-thirds of the time. Another model, "WolfGPT," costs $150 and lets users write phishing emails that can slip past most spam detectors. 

The research findings suggest that all of the harmful AI models studied could produce malware, and 41.5% could create phishing emails. These models were built on OpenAI's GPT-3.5 and GPT-4, Claude Instant, Claude-2-100k, and Pygmalion 13B. 

To fight these threats, experts suggest building datasets of the prompts used to create malware and bypass safety features, releasing models with censorship enabled by default, and restricting access to uncensored models to legitimate research purposes.

83% of Businesses Hit by Ransomware – Are You Next?


 

Ransomware continues to be a critical threat to businesses worldwide, with a staggering 83% of organisations reporting they experienced at least one ransomware attack in the last year. Alarmingly, almost half of those affected (46%) faced four or more attacks, and 14% encountered ten or more. These attacks, which involve malicious software encrypting valuable data until a ransom is paid, are causing serious disruptions. According to recent research by Onapsis, 61% of organisations impacted by ransomware faced downtime of at least 24 hours, highlighting the critical nature of these incidents. The downtime can cripple operations, leading to financial losses and operational challenges.

ERP Systems Becoming a Prime Target

A key finding from the research reveals that 89% of organisations affected by ransomware reported that their Enterprise Resource Planning (ERP) systems were compromised. ERP systems, which manage vital business functions such as accounting, supply chain management, and human resources, have become attractive targets for cybercriminals. These systems are business-critical, and the increasing frequency of attacks on them underscores the need for dedicated security solutions. In fact, 93% of respondents agreed that securing ERP applications should be a top priority, emphasising the urgency of investing in ERP-specific cybersecurity measures.

AI-Enabled Threats Amplify Concerns

There are growing concerns about the role of artificial intelligence (AI) in amplifying cyber threats. Gartner’s 2024 risk report highlighted AI-enhanced attacks as a top concern for businesses. As attackers leverage AI to craft more sophisticated and damaging threats, the risk to systems like ERP is only expected to increase. Mariano Nunez, CEO of Onapsis, pointed out that ransomware groups are increasingly focusing on disrupting ERP systems because of the immense leverage they gain from causing downtime, which can cost organisations millions of dollars per hour.

How Organisations Are Responding to Ransomware

In response to these rising threats, many organisations have been forced to reconsider their cybersecurity strategies. According to the research, 96% of businesses have adjusted their security approaches as a direct result of ransomware attacks. These adjustments have taken various forms: 57% of companies invested in new security solutions, 54% ramped up employee training on cybersecurity, and 53% added more cybersecurity staff internally to strengthen their defences. Additionally, 36% sought external help by hiring threat research teams to stay ahead of potential risks.

Ransom Demands and Communication with Attackers

When it comes to handling ransom demands, the approach varies across organisations. The study revealed that 69% of respondents communicated with the attackers behind the ransomware incidents. However, when it comes to paying the ransom, businesses are divided: 34% pay every time, 21% pay occasionally, and 45% refuse to pay at all. For those that do pay, the process often involves working with third-party experts like ransomware brokers—83% of organisations that paid a ransom sought help from such intermediaries to facilitate negotiations.

The prevalence of ransomware has forced organisations to acknowledge that their traditional security measures may no longer suffice. The combination of frequent attacks, the targeting of critical ERP systems, and the emerging threat of AI-enhanced attacks calls for a more proactive and specialised approach to cybersecurity. Businesses are investing heavily in solutions and expertise to mitigate the risks, but with ransomware attacks continuing to evolve, ongoing vigilance and adaptation will be key to safeguarding digital assets in the years ahead.


Australia’s Proposed Mandatory Guardrails for AI: A Step Towards Responsible Innovation


Australia has proposed a set of 10 mandatory guardrails aimed at ensuring the safe and responsible use of AI, particularly in high-risk settings. This initiative is a significant step towards balancing innovation with ethical considerations and public safety.

The Need for AI Regulation

AI technologies have the potential to revolutionise various sectors, from healthcare and finance to transportation and education. However, with great power comes great responsibility. The misuse or unintended consequences of AI can lead to significant ethical, legal, and social challenges. Issues such as bias in AI algorithms, data privacy concerns, and the potential for job displacement are just a few of the risks associated with unchecked AI development.

Australia’s proposed guardrails are designed to address these concerns by establishing a clear regulatory framework that promotes transparency, accountability, and ethical AI practices. These guardrails are not just about mitigating risks but also about fostering public trust and providing businesses with the regulatory certainty they need to innovate responsibly.

The Ten Mandatory Guardrails

Accountability Processes: Organizations must establish clear accountability mechanisms to ensure that AI systems are used responsibly. This includes defining roles and responsibilities for AI governance and oversight.

Risk Management: Implementing comprehensive risk management strategies is crucial. This involves identifying, assessing, and mitigating potential risks associated with AI applications.

Data Protection: Ensuring the privacy and security of data used in AI systems is paramount. Organizations must adopt robust data protection measures to prevent unauthorized access and misuse.

Human Oversight: AI systems should not operate in isolation. Human oversight is essential to monitor AI decisions and intervene when necessary to prevent harm.

Transparency: Transparency in AI operations is vital for building public trust. Organizations should provide clear and understandable information about how AI systems work and the decisions they make.

Bias Mitigation: Addressing and mitigating bias in AI algorithms is critical to ensure fairness and prevent discrimination. This involves regular audits and updates to AI models to eliminate biases, as illustrated by the sketch after this list.

Ethical Standards: Adhering to ethical standards in AI development and deployment is non-negotiable. Organizations must ensure that their AI practices align with societal values and ethical principles.

Public Engagement: Engaging with the public and stakeholders is essential for understanding societal concerns and expectations regarding AI. This helps in shaping AI policies that are inclusive and reflective of public interests.

Regulatory Compliance: Organizations must comply with existing laws and regulations related to AI. This includes adhering to industry-specific standards and guidelines.

Continuous Monitoring: AI systems should be continuously monitored and evaluated to ensure they operate as intended and do not pose unforeseen risks.
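
To make the Bias Mitigation guardrail's call for regular audits concrete, here is a minimal, purely illustrative Python sketch; the data, group labels, and the 0.10 threshold are assumptions. It checks a model's decisions for demographic parity, one of the simplest fairness measures such an audit might track.

```python
from collections import defaultdict

# Toy audit data: (group, model_decision) pairs, where 1 = positive outcome.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Selection rate per group and the gap between the best- and worst-treated groups.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("selection rates:", rates)       # here: group_a 0.75 vs group_b 0.25
print("demographic parity gap:", gap)  # 0.50; a policy might flag any gap above 0.10
```

A real audit programme would track several metrics over time and across deployments, but even a single recurring check like this turns the guardrail from a principle into a measurable obligation.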

Blockchain Meets AI: The Impact of the Artificial Superintelligence Alliance


The Artificial Superintelligence Alliance (ASA), comprising leading AI and blockchain projects such as SingularityNET, Fetch.ai, and Ocean Protocol, has taken a significant step forward by launching a unified token. This move aims to create a more cohesive and efficient decentralized AI ecosystem, with far-reaching implications for various sectors, including the burgeoning field of gambling.

The Vision Behind the Alliance

The ASA’s primary objective is to foster collaboration and integration among decentralized AI systems. By merging their respective tokens—AGIX (SingularityNET), OCEAN (Ocean Protocol), and FET (Fetch.ai)—into a single token called ASI, the alliance seeks to streamline operations and enhance interoperability. This unified token is designed to facilitate seamless interactions between different AI platforms, thereby accelerating the development and deployment of advanced AI solutions.

Decentralized AI: The Future of Technology

Decentralized AI represents a paradigm shift from traditional, centralized AI models. In a decentralized framework, AI systems are distributed across a network of nodes, ensuring greater transparency, security, and resilience. This approach mitigates the risks associated with central points of failure and enhances the robustness of AI applications.

The ASA’s initiative aligns with the broader trend towards decentralization in the tech industry. By leveraging blockchain technology, the alliance aims to create a trustless environment where AI agents can interact and collaborate without the need for intermediaries. This not only reduces operational costs but also fosters innovation by enabling a more open and inclusive ecosystem.

The Role of the ASI Token

The introduction of the ASI token is a pivotal aspect of the ASA’s strategy. This unified token serves as the backbone of the alliance’s decentralized AI ecosystem, facilitating transactions and interactions between different AI platforms. The ASI token is designed to be highly versatile, supporting a wide range of use cases, from data sharing and AI model training to decentralized finance (DeFi) applications.

One of the most intriguing applications of the ASI token is in the gambling industry. The integration of AI and blockchain technology has the potential to revolutionize online gambling by enhancing transparency, fairness, and security. AI algorithms can be used to analyze vast amounts of data, providing insights that can improve the user experience and optimize betting strategies. Meanwhile, blockchain technology ensures that all transactions are immutable and verifiable, reducing the risk of fraud and manipulation.

What Does It Mean for the Gambling Industry?

The gambling industry stands to benefit significantly from the advancements brought about by the ASA. By leveraging AI and blockchain technology, online gambling platforms can offer a more secure and transparent environment for users. AI-driven analytics can provide personalized recommendations and insights, enhancing the overall user experience. Additionally, the use of blockchain technology ensures that all transactions are recorded on a public ledger, providing an added layer of security and trust.

The ASI token can also facilitate seamless transactions within the gambling ecosystem. Users can utilize the token to place bets, participate in games, and access various services offered by online gambling platforms. The interoperability of the ASI token across different AI platforms further enhances its utility, making it a valuable asset for users and developers alike.

Adopting a Connected Mindset: A Strategic Imperative for National Security

 

In today's rapidly advancing technological landscape, connectivity goes beyond being just a buzzword; it has become a strategic necessity for both businesses and national defense. As security threats grow more sophisticated, an integrated approach that combines technology, strategic planning, and human expertise is essential. Embracing a connected mindset is crucial for national security, and here's how it can be effectively implemented.

What is a Connected Mindset?

A connected mindset involves understanding that security is not an isolated function but a comprehensive effort that spans multiple domains and disciplines. It requires seamless collaboration between government, private industry, and academia to address security challenges. This approach recognizes that modern threats are interconnected and complex, necessitating a comprehensive response.

Over the past few decades, security threats have evolved significantly. While traditional threats like military aggression still exist, newer challenges such as cyber threats, economic espionage, and misinformation have emerged. Cybersecurity has become a major concern as both state and non-state actors develop new methods to exploit vulnerabilities in digital infrastructure. Attacks on critical systems can disrupt essential services, leading to widespread chaos and posing risks to public safety. The recent rise in ransomware attacks on healthcare, financial sectors, and government entities underscores the need for a comprehensive approach to these challenges.

The Central Role of Technology

At the core of the connected mindset is technology. Advances in artificial intelligence (AI), machine learning, and big data analytics provide valuable tools for detecting and countering threats. However, these technologies need to be part of a broader strategy that includes human insight and collaborative efforts. AI can process large datasets to identify patterns and anomalies indicating potential threats, while machine learning algorithms can predict vulnerabilities and suggest proactive measures. Big data analytics enable real-time insights into emerging risks, facilitating faster and more effective responses.

Despite the critical role of technology, human expertise remains indispensable. Cybersecurity professionals, intelligence analysts, and policymakers must collaborate to interpret data, evaluate risks, and devise strategies. Public-private partnerships are vital for fostering this cooperation, as the private sector often possesses cutting-edge technology and expertise, while the government has access to critical intelligence and regulatory frameworks. Together, they can build a more resilient security framework.

To implement a connected mindset effectively, consider the following steps:
  • Promote Continuous Education and Training: Regular training programs are essential to keep professionals up-to-date with the latest threats and technologies. Cybersecurity certifications, workshops, and simulations can help enhance skills and preparedness.
  • Encourage Information Sharing: Establishing robust platforms and protocols for information sharing between public and private sectors can enhance threat detection and response times. Shared information must be timely, accurate, and actionable.
  • Invest in Advanced Technology: Governments and organizations should invest in AI, machine learning, and advanced cybersecurity tools to stay ahead of evolving threats, ensuring real-time threat analysis capabilities.
  • Foster Cross-Sector Collaboration: Cultivating a culture of collaboration is crucial. Regular meetings, joint exercises, and shared initiatives can build stronger partnerships and trust.
  • Develop Supportive Policies: Policies and regulations should encourage a connected mindset by promoting collaboration and innovation while protecting data privacy and supporting effective threat detection.
A connected mindset is not just a strategic advantage—it is essential for national security. As threats evolve, adopting a holistic approach that integrates technology, human insight, and cross-sector collaboration is crucial. By fostering this mindset, we can create a more resilient and secure future capable of addressing the complexities of modern security challenges. In a world where physical and digital threats increasingly overlap, a connected mindset paves the way for enhanced national security and a safer global community.

Cyberattacks Skyrocket in India, Are We Ready for the Digital Danger Ahead?


 

India is experiencing a rise in cyberattacks, particularly targeting its key sectors such as finance, government, manufacturing, and healthcare. This increase has prompted the Reserve Bank of India (RBI) to urge banks and financial institutions to strengthen their cybersecurity measures.

As India continues to digitise its infrastructure, it has become more vulnerable to cyberattacks. Earlier this year, hackers stole and leaked 7.5 million records from boAt, a leading Indian company that makes wireless audio and wearable devices. This is just one example of how cybercriminals are targeting Indian businesses and institutions.

The RBI has expressed concern about the growing risks in the financial sector due to rapid digitization. In 2023 alone, India’s national cybersecurity team, CERT-In, handled about 16 million cyber incidents, a massive increase from just 53,000 incidents in 2017. Most banks and non-banking financial companies (NBFCs) now see cybersecurity as a major challenge as they move towards digital technology. The RBI’s report highlights that the speed at which information and rumours can spread digitally could threaten financial stability. Cybercriminals are increasingly focusing on financial institutions rather than individual customers.

The public sector, including government agencies, has also seen a dramatic rise in cyberattacks. Many organisations report that these attacks have increased by at least 50%. Earlier this year, a hacking group targeted government agencies and energy companies using a type of malware known as HackBrowserData. Additionally, countries like Pakistan and China have been intensifying their cyberattacks on Indian organisations, with operations like the recent Cosmic Leopard campaign.

According to a report by Cloudflare, 83% of organisations in India experienced at least one cybersecurity incident in the last year, placing India among the top countries in Asia facing such threats. Globally, India is the fifth most breached nation, underscoring the urgent need for stronger cybersecurity measures.

Indian companies are most worried about threats related to cloud computing, connected devices, and software vulnerabilities. The adoption of new technologies like artificial intelligence (AI) and cloud computing, combined with the shift to remote work, has accelerated digital transformation, but it also increases the need for stronger security measures.

Manu Dwivedi, a cybersecurity expert from PwC India, points out that AI-powered phishing and sophisticated social engineering techniques have made ransomware a top concern for organisations. As more companies use cloud services and open-source software, the risk of cyberattacks grows. Dwivedi also stresses the importance of protecting against insider threats, which requires a mix of strategy, culture, training, and governance.

AI is playing a growing role in both defending against and enabling cyberattacks. While AI has the potential to improve security, it also introduces new risks. Cybercriminals are beginning to use AI to create more advanced malware that can avoid detection. Dwivedi warns that as AI continues to evolve, it may become harder to track how these tools are being misused by attackers.

Partha Gopalakrishnan, founder of PG Advisors, emphasises the need for India to update its cybersecurity laws. The current law, the Information Technology Act of 2000, is outdated and does not fully address today’s digital threats. Gopalakrishnan also stresses the growing demand for AI skills in India, suggesting that businesses should invest in training in both AI and cybersecurity to close the skills gap. He warns that as AI becomes more accessible, it could empower a wider range of people to carry out sophisticated cyberattacks.

India’s digital growth presents great opportunities, but it also brings serious challenges. It is crucial for Indian businesses and government agencies to develop comprehensive cybersecurity strategies and stay vigilant.


AI Revolutionizing Accounting: Experts Urge Accountants to Embrace Technology for Future Success

Artificial Intelligence (AI) is capable of handling repetitive tasks, but accountants who embrace and integrate technology can concentrate on more valuable activities beyond basic number-crunching, according to Md Sajid Khan, Director - India at the Association of Chartered Certified Accountants (ACCA).

Khan emphasized in an interview with The Economic Times, "AI will replace those professional accountants who fail to understand and incorporate these technologies."

However, he noted that AI cannot replace strategic thinking, decision-making, or emotional intelligence. "Accountants are not just about crunching numbers; with the right technology, they can avoid routine tasks and focus on more impactful work," Khan added.

"There is a significant shortage of accounting professionals in India, leading to a surge in demand. The market, valued at $13.6 billion in 2020, is expected to grow to USD 20 billion by 2025," Khan stated.

Despite the growing demand, unemployment remains a pressing issue. Khan pointed out that the challenge lies in finding individuals with the right skills. "While employment is a concern, employability is equally critical," he explained.

Khan also highlighted the need for employers to access talent that meets global standards, even when outsourcing. He remarked, "Application and communication skills are often lacking in today's graduates, largely due to the rapidly changing landscape."

He further emphasized that effective collaboration and diversity are now key factors in the industry. Citing a McKinsey & Co report, Khan mentioned that CFOs are currently investing in technology, with a focus on building team capacity.

Khan noted that whether it's a "Big 4, Big 5, or Big 6," the critical factor is the extent to which local talent and values are integrated. He said, "Whether it's an Indian firm or a global one with a presence in India, what truly matters is how effectively Indian talent is utilized."

Khan also discussed the trend of businesses splitting into accounting and consultancy functions, attributing it to both the need for specialized expertise and regulatory demands. "Clear independence between different functions is essential to meet these standards," he told ETCFO.

He explained that while businesses used to rely on generalist consultants, there is now a growing preference for specialists who offer targeted skills and expert advice in specific fields. "This shift towards specialization is being driven by clients seeking more in-depth knowledge and expertise," Khan concluded.

AI-Enhanced Crypto Scams: A New Challenge for ASIC


The Australian Securities and Investments Commission (ASIC) has been at the forefront of combating crypto scams, working tirelessly to protect consumers from fraudulent schemes. Despite a reported decline in the number of scams since April, ASIC continues to emphasize the persistent threat posed by crypto scams, especially with the advent of AI-enhanced fraud techniques.

The Rise of Crypto Scams

Cryptocurrencies, with their promise of high returns and decentralized nature, have become a lucrative target for scammers. These scams range from fake initial coin offerings (ICOs) and Ponzi schemes to phishing attacks and fraudulent exchanges. The anonymity and lack of regulation in the crypto space make it an attractive playground for cybercriminals.

ASIC has been vigilant in identifying and shutting down these scams. Over the past year, the regulator has taken down more than 600 crypto-related scams, reflecting the scale of the problem. However, the battle is far from over.

Monthly Decline in Scams: A Positive Trend

Since April, ASIC has reported a monthly decline in the number of crypto scams. This trend is a positive indicator of the effectiveness of the regulator’s efforts and increased public awareness. Educational campaigns and stricter regulations have played a significant role in this decline. Investors are becoming more cautious and better informed about the risks associated with crypto investments.

The Persistent Threat of AI-Enhanced Scams

Despite the decline, ASIC warns that the threat of crypto scams remains significant. One of the emerging concerns is the use of artificial intelligence (AI) by scammers. AI-enhanced scams are more sophisticated and harder to detect. These scams can create realistic fake identities, automate phishing attacks, and even manipulate market trends to deceive investors.

AI tools can generate convincing fake websites, social media profiles, and communication that can easily trick even the most cautious investors. The use of AI in scams represents a new frontier in cybercrime, requiring regulators and consumers to stay one step ahead.

ASIC’s Ongoing Efforts

ASIC continues to adapt its strategies to combat the evolving nature of crypto scams. The regulator collaborates with international bodies, law enforcement agencies, and tech companies to share information and develop new tools for detecting and preventing scams. Public awareness campaigns remain a cornerstone of ASIC’s strategy, educating investors on how to identify and avoid scams.

Protecting Yourself from Crypto Scams

  • Before investing in any cryptocurrency or ICO, thoroughly research the project, its team, and its track record. Look for reviews and feedback from other investors.
  • Check if the platform or exchange is registered with relevant regulatory bodies. Legitimate companies will have transparent operations and verifiable credentials.
  • If an investment opportunity promises unusually high returns with little to no risk, it’s likely a scam. Always be skeptical of offers that seem too good to be true.
  • Only use reputable and secure platforms for trading and storing your cryptocurrencies. Enable two-factor authentication and other security measures.
  • Keep up-to-date with the latest news and developments in the crypto space. Awareness of common scam tactics can help you avoid falling victim.

X Confronts EU Legal Action Over Alleged AI Privacy Missteps

X, Elon Musk's social media company, has reportedly been accused of unlawfully feeding its users' personal information into its artificial intelligence systems without their consent. The complaint was filed by Noyb, a privacy campaign group based in Vienna.

In early September, Ireland's Data Protection Commission (DPC) filed a lawsuit against X over the data collection practices it uses to train its artificial intelligence systems. A series of privacy complaints had already been filed against X, the company formerly known as Twitter, after it was revealed that the platform was using data obtained from European users to train its Grok AI chatbot without their consent. 

In recent weeks, a social media user discovered that X had quietly begun processing the posts of European users for AI training purposes late last month. TechCrunch reported that the DPC, which is responsible for ensuring that X complies with the General Data Protection Regulation (GDPR), expressed "surprise" at the revelation. X has since announced that all its users can choose whether Grok, the platform's AI chatbot, can access their public posts. 

Users who wish to opt out of this processing must uncheck a box in their privacy settings. Despite this, Judge Leonie Reynolds observed that it appeared clear that X had begun processing its EU users' data to train its AI systems on May 7, yet only offered the option to opt out from July 16. She added that not all users had access to the feature when it was first introduced. 

Noyb, a persistent privacy activist group and long-standing thorn in Big Tech's side, has filed several complaints against X on behalf of consumers. Max Schrems, the head of Noyb, is a privacy activist who successfully challenged Meta's transfer of EU data to the US as a violation of the EU's stringent GDPR rules, in a lawsuit he filed against Meta in 2017. As a result of that case, Meta was fined €1.2 billion and faced logistical challenges; in June, following complaints from Noyb, Meta was also forced to pause the use of EU users' data to train its AI systems. 

There is another issue Noyb wants to address: it argues that X did not obtain the consent of European Union users before using their data to train Grok. Noyb's spokesperson has reportedly told The Daily Upside that the company could face a fine of up to 4% of its annual revenue as a result of these complaints. Such penalties would sting all the more because X has far less money to play with than Meta does:  

X is no longer a publicly traded company, which makes it difficult to determine the state of its cash reserves. What is known is that when Musk bought the company in 2022, it took on roughly $25 billion in debt, leaving it highly leveraged. In the years since the deal, the banks that helped finance the transaction have had an increasingly difficult time unloading their share of the debt, and Fidelity has recently marked down the value of its stake, offering a hint at how the firm might be valued. 

As of last March, Fidelity's stake had fallen 67% in value from the time of the acquisition. Even before Musk bought Twitter, the company had struggled for years to remain consistently profitable, a small fish in a big tech pond. 

A key goal of Noyb is to secure a full-scale investigation into how X was able to train its generative AI model, Grok, without ever consulting its users. Companies that interact directly with end users only need to show them a yes/no prompt before using their data, Schrems told The Information; they do this regularly for plenty of other purposes, so it would certainly be possible for AI training as well. 

This legal action comes only days before Grok 2 is set to launch in beta. In recent years, major tech companies have faced ethical challenges over training AI systems on large amounts of user data. In June 2024, it was widely reported that Meta faced privacy complaints in 11 European countries over its updated privacy policy, which signalled the company's intent to use the data generated by each account to train its machine learning models. 

The GDPR is intended to protect European citizens against unexpected uses of their data, particularly those that could affect their right to privacy. Noyb contends that X's reliance on "legitimate interest" as the legal basis for this data collection and use may not be valid, citing a ruling by Europe's top court last summer which held that user consent is mandatory in comparable cases involving the use of data for targeted advertising. 

The complaints also raise concerns that providers of generative AI systems frequently claim they are unable to comply with other key GDPR requirements, such as the right to be forgotten or the right to access the personal data that has been collected. OpenAI's ChatGPT has drawn wide criticism over many of the same GDPR-related concerns.