Search This Blog

Powered by Blogger.

Blog Archive

Labels

Showing posts with label Artificial Intelligence. Show all posts

Quantum Computing Meets AI: A Lethal Combination

 

Quantum computers are getting closer to Q-day — the day when they will be able to crack existing encryption techniques — as we continue to assign more infrastructure functions to artificial intelligence (AI). This could jeopardise autonomous control systems that rely on AI and ML for decision-making, as well as the security of digital communications. 

As AI and quantum converge to reveal remarkable novel technologies, they will also combine to develop new attack vectors and quantum cryptanalysis.

How far off is this threat?

For major organisations and governments, the transition to post-quantum cryptography (PQC) will take at least ten years, if not much more. Since the last encryption standard upgrade, the size of networks and data has increased, enabling large language models (LLMs) and related specialised technologies. 

While generic versions are intriguing and even enjoyable, sophisticated AI will be taught on expertly picked data to do specialised tasks. This will quickly absorb all of the previous research and information created, providing profound insights and innovations at an increasing rate. This will complement, not replace, human brilliance, but there will be a disruptive phase for cybersecurity.

If a cryptographically relevant quantum computer becomes available before PQC is fully deployed, the repercussions are unknown in the AI era. Regular hacking, data loss, and even disinformation on social media will bring back memories of the good old days before AI driven by evil actors became the main supplier of cyber carcinogens.

When AI models are hijacked, the combined consequence of feeding live AI-controlled systems personalised data with malicious intent will become a global concern. The debate in Silicon Valley and political circles is already raging over whether AI should be allowed to carry out catastrophic military operations. Regardless of existing concerns, this is undoubtedly the future. 

However, most networks and economic activity require explicit and urgent defensive actions. To take on AI and quantum, critical infrastructure design and networks must advance swiftly and with significantly increased security. With so much at stake and new combined AI-quantum attacks unknown, one-size-fits-all upgrades to libraries such as TLS will not suffice. 

Internet 1.0 was built on old 1970s assumptions and limitations that predated modern cloud technology and its amazing redundancy. The next version must be exponentially better, anticipating the unknown while assuming that our current security estimations are incorrect. The AI version of Stuxnet should not surprise cybersecurity experts because the previous iteration had warning indications years ago.

Malicious Python Packages Target Developers Using AI Tools





The rise of generative AI (GenAI) tools like OpenAI’s ChatGPT and Anthropic’s Claude has created opportunities for attackers to exploit unsuspecting developers. Recently, two Python packages falsely claiming to provide free API access to these chatbot platforms were found delivering a malware known as "JarkaStealer" to their victims.


Exploiting Developers’ Interest in AI

Free and free-ish generative AI platforms are gaining popularity, but the benefits of most of their advanced features cost money. This led certain developers to look for free alternatives, many of whom didn't really check the source to be sure. Cybercrime follows trends and the trend is that malicious code is being inserted into open-source software packages that at least initially may appear legitimate.

As George Apostopoulos, a founding engineer at Endor Labs, describes, attackers target less cautious developers, lured by free access to popular AI tools. "Many people don't know better and fall for these offers," he says.


The Harmful Python Packages

Two evil Python packages, "gptplus" and "claudeai-eng," were uploaded to the Python Package Index, PyPI, one of the official repositories of open-source Python projects. The GPT-4 Turbo model by OpenAI and Claude chatbot by Anthropic were promised by API integrations from the user "Xeroline.".

While the packages seemed to work by connecting users to a demo version of ChatGPT, their true functionality was much nastier. The code also contained an ability to drop a Java archive (JAR) file which delivered the JarkaStealer malware to unsuspecting victims' systems.


What Is JarkaStealer?

The JarkaStealer is an infostealer malware that can extract sensitive information from infected systems. It has been sold on the Dark Web for as little as $20, but its more elaborate features can be bought for a few dollars more, which is designed to steal browser data and session tokens along with credentials for apps like Telegram, Discord, and Steam. It can also take screenshots of the victim's system, often revealing sensitive information.

Though the malware's effectiveness is highly uncertain, it is cheap enough and readily available to many attackers as an attractive tool. Its source code is even freely accessible on platforms like GitHub for an even wider reach.


Lessons for Developers

This incident points to risks in downloading unverified packages of open source, more so when handling emerging technologies such as AI. Development firms should screen all software sources to avoid shortcuts that seek free premium tools. Taking precautionary measures can save individuals and organizations from becoming victims of such attacks.

With regard to caution and best practices, developers are protected from malicious actors taking advantage of the GenAI boom.

How Agentic AI Will Change the Way You Work



Artificial intelligence is entering a groundbreaking phase that could drastically change the way we work. For years, AI prediction and content creation have been utilised, but the spotlight has shifted toward the most advanced: agentic AI. Such intelligent systems are not merely human tools but can act, decide, and bring order to complex tasks on their own. The third wave of AI could take the workplaces by a storm; hence, understanding what's coming into existence is important.


A Quick History of AI Evolution

To grasp the significance of agentic AI, let’s revisit AI’s journey. The first wave, predictive AI, helped businesses forecast trends and make data-based decisions. Then came generative AI, which allowed machines to create content and have human-like conversations. Now, we’re in the third wave: agentic AI. Unlike its predecessors, this AI can perform tasks on its own, interact with other AI systems, and even collaborate without constant human supervision.


What makes agentic AI special

Imagine agentic AI as an upgrade to the norm. The traditional AI systems follow prompts-they are there to respond to questions or generate text. Agentic AI, however, takes initiative. Agents are capable of handling a whole task, say solving problems for customers or organising schedules, but within set rules. They can even collaborate with other AI agents to deliver the result much more efficiently. For instance, in customer service, an AI that is agentic can answer questions, process returns, and help users without some human stepping in.


How Will Workplaces Change?

Agentic AI introduces a new way of working. Imagine an office where AI agents manage distinct tasks, like analysing data or communicating with clients; humans will supervise. Such a change is already generating new jobs, like the role of the AI trainer and coordinator, coaching those systems to improve their performance. It can either be a fully automatic job or a transformed one that will bring humans and AI together to deliver something.


Real-Life Applications

Agentic AI is already doing so much for many areas. It can, for example, help compile a patient summary in healthcare or solve claims in finance. Imagine an assistant AI negotiating with a company's AI for the best car rental deal. It could participate in meetings alongside colleagues, suggesting insights and ideas based on what it knows. The possibilities are endless, and humans could redefine efficiency in combination with their AI counterparts.


Challenges and Responsibilities

With great power comes great responsibility. If an AI agent comes to the wrong decision, results might be dire. Therefore, with substantial power, companies set substantial bounds on what these systems can do and cannot do. Critical decisions will be approved by a human to ensure safety and trust are achieved. Furthermore, transparency will be ensured— one must know if they are interacting with an AI rather than a human.


Adapting the Future

With the rise of agentic AI, it's not just a question of new technology, but the way in which work will change. Professionals will need to acquire new competencies, such as how to manage and cooperate with agents, while organisations need to re-design workflows to include these intelligent systems. This shift promises to benefit early adopters more than laggards.

Agentic AI represents more than just a technological breakthrough; rather it's an opportunity to make workplaces smarter, more innovative, and highly efficient. Are we ready for this future? Only time will tell.

 

AI-Powered Dark Patterns: What's Up Next?

 

The rapid growth of generative AI (artificial intelligence) highlights how urgent it is to address privacy and ethical issues related to the use of these technologies across a range of sectors. Over the past year, data protection conferences have repeatedly emphasised AI's expanding role in the privacy and data protection domains as well as the pressing necessity for Data Protection Officers (DPOs) to handle the issues it presents for their businesses. 

These issues include the creation of deepfakes and synthetic content that could sway public opinion or threaten specific individuals as well as the public at large, the leakage of sensitive personal information in model outputs, the inherent bias in generative algorithms, and the overestimation of AI capabilities that results in inaccurate output (also known as AI hallucinations), which often refer to real individuals. 

So, what are the AI-driven dark patterns? These are deceptive UI strategies that use AI to influence application users into making decisions that favour the company rather than the user. These designs employ user psychology and behaviour in more sophisticated ways than typical dark patterns. 

Imagine getting a video call from your bank manager (created by a deepfake) informing you of some suspicious activity on your account. The AI customises the call for your individual bank branch, your bank manager's vocal patterns, and even their look, making it quite convincing. This deepfake call could tempt you to disclose sensitive data or click on suspicious links. 

Another alarming example of AI-driven dark patterns may be hostile actors creating highly targeted social media profiles that exploit your child's flaws. The AI can analyse your child's online conduct and create fake friendships or relationships that could trick the child into disclosing personal information or even their location to these people. Thus, the question arises: what can we do now to minimise these ills? How do we prevent future scenarios in which cyber criminals and even ill-intentioned organisations contact us and our loved ones via technologies on which we have come to rely for daily activities? 

Unfortunately, the solution is not simple. Mitigating AI-driven dark patterns necessitates a multifaceted approach that includes consumers, developers, and regulatory organisations. The globally recognised privacy principles of data quality, data collection limitation, purpose specification, use limitation, security, transparency, accountability, and individual participation are universally applicable to all systems that handle personal data, including training algorithms and generative AI. We must now test these principles to discover if they can actually protect us from this new, and often thrilling, technology.

Prevention tips 

First and foremost, we must educate people on AI-driven dark trends and fraudulent techniques. This can be accomplished by public awareness campaigns, educational tools at all levels of the education system, and the incorporation of warnings into user interfaces, particularly on social media platforms popular with young people. Cigarette firms must disclose the risks of their products, as should AI-powered services to which our children are exposed.

We should also look for ways to encourage users, particularly young and vulnerable users, to be critical consumers of information they come across online, especially when dealing with AI systems. In the twenty-first century, our educational systems should train members of society to question (far more) the source and intent of AI-generated content. 

Give the younger generation, and even the older ones, the tools they need to control their data and customise their interactions with AI systems. This might include options that allow users or parents of young users to opt out of AI-powered suggestions or data collection. Governments and regulatory agencies play an important role to establish clear rules and regulations for AI development and use. The European Union plans to propose its first such law this summer. The long-awaited EU AI Act puts many of these data protection and ethical concerns into action. This is a positive start.

Tech Expert Warns AI Could Surpass Humans in Cyber Attacks by 2030

 

Jacob Steinhardt, an assistant professor at the University of California, Berkeley, shared insights at a recent event in Toronto, Canada, hosted by the Global Risk Institute. During his keynote, Steinhardt, an expert in electrical engineering, computer science, and statistics, discussed the advancing capabilities of artificial intelligence in cybersecurity.

Steinhardt predicts that by the end of this decade, AI could surpass human abilities in executing cyber attacks. He believes that AI systems will eventually develop "superhuman" skills in coding and finding vulnerabilities within software.

Exploits, or weak spots in software and hardware, are commonly exploited by cybercriminals to gain unauthorized access to systems. Once these access points are found, attackers can execute ransomware attacks, locking out users or encrypting sensitive data in exchange for a ransom. 

Traditionally, identifying these exploits requires painstakingly reviewing lines of code — a task that most humans find tedious. Steinhardt points out that AI, unlike humans, does not tire, making it particularly suited to the repetitive process of exploit discovery, which it could perform with remarkable accuracy.

Steinhardt’s talk comes amid rising cybercrime concerns. A 2023 report by EY Canada indicated that 80% of surveyed Canadian businesses experienced at least 25 cybersecurity incidents within the year. While AI holds promise as a defensive tool, Steinhardt warns that it could also be exploited for malicious purposes.

One example he cited is the misuse of AI in creating "deep fakes"— digitally manipulated images, videos, or audio used for deception. These fakes have been used to scam individuals and businesses by impersonating trusted figures, leading to costly fraud incidents, including a recent case involving a British company tricked into sending $25 million to fraudsters.

In closing, Steinhardt reflected on AI’s potential risks and rewards, calling himself a "worried optimist." He estimated a 10% chance that AI could lead to human extinction, balanced by a 50% chance it could drive substantial economic growth and "radical prosperity."

The talk wrapped up the Hinton Lectures in Toronto, a two-evening series inaugurated by AI pioneer Geoffrey Hinton, who introduced Steinhardt as the ideal speaker for the event.

AI-Driven Deepfake Scams Cost Americans Billions in Losses

 


As artificial intelligence (AI) technology advances, cybercriminals are now capable of creating sophisticated "deepfake" scams, which result in significant financial losses for the companies that are targeted. On a video call with her chief financial officer, in which other members of the firm also took part, an employee of a Hong Kong-based firm was instructed to send US$25 million to fraudsters in January 2024, after offering instruction to her chief financial officer in the same video call. 

Fraudsters, however, used deepfakes to fool her into sending the money by creating one that replicated these likenesses of the people she was supposed to be on a call with: they created an imitation that mimicked her likeness on the phone. The number of scammers continues to rise, and artificial intelligence, as well as other sophisticated tools, are raising the risk that victims potentially being scammed. It is estimated that over $12.5 billion in American citizens were swindled online in the past year, which is up from $10.3 billion in 2022, according to the FBI's Internet Crime Complaint Center. 

A much higher figure may be possible, but the actual price could be much higher. During the investigation of a particular case, the FBI found out that only 20% of the victims had reported these crimes to the authorities. It appears that scammers are continuing to erect hurdles with new ruses, techniques, and policies, and artificial intelligence is playing an increasingly prominent role. 

Based on a recent FBI analysis, 39% of victims last year were swindled using manipulated or doctored videos that were used to manipulate what a victim did or said, thereby misrepresenting what they said or did. Currently, video scams have been used to perpetrate investment frauds, as well as romance swindles, as well as other types of scams. The number of scammers continues to rise, and artificial intelligence, as well as other sophisticated tools, are raising the risk that victims potentially being scammed.

It is estimated that Americans were scammed out of $12.5 billion online last year, which is an increase from $10.3 billion in 2022, according to the FBI's Internet Crime Complaint Center, but the totals could be much higher due to increased awareness. An FBI official recently broke an interesting case in which only 20% of the victims had reported these crimes to the authorities. Today, scammers perpetrate many different scams, and AI is becoming more prominent in that threat. 

According to the FBI's assessment last year, 39% of victims were swindled based on fake or doctored videos altered with artificial intelligence technology to manipulate or misrepresent what someone did or said during the initial interaction. In investment scams and other ways, the videos are being used to deceive people into believing they are in love, for example. It appears that in several recent instances, fraudsters have modified publicly available videos and other footage using deepfake technology in an attempt to cheat people out of their money, a case that has been widely documented in the news.

In his response, Romero indicated that artificial intelligence could allow scammers to process much larger quantities of data and, as a result, try more combinations of passwords in their attempts to hack into victims' accounts. For this reason, it is extremely important that users implement strong passwords, change them frequently, and use two-factor authentication when they are using a computer. The Internet Crime Complaint Center of the FBI received more than 880,000 complaint forms last year from Americans who were victims of online fraud. 

In fact, according to Social Catfish, 96% of all money lost in scams is never recouped, mainly because most scammers live overseas and cannot return the money. The increasing prevalence of cryptocurrency in criminal activities has made it a favoured medium for illicit transactions, particularly investment-related crimes. Fraudsters often exploit the anonymity and decentralized nature of digital currencies to orchestrate schemes that demand payment in cryptocurrency. A notable tactic includes enticing victims into fraudulent recovery programs, where perpetrators claim to assist in recouping funds lost in prior cryptocurrency scams, only to exploit the victims further. 

The surge in such deceptive practices complicates efforts to differentiate between legitimate and fraudulent communications. Falling victim to sophisticated scams, such as those involving deepfake technology, can result in severe consequences. The repercussions may extend beyond significant financial losses to include legal penalties for divulging sensitive information and potential harm to a company’s reputation and brand integrity. 

In light of these escalating threats, organizations are being advised to proactively assess their vulnerabilities and implement comprehensive risk management strategies. This entails adopting a multi-faceted approach to enhance security measures, which includes educating employees on the importance of maintaining a sceptical attitude toward unsolicited requests for financial or sensitive information. Verifying the legitimacy of such requests can be achieved by employing code words to authenticate transactions. 

Furthermore, companies should consider implementing advanced security protocols, and tools such as multi-factor authentication, and encryption technologies. Establishing and enforcing stringent policies and procedures governing financial transactions are also essential steps in mitigating exposure to fraud. Such measures can help fortify defenses against the evolving landscape of cybercrime, ensuring that organizations remain resilient in the face of emerging threats.

AI Data Breach Reveals Trust Issues with Personal Information

 


Insight AI technology is being explored by businesses as a tool for balancing the benefits it brings with the risks that are associated. Amidst this backdrop, NetSkope Threat Labs has recently released the latest edition of its Cloud and Threat Report, which focuses on using AI apps within the enterprise to prevent fraud and other unauthorized activity. There is a lot of risk associated with the use of AI applications in the enterprise, including an increased attack surface, which was already discussed in a serious report, and the accidental sharing of sensitive information that occurs when using AI apps. 

As users and particularly as individuals working in the cybersecurity as well as privacy sectors, it is our responsibility to protect data in an age when artificial intelligence has become a popular tool. An artificial intelligence system, or AI system, is a machine-controlled program that is programmed to think and learn the same way humans do through the use of simulation. 

AI systems come in various forms, each designed to perform specialized tasks using advanced computational techniques: - Generative Models: These AI systems learn patterns from large datasets to generate new content, whether it be text, images, or audio. A notable example is ChatGPT, which creates human-like responses and creative content. - Machine Learning Algorithms: Focused on learning from data, these models continuously improve their performance and automate tasks. Amazon Alexa, for instance, leverages machine learning to enhance voice recognition and provide smarter responses. - Robotic Vision: In robotics, AI is used to interpret and interact with the physical environment. Self-driving cars like those from Tesla use advanced robotics to perceive their surroundings and make real-time driving decisions. - Personalization Engines: These systems curate content based on user behavior and preferences, tailoring experiences to individual needs.  Instagram Ads, for example, analyze user activity to display highly relevant ads and recommendations. These examples highlight the diverse applications of AI across different industries and everyday technologies. 

In many cases, artificial intelligence (AI) chatbots are good at what they do, but they have problems detecting the difference between legitimate commands from their users and manipulation requests from outside sources. 

In a cybersecurity report published on Wednesday, researchers assert that artificial intelligence has a definite Achilles' heel that should be exploited by attackers shortly. There have been a great number of public chatbots powered by large language models, or LLMs for short, that have been emerging just over the last year, and this field of LLM cybersecurity is at its infancy stage. However, researchers have already found that these models may be susceptible to a specific form of attack referred to as "prompt injection," which occurs when a bad actor sneakily provides commands to the model without the model's knowledge. 

In some instances, attackers hide prompts inside webpages that the chatbot reads later, so that the chatbot might download malware, assist with financial fraud, or repeat dangerous misinformation that is passed on to people by the chatbot. 

What is Artificial Intelligence?


AI (artificial intelligence) is one of the most important areas of study in technology today. AI focuses on developing systems that mimic human intelligence, with the ability to learn, reason, and solve problems autonomously. The two basic types of AI models that can be used for analyzing data are predictive AI models and generative AI models. 

 A predictive artificial intelligence function is a computational capability that uses existing data to make predictions about future outcomes or behaviours based on historical patterns and data. A creative AI system, however, has the capability of creating new data or content that is similar to the input it has been trained on, even if there was no content set in the dataset before it was trained. 

 A philosophical discord exists between Leibnitz and the founding fathers of artificial intelligence in the early 1800s, although the conception of the term "artificial intelligence" as we use it today has existed since the early 1940s, and became famous with the development of the "Turing test" in 1950. It has been quite some time since we have experienced a rapid period of progress in the field of artificial intelligence, a trend that has been influenced by three major factors: better algorithms, increased networked computing power, and a greater capacity to capture and store data in unprecedented quantities. 

Aside from technological advancements, the very way we think about intelligent machines has changed dramatically since the 1960s. This has resulted in a great number of developments that are taking place today. Even though most people are not aware of it, AI technologies are already being utilized in very practical ways in our everyday lives, even though they may not be aware of it. As a characteristic of AI, after it becomes effective, it stops being referred to as AI and becomes mainstream computing as a result.2 For instance, there are several mainstream AI technologies on which you can take advantage, including having the option of being greeted by an automated voice when you call, or being suggested a movie based on your preferences. The fact that these systems have become a part of our lives, and we are surrounded by them every day, is often overlooked, even though they are supported by a variety of AI techniques, including speech recognition, natural language processing, and predictive analytics that make their work possible. 

What's in the news? 


There is a great deal of hype surrounding artificial intelligence and there is a lot of interest in the media regarding it, so it is not surprising to find that there are an increasing number of users accessing AI apps in the enterprise. The rapid adoption of artificial intelligence (AI) applications in the enterprise landscape is significantly raising concerns about the risk of unintentional exposure to internal information. A recent study reveals that, between May and June 2023, there was a weekly increase of 2.4% in the number of enterprise users accessing at least one AI application daily, culminating in an overall growth of 22.5% over the observed period. Among enterprise AI tools, ChatGPT has emerged as the most widely used, with daily active users surpassing those of any other AI application by a factor of more than eight. 

In organizations with a workforce exceeding 1,000 employees, an average of three different AI applications are utilized daily, while organizations with more than 10,000 employees engage with an average of five different AI tools each day. Notably, one out of every 100 enterprise users interacts with an AI application daily. The rapid increase in the adoption of AI technologies is driven largely by the potential benefits these tools can bring to organizations. Enterprises are recognizing the value of AI applications in enhancing productivity and providing a competitive edge. Tools like ChatGPT are being deployed for a variety of tasks, including reviewing source code to identify security vulnerabilities, assisting in the editing and refinement of written content, and facilitating more informed, data-driven decision-making processes. 

However, the unprecedented speed at which generative AI applications are being developed and deployed presents a significant challenge. The rapid rollout of these technologies has the potential to lead to the emergence of inadequately developed AI applications that may appear to be fully functional products or services. In reality, some of these applications may be created within a very short time frame, possibly within a single afternoon, often without sufficient oversight or attention to critical factors such as user privacy and data security. 

The hurried development of AI tools raises the risk that confidential or sensitive information entered into these applications could be exposed to vulnerabilities or security breaches. Consequently, organizations must exercise caution and implement stringent security measures to mitigate the potential risks associated with the accelerated deployment of generative AI technologies. 

Threat to Privacy


Methods of Data Collection 

AI tools generally employ one of two methods to collect data: Data collection is very common in this new tech-era. This is when the AI system is programmed to collect specific data. Examples include online forms, surveys, and cookies on websites that gather information directly from users. 

Another comes Indirect collection, this involves collecting data through various platforms and services. For instance, social media platforms might collect data on users' likes, shares, and comments, or a fitness app might gather data on users' physical activity levels. 

As technology continues to undergo ever-increasing waves of transformation, security, and IT leaders will have to constantly seek a balance between the need to keep up with technology and the need for robust security. Whenever enterprises integrate artificial intelligence into their business, key considerations must be taken into account so that IT teams can achieve maximum results. 

As a fundamental aspect of any IT governance program, it is most important to determine what applications are permissible, in conjunction with implementing controls that not only empower users but also protect the organization from potential risks. Keeping an environment in a secure state requires organizations to monitor AI app usage, trends, behaviours, and the sensitivity of data regularly to detect emerging risks as soon as they emerge.

A second effective way of protecting your company is to block access to non-essential or high-risk applications. Further, policies that are designed to prevent data loss should be implemented to detect sensitive information, such as source code, passwords, intellectual property, or regulated data, so that DLP policies can be implemented. A real-time coaching feature that integrates with the DLP system reinforces the company's policies regarding how AI apps are used, ensuring users' compliance at all times. 

A security plan must be integrated across the organization, sharing intelligence to streamline security operations and work in harmony for a seamless security program. Businesses must adhere to these core cloud security principles to be confident in their experiments with AI applications, knowing that their proprietary corporate data will remain secure throughout the experiment. As a consequence of this approach, sensitive information is not only protected but also allows companies to explore innovative applications of AI that are beyond the realm of mainstream tasks such as the creation of texts or images.  

Microsoft Introduces AI Solution for Erasing Ex from Memories

 


It reveals the story of a woman who is emotionally disturbed and seeks the help of artificial intelligence as she tries to erase her past in director Vikramaditya Motwane's new Hindi film, CTRL. There is no doubt that the movie focuses on data and privacy, but humans are social animals and they need someone to listen to them, guide them, or be there as they go through life.  The CEO of Microsoft AI, Mustafa Suleyman, spoke about this recently in a CNBC interview. 

During an interview with CNN, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing." This will create a close relationship between AI and humans. As a result of the announcement of AI assistants for the workplace, many companies like Microsoft, OpenAI, and Google have come up with such solutions.  

It has been announced by Microsoft CEO Satya Nadella that Windows will be launching a new feature called Recall. A semantic search is more than just a keyword search; it digs deep into users' digital history to recreate moments from the past, tracking them back to the time they happened. It was announced today by Microsoft's AI CEO, Mustafa Suleyman, that Copilot, the company's artificial intelligence assistant, has been redesigned. 

Copilot, a newly revamped version of Microsoft's most popular AI companion, shares the same vision of a companion for AI that will revolutionize the way users interact with technology daily in their day-to-day lives with the AI head. After joining Microsoft earlier this year, after the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift." 

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time, similar to the Pi product. The announcement of AI assistants for the workplace has been made by a number of companies, including Microsoft, OpenAI, and Google.  The Wall Street Journal reported that Microsoft CEO Satya Nadella explained that "Recall is not just about documents." in an interview. 

A sophisticated AI model embedded directly inside the device begins to take screenshots of users' activity and then feeds the data collected into an on-board database that analyzes these activities. By using neural processing technology, all images and interactions can be made searchable, even going as far as searching images by themselves. There are some concerns regarding the events, with Elon Musk warning in a characteristic post that this is akin to an episode of Black Mirror. Going to turn this 'feature' off in the future." 

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

AI-Powered Hack Poses Threat to Billions of Gmail Accounts

 


Currently, there is a cyberattack powered by artificial intelligence that targets Gmail's huge network of 2.5 billion users, which is currently making waves. As a way of tricking people into sharing sensitive information, hackers use advanced techniques, including realistic artificial intelligence-generated scam calls posing as Google Support and impersonating the company's representatives. It has been reported that a new and sophisticated scam has been targeting Gmail users, intending to steal personal information by tricking users into approving fake account recovery requests by posing as Gmail employees. 

A technology consultant and blogger, Sam Mitrovic, shared a detailed blog post detailing his experience with the scam, which emphasized how easy it would be for users to fall victim to this AI-based deception based on clever deception techniques. It begins with an unexpected email or text message telling users that an automated recovery request has been sent to their Gmail account, and they will be asked to agree to it. 

As Mitrovic's case illustrates, the majority of recovery requests come from other countries, such as the United States in Mitrovic's case. It's still not over for Mitrovic though, because about 40 minutes after declining the request, the scammers make their second move-a phone call from what appears to be an official Google number that they pretend to be. The email message appears highly authentic since it uses personal information such as names, addresses, or past communications to convey a strong sense of authenticity. They use several methods to trick users into clicking on malicious links or providing sensitive information, such as login credentials, payment information, and other sensitive information to the attackers. 

A Microsoft solution consultant Sam Mitrovic recently posted an article in his blog about his personal experience with this alarming trend as he highlighted to his readers how difficult it can be to identify these scams. The first notification Mitrovic received from a phishing scam asked him to approve a recovery attempt for a Gmail account. This was a classic phishing attempt aimed at stealing login credentials from Mitrovic. He wisely ignored the alert, knowing that there was a potential danger involved. 

As a result, the attackers were persistent and didn't let up; not long after getting the notification, he got a new notification informing him that he had missed a call from "Google Sydney." The following week, he received the same notification, along with a phone call from the same number. It was the second time he had picked up the phone. Mitrovic said that the American voice on the other end of the line informed him that something suspicious had happened with his Google account a week ago, and someone had accessed it during that period. Apparently, the Google employee, who offered to send an email outlining what happened, did so promptly, and that message arrived from an official Google email address within a short period. 

A key point that Mitrovic stresses is the importance of being vigilant in preventing these scams from taking place. Users of Gmail are strongly advised to take precautionary measures in light of the increasing sophistication of AI-driven cyber threats. One critical recommendation is to avoid approving account recovery requests that were not personally initiated. 

If a recovery notification is received unexpectedly, it should not be approved, as this could be an indication that the account is being targeted for unauthorized access. In the case of phone calls purporting to be from Google, it is important to remain vigilant. Google rarely contacts users directly unless they are engaging with Google Business services. 

Should a call be received claiming to be from Google, it is recommended to immediately hang up and verify the phone number independently before continuing any interaction. Users should also pay close attention to email addresses in communications that appear to be from Google. Spoofed emails may seem legitimate, but careful inspection of details such as the “To” field or the domain name can reveal whether the email is fake. It is advisable to regularly review the security settings of one's Gmail account and examine recent security activity for unfamiliar logins or suspicious behaviour. This can be done by navigating to the “Security” tab within Gmail account settings, where recent login activity and security alerts are displayed. 

For more technologically inclined users, examining the original email headers can provide valuable insights into whether the email was sent from a legitimate Google server. This level of scrutiny can help identify phishing or spoofing attempts with greater accuracy. By following these steps, Gmail users can enhance their security posture and better protect themselves from AI-based scams. The key takeaway is to exercise caution and thoroughly verify any unusual activity or communications related to their accounts. 

The rise of AI-powered hacking techniques poses a significant threat to the security of Gmail users worldwide. As these sophisticated scams become more prevalent and harder to detect, users need to remain vigilant and proactive in protecting their accounts. By carefully reviewing recovery requests, verifying any communication claiming to be from Google, and regularly monitoring account security settings, users can minimize the risk of falling victim to these advanced cyberattacks. Staying informed and exercising caution is critical in safeguarding personal information and maintaining the integrity of online accounts amidst this evolving threat landscape.

AI Deepfakes Pose New Threats to Cryptocurrency KYC Compliance

 


ProKYC is a recently revealed artificial intelligence (AI)-powered deep fake tool that nefarious actors can use to circumvent high-level Know Your Customer (KYC) protocols on cryptocurrency exchanges, presenting as a very sophisticated method to circumvent high-level KYC protocols. A recent report from cybersecurity firm Cato Networks refers to this development as an indication that cybercriminals have stepped up their tactics to get ahead of law enforcement. 

It has been common practice for identity fraud to involve people buying forged documents on the dark web to commit the crime. There is a difference in approach, however, between ProKYC and another company. Fraudsters can use the tool in order to create entirely new identities, which they can use for fraud purposes. Cato Networks report that the AI tool is aimed at targeting crypto exchanges and financial institutions with the special purpose of exploiting them. 

When a new user registers with one of these organizations, they use technology to verify that he is who he claims to be. During this process, a government-issued identification document, such as a passport or driver's license, must be uploaded and matched with a live webcam image that is displayed on the screen. A design in ProKYC maximizes the ability of customers to bypass these checks by generating a fake identity, as well as a deepfakes video. Thereby, criminals are able to circumvent the facial recognition software, allowing them to commit fraud. 

As noted in the press release from Cato Networks, this method introduces a new level of sophistication to the crypto fraud industry. A Cato Networks report published on Oct. 9 reported that Etay Maor, the company's chief security strategist, believes that the new tool represents a significant step forward in terms of what cybercriminals are doing to get around two-factor authentication and KYC mechanisms. 

In the past, fraudsters were forced to buy counterfeit identification documents on the dark web, but with AI-based tools, they can create brand-new ID documents from scratch. This new tool was developed by Cato specifically for crypto exchanges and financial firms whose KYC protocols require matching photos of a new user's face to their government-issued identification documents, such as a passport or a driver's license taken from the webcam of their computers.  

Using the tool of ProKYC, we have been able to generate fake ID documents, as well as accompanying deepfake videos, in order to pass the facial recognition challenges used by some of the largest crypto exchanges around the world. The user creates an artificially intelligent generated face, and then adds that AI-generated profile picture to a template of a passport that is based on an Australian passport. 

The next step is the ProKYC tool, which uses artificial intelligence (AI) to create a fake video and image of the artificially generated person, which is used to bypass the KYC protocols on the Dubai-based crypto exchange Bybit, which is not in compliance with the Eurozone.  It has been reported recently by the cybersecurity company Cato Networks that a deepfake AI tool that can create fake fake accounts is being used by exchanges to evade KYC checks that are being conducted. 

There is a tool called ProKYC that can be downloaded for the price of 629 dollars a year and used by fraudsters to create fake identification documents and generate videos that look almost real. This package includes a camera, a virtual emulator, facial animations, fingerprints, and an image generation program that generates the documents that need to be verified. A recent report highlights the emergence of an advanced AI deepfake tool, custom-built to exploit financial companies’ KYC protocols. 

This tool, designed to circumvent biometric face checks and document cross-verification, has raised concerns by breaching security measures that were previously impenetrable, even by the most sophisticated AI systems. The deepfake, created with a tool known as ProKYC, was showcased in a blog post by Cato Networks. It demonstrates how AI can generate counterfeit ID documents capable of bypassing KYC verification at exchanges like Bybit. 

In one instance, the system accepted a fictitious name, a fraudulent document, and an artificially generated video, allowing the user to complete the platform’s verification process seamlessly. Despite the severity of this challenge, Cato Networks notes that certain methods can still detect these AI-generated identities. 

Techniques such as having human analysts scrutinize unusually high-quality images and videos or identifying inconsistencies in facial movements and image quality are potential safeguards. Legal Ramifications of Identity Fraud The legal consequences of identity fraud, particularly in the United States, are stringent. Penalties can reach up to 15 years in prison, along with substantial fines, depending on the crime's scope and gravity. With the rise of AI tools like ProKYC, combating identity fraud is becoming more difficult for law enforcement, raising the stakes for financial institutions. Rising Activity Among Scammers 

In addition to these developments, September saw a marked increase in deepfake AI activity among crypto scammers. Gen Digital, the parent company of Norton, Avast, and Avira, reported a spike in the use of deepfake videos to deceive investors into fraudulent cryptocurrency schemes. This uptick underscores the need for stronger security measures and regulatory oversight to protect the growing number of investors in the crypto sector. 

The advent of AI-powered tools such as ProKYC marks a new era in cyber fraud, particularly within the cryptocurrency industry. As cybercriminals increasingly leverage advanced technology to evade KYC protocols, financial institutions and exchanges must remain vigilant and proactive. Collaboration among cybersecurity firms, regulatory agencies, and technology developers will be critical to staying ahead of this evolving threat and ensuring robust defenses against identity fraud.

Voice Cloning and Deepfake Threats Escalate AI Scams Across India

 


The rapid advancement of AI technology in the past few years has brought about several benefits for society, but these advances have also led to sophisticated cyber threats. India is experiencing explosive growth in digital adoption, making it one of the most sought-after targets for a surge in artificial intelligence-based scams. This is an alarming reality of today's cybercriminals who are exploiting these emerging technologies in alarming ways to exploit the trust of unsuspecting individuals through voice cloning schemes, the manipulation of public figures' identities and deepfakes. 

There is an increasing problem with scammers finding new ways to deceive the public as AI capabilities become more refined, making it increasingly difficult to tell between genuine and manipulated content as the abilities of AI become more refined. In terms of cyber security expertise and everyday users, the line between reality and digital fabrication is becoming blurred, presenting a serious challenge to both professionals and those who work in the field. 

Among the many high-profile case studies involving voice cloning in the country and the use of deep-fake technology, the severity of these threats and the number of people who have fallen victim to sophisticated methods of deception have led to a rise in these criminal activities in India. It is believed that the recent trend in AI-driven fraud shows that more security measures and public awareness are urgently needed to combat AI-driven fraud to prevent it from spreading.

The scam occurred last year when a scammer swindled a 73-year-old retired government employee in Kozhikode, Kerala, out of 40,000 rupees by using an AI-generated deepfake video that a deep learning algorithm had generated. He created the illusion of an emergency that led to his loss by blending voice manipulation with video manipulation. It is important to realize that the problem runs much deeper than that. 

In Delhi, cybercrime groups have used voice cloning to swindle 50,000 rupees from Lakshmi Chand Chawla, an elderly lady from Yamuna Vihar, by using the practice of voice cloning. It was on October 24 that Chawla received a WhatsApp message saying that his cousin's son had been kidnapped by thugs. This was made believable by recording a voice record of the child who was cloned using artificial intelligence, crying for help to convince the jury. 

The panicked Chawla transferred 20,000 rupees through Paytm to withdraw the funds. It was not until he contacted his cousin, that he realized that the child was never in danger, even though he thought so at first. It is clear from all of these cases that scammers are exploiting AI to gain people's trust in their business. Scammers are no longer anonymous voices, they sound like friends or family members who are in crisis right now.

The McAfee company has released the 'Celebrity Hacker Hot List 2024', which shows which Indian celebrities have name searches that generate the highest level of "risky" searches on the Internet. In this year's results, it was evident that the more viral an individual is, the more appealing their names are to cybercriminals, who are seeking to exploit their fame by creating malicious sites and scams based on their names. These scams have affected many people, leading to big data breaches, financial losses, and the theft of sensitive personal information.  

There is no doubt that Orhan Awatramani, also known as Orry, is on top of the list for India. He has gained a great deal of popularity fast, and in addition to being associated with other high-profile celebrities, he has also gotten a great deal of attention in the media, making him an attractive target for cybercriminals. Especially in this context, it illustrates how cybercriminals can utilize the increase in unverified information about public figures who are new or are upcoming to lure consumers in search of the latest information. 

It has been reported that Diljit Dosanjh, an actor and singer, is being targeted by fraudsters in connection with his upcoming 'Dil-Luminati' concert tour, which is set to begin next month. This is unfortunately not an uncommon occurrence that happens due to overabundant fan interest and a surge in search volume at large-scale events like these, which often leads to fraudulent ticketing websites, discount schemes, resale schemes, and phishing scams.  

As generative artificial intelligence has gained traction, as well as deepfakes, the cybersecurity landscape has become even more complex. Several celebrities are being misled, and this is affecting their careers. Throughout the year, Alia Bhatt has been subject to several incidents that are related to deep fakes, while actors Ranveer Singh and Aamir Khan have falsely been portraying themselves as endorsing political parties in the course of election-related deep fakes. It has been discovered that prominent celebrities such as Virat Kohli and Shahrukh Khan have appeared in deepfake content designed to promote betting apps. 

It is known that scammers are utilizing tactics such as malicious URLs, deceptive messages, and artificially intelligent image, audio, and video scams to take advantage of fans' curiosity. This leads to financial losses as well as damaging the reputation of the impacted celebrities and damaging customer confidence.   There is a disturbing shift in how fraud is being handled (AI-Generated Representative Image) that is alarming (PIQuarterly News) As alarming as voice cloning scams may seem, the danger doesn't end there, as there are many dangers in front of us.

Increasingly, deepfake technology is pushing the boundaries to even greater heights, blending reality with electronic manipulation at an ever-increasing pace, resulting in increasingly difficult detection methods. In recent years, the same technology has been advancing into real-time video deception, starting with voice cloning. Facecam.ai is one of the most striking examples of this technology, which enables users to create deepfake videos that they can live-stream using just one image while users do so. It caused a lot of buzz, highlighting the ability to convincingly mimic a person's face in real time, and showcasing how easily it can be done.

Uploading a photo allowed users to seamlessly swap faces in the video stream without downloading anything. Despite its popularity, the tool had to be shut down after a backlash over its potential usefulness had been raised. It is important to note that this does not mean that the problem has been resolved. The rise of artificial intelligence has led to the proliferation of numerous platforms that offer sophisticated capabilities for creating deepfake videos and manipulating identities, posing serious risks to digital security. 

Although some platforms like Facecam. Ai—which gained popularity for allowing users to create live-streaming deep fake videos using deep fake images—has been taken down due to concerns over misuse, other tools continue to operate with dangerous potential. Notably, platforms like Deep-Live-Cam are still thriving, enabling individuals to swap faces during live video calls. This technology allows users to impersonate anyone, whether it be a celebrity, a politician, or even a friend or family member. What is particularly alarming is the growing accessibility of these tools. As deepfake technology becomes more user-friendly, even those with minimal technical skills can produce convincing digital forgeries. 

The ease with which such content can be created heightens the potential for abuse, turning what might seem like harmless fun into tools for fraud, manipulation, and reputational harm. The dangers posed by these tools extend far beyond simple pranks. As the availability of deepfake technology spreads, the opportunities for its misuse expand exponentially. Fraudulent activities, including impersonation in financial transactions or identity theft, are just a few examples of the potential harm. Manipulation of public opinion, personal relationships, or professional reputations is also at risk, especially as these tools become more widespread and increasingly difficult to regulate. 

The global implications of these scams are already being felt. In one high-profile case, scammers in Hong Kong used a deepfake video to impersonate the Chief Financial Officer of a company, leading to a financial loss of more than $25 million. This case underscores the magnitude of the problem: with the rise of such advanced technology, virtually anyone—not just high-profile individuals—can become a victim of deepfake-related fraud. As artificial intelligence continues to blur the lines between real and fake, society is entering a new era where deception is not only easier to execute but also harder to detect. 

The consequences of this shift are profound, as it fundamentally challenges trust in digital interactions and the authenticity of online communications. To address this growing threat, experts are discussing potential solutions such as Personhood Credentials—a system designed to verify and authenticate that the individual behind a digital interaction is, indeed, a real person. One of the most vocal proponents of this idea is Srikanth Nadhamuni, the Chief Technology Officer of Aadhaar, India's biometric-based identity system.

Nadhamuni co-authored a paper in August 2024 titled "Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who is Real Online." In this paper, he argues that as deepfakes and voice cloning become increasingly prevalent, tools like Aadhaar, which relies on biometric verification, could play a critical role in ensuring the authenticity of digital interactions.Nadhamuni believes that implementing personhood credentials can help safeguard online privacy and prevent AI-generated scams from deceiving people. In a world where artificial intelligence is being weaponized for fraud, systems rooted in biometric verification offer a promising approach to distinguishing real individuals from digital impersonators.

Embargo Ransomware Shifts Focus to Cloud Platforms

 


In a recent security advisory, Microsoft advised that the ransomware threat actor Storm-0501 has recently switched tactics, targeting hybrid cloud environments now to compromise the entire system of victimization. It is becoming increasingly apparent that cybercriminals are finding out how difficult it is to secure hybrid cloud environments. 

In the latest case, an extremely cruel group called Storm-0501 has stepped forward in an attempt to steal from the most vulnerable organizations in the US, including schools, hospitals, and law enforcement. The group is known for its cash-grab operations. As an affiliate of different strains of ransomware as a service (RaaS), Storm-0501 has been around since 2021, as per Microsoft Threat Intelligence's new report on it.

This ransomware operates as affiliates of a variety of RaaS strains such as BlackCat/ALPHV, LockBit, and Embargo, among others. The Storm-0501 ransomware gang is well-known for its operations in on-premise networks, but now the group is focusing on extending its reach to cloud infrastructures as they look to compromise whole networks with their campaigns. 

Since Storm-0501 was first discovered in 2021, it has been associated with the Sabbath ransomware group as an affiliate. There are several notable ransomware groups, such as Hive, BlackCat, LockBit, and Hunters International, that have been involved in these operations from time to time, but it has been growing rapidly. 

There have been recent reports that the group has been using Embargo ransomware as a means of executing their operations. As a result of the group's broad range of targets within the United States, the group has selected a wide array of sectors for its attacks, including hospitals, government agencies, manufacturing companies, transportation companies, and law enforcement agencies. 

As part of their attack pattern, the group usually exploits weak credentials and privileged accounts, enabling them to steal sensitive information from compromised networks and to deploy ransomware to guarantee their success. Earlier this week, Microsoft team members shared information about a recent attack on Microsoft Entra ID (formerly Azure AD) that was performed by Storm-0501 threat actors. 

The credential-synching component of this on-premises Microsoft application is responsible for synchronizing the passwords and other sensitive data between the objects in Active Directory and Entra ID, assuming the credentials of the user are the same for both on-premises and cloud environments. This report warns that once Storm-0501 was able to migrate into the cloud at a later point in time, it was then capable of manipulating, exfiltrating, and setting up persistent backdoors to commit ransomware attacks. 

As a result of exploiting weak usernames and passwords, the attacker gains access to cloud environments via privileged accounts, which sets out to steal data as well as execute a ransomware payload on the target machine. It is Microsoft's position that the Storm-0501 is obtaining initial access to the network by stealing or buying credentials for access, or by exploiting known vulnerabilities that have already been discovered. 

It is worth noting that CVE-2022-47966 has been used in recent attacks against Zoho ManageEngine, CVE-2023-4966 has been used against Citrix NetScaler, and CVE-2023-29300 or CVE-2023-38203 may have been used against ColdFusion 2016. As the adversary moves laterally, it uses frameworks like Impacket and Cobalt Strike, steals data through Rclone binaries renamed to mimic known Windows tools, and disables security agents using PowerShell command-line functions. 

Storm-0501 is malware that has been designed to exploit stolen Microsoft Entra ID credentials (formerly known as Azure AD credentials) to move from on-premise to cloud environments, compromise synchronization accounts for persistence, and hijack sessions for recurrence. Using a Microsoft Entra Connect Sync account is an essential part of synchronizing data between on-premises AD (Active Directory) and Microsoft Entra ID (Entra ID cloud-based). 

These accounts allow a wide range of sensitive actions to be taken on behalf of the On-Premise AD account. In the case that the attacker has gained access to the credentials for the Directory Synchronization Account, he or she has the capability of changing cloud passwords through specialized tools like AADInternals, thus bypassing any additional security measures. 

An unauthorized user may exploit the Storm-0501 vulnerability if the account of a domain admin or other high-privileged user on-premises also exists in the cloud environment and is not properly protected (e.g. it does not implement multi-factor authentication). As soon as the malicious actor has gained access to the cloud infrastructure, they plant a persistent backdoor by creating a new federated domain inside of the Microsoft Entra tenant, which allows them to log in as any user that has the "Immutableid" property set to their benefit. 

A final step would be for the attackers to either install Embargo ransomware in the victim's on-premises infrastructure and cloud-based environments or keep backdoor access available for later use to the victim. In response to the growing prevalence of hybrid cloud environments, Microsoft's Threat Intel team has warned, "As organizations continue to work with multiple platforms to protect their data, securing resources across them becomes a growing challenge."

Keeper Security, vice president of security and infrastructure, said that a zero-trust framework is a highly effective means of achieving this goal for enterprise cybersecurity teams and that it can be achieved by progressively advancing towards one. Using this model, access is restricted based on the customers' roles, making sure that users only have access to the resources they need for their specific roles. 

This minimizes the possibility of malicious actors getting access to those resources," Tiquet stated in an email. "It is widely believed that weak credentials remain one of the most vulnerable entry points in hybrid cloud environments that are likely to be exploited by groups such as Storm-0501." A centralised approach to endpoint device management (EDM) is also vital to the success of the strategy, according to him. Keeping all environments patched - be it cloud-based or on-premises - is one of the best ways to prevent attackers from exploiting known vulnerabilities by ensuring a consistent level of security patching." 

In addition to my previous statement, he added that advanced monitoring tools will allow teams to detect potentially malicious threats across hybrid cloud environments before they can become breaches. SlashNext Security's field CTO Stephen Kowski provided a similar list of recommendations in a statement he sent via e-mail. Embargo, whose contact information can be found here, is a threat group that uses Rust-based malware in its ransomware-as-a-service (RaaS) operation, which accepts affiliates who access companies and deploy the payload, sharing part of the profit with the affiliate. 

As far back as August 2024, an Embargo ransomware affiliate attacked the American Radio Relay League (ARRL) and claimed to have received $1 million for a decryptor that worked once it was provided to them. The theft of sensitive data from Firstmac Limited, an Australian company that deals with mortgages, investment management and investment strategy, was reported to the cybercrime reporting agency earlier this month. When the deadline to negotiate a solution had passed, an Embargo subsidiary was discovered to have breached the company.

Meta Unveils its First Open AI Model That Can Process Images

 

Meta has released new versions of its renowned open source AI model Llama, including small and medium-sized models capable of running workloads on edge and mobile devices. 

Llama 3.2 models were showcased at the company's annual Meta Connect event. They can support multilingual text production and vision apps like image recognition. 

“This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” stated Mark Zuckerberg, CEO of Meta.

Llama 3.2 is based on the huge open source model Llama 3.1, which was released in late July. The previous Llama model was the largest open-source AI model in history, with 405 billion parameters (parameters are the adjustable variables within an AI model that help it learn patterns from data). The size shows the AI's ability to interpret and generate human-like text. 

The new Llama models presented at Meta Connect 2024 are significantly reduced in size. Meta explained that they choose to develop smaller models because not all researchers have the required computational resources and expertise to run a model as large as Llama 3.1.

In terms of performance, Meta's new Llama 3.2 models compete with industry-leading systems from Anthropic and OpenAI. The 3B model exceeds Google's Gemma 2 2.6B and Microsoft's Phi 3.5-mini in tasks such as instruction following and content summarisation. The 90B version, the largest of the models, surpasses both Claude 3-Haiku and GPT-4o-mini on a variety of benchmarks, including the widely used MMLU test, an industry-leading AI model evaluation tool. 

How to access Llama 3.2 models 

The new Llama 3.2 models are open source, so anyone can download and use them to power AI applications. The models can be downloaded straight from llama.com or Hugging Face, a popular open source repository platform. Llama 3.2 models are also available through a number of cloud providers, including Google Cloud, AWS, Nvidia, Microsoft Azure, and Grow, among others.

According to figures published in early September, demand for Meta's Llama models from cloud customers increased tenfold between January and July, and is expected to rise much more in the wake of the new 3.2 line of models. Meta partner Together AI is providing free access to the vision version of Llama 3.2 11B on its platform till the end of the year. 

Vipul Ved Prakash, founder and CEO of Together AI, stated that the new multimodal models will drive the adoption of open-source AI among developers and organisations. 

“We’re thrilled to partner with Meta to offer developers free access to the Llama 3.2 vision model and to be one of the first API providers for Llama Stack,” Prakash noted. “With Together AI's support for Llama models and Llama Stack, developers and enterprises can experiment, build, and scale multimodal applications with the best performance, accuracy, and cost.”

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 


Recent reports published by Deloitte and Deloitte Consulting, highlighting the significance of data privacy as it pertains to Generative Artificial Intelligence (GenAI), have been widely cited. As the survey found, there has been a significant increase in professionals' concerns about data privacy; only 22% ranked it as their top concern at the beginning of 2023, and the number will rise to 72% by the end of 2024. 

Technology is advancing at an exponential rate, and as a result, there is a growing awareness of its potential risks. There has been a surge in concerns over data privacy caused by generative AI across several industries, according to a new report by Deloitte. Only 22% of professionals ranked it as among their top three concerns last year, these numbers have risen to 72% this year, according to a recent study. 

There was also strong concern regarding data provenance and transparency among professionals, with 47% and 40% informing us that they considered them to be among their top three ethical GenAI concerns for this year, respectively. The proportion of respondents concerned about job displacement, however, was only 16%. It is becoming increasingly common for staff to be curious about how AI technology operates, especially when it comes to sensitive data. 

Almost half of security professionals surveyed by HackerOne in September believe AI is risky, with many of them believing leaks of training data threaten their networks' security. It is noteworthy that 78% of business leaders ranked "safe and secure" as one of their top three ethical technology principles. This represents a 37% increase from the year 2023, which shows the importance of security to businesses today.

As a result of Deloitte's 2024 "State of Ethics and Trust in Technology " report, the results of the survey were reported in a report which surveyed over 1,800 business and technical professionals worldwide, asking them to rate the ethical principles that they apply to technological processes and, specifically, to their use of GenAI. It is becoming increasingly important for technological leaders to carefully examine the talent needs of their organizations, as they assist in guiding the adoption of generative AI. There are also ethical considerations that should be included on this checklist as well. 

A Deloitte report highlights the effectiveness of GenAI in eliminating the "expertise barrier": more people will be able to make more use of their data more happily and cost-effectively, according to Sachin Kulkarni, managing director, of risk and brand protection, at Deloitte. There may be a benefit to this, though as a result there may be an increased risk of data leaks as a result of this action." 

Furthermore, there has been concern expressed about the effects of generative AI on transparency, data provenance, intellectual property ownership, and hallucinations among professionals. Even though job displacement is often listed as a top concern by respondents, only 16% of those asked are reporting job displacement to be true. As a result of the assessment of emerging technology categories, business and IT professionals have concluded that cognitive technologies, which include large language models, artificial intelligence, neural networks, and generative AI, among others, pose the greatest ethical challenges.  

This category had a significant achievement over other technology verticals, including virtual reality, autonomous vehicles, and robotics. However, respondents stated that they considered cognitive technologies to be the most likely to bring about social good in the future. Flexential's survey published earlier this month found that several executives, in light of the huge reliance on data, are concerned about how generative AI tools can increase cybersecurity risks by extending their attack surface as a result, according to the report. 

In Deloitte's annual report, however, the percentage of professionals reporting that they use GenAI internally grew by 20% between last year and this year, reflecting an increase in the use of GenAI by their employees over the previous year. 94% of the respondents said they had incorporated it into their organization's processes in some way or another. Nevertheless, most respondents indicated that these technologies are either still in the pilot phase or are limited in their usage, with only 12% saying that they are used extensively. 

Gartner research published last year also found that about 80% of GenAI projects fail to make it past proof-of-concept as a result of a lack of resources. Europe has been impacted by the recent EU Artificial Intelligence Act and 34% of European respondents have reported that their organizations have taken action over the past year to change their use of AI to adapt to the Act's requirements. 

According to the survey results, however, the impact of the Act is more widespread, with 26% of respondents from the South Asian region changing their lifestyles because of it, and 16% of those from the North and South American regions did the same. The survey also revealed that 20 per cent of respondents based in the U.S. had altered the way their organization was operating as a result of the executive order. According to the survey, 25% of South Asians, 21% of South Americans, and 12% of Europeans surveyed had the same perspective. 

The report explains that "Cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse," according to its authors. The accelerated adoption of GenAI technology is overtaking the capacity of organizations to effectively govern it at a rapid pace. GenAI tools can provide a great deal of help to businesses in a range of areas, from choosing which use cases to apply them to quality assurance, to implementing ethical standards. 

Companies should prioritize both of these areas." Despite artificial intelligence being widely used, policymakers want to make sure that they won't get themselves into trouble with its use, especially when it comes to legislation because any use of it can lead to a lot of problems. 34% of respondents reported that regulatory compliance was their most important reason for implementing ethics policies and guidelines to comply with regulations, while regulatory penalties topped the list of concerns about not complying with such policies and guidelines. 

A new piece of legislation in the EU, known as the Artificial Intelligence Act, entered into force on August 1. The Act, which takes effect today, is intended to ensure that artificial intelligence systems that are used in high-risk environments are safe, transparent, and ethical. If a company does not comply with the regulations, it could face financial penalties ranging from €35 million ($38 million), which is equivalent to 7% of global turnover, to €7.5 million ($8.1 million), which is equivalent to 1.5% of global turnover. 

Over a hundred companies have already signed the EU Artificial Intelligence Pact, with Amazon, Google, Microsoft, and OpenAI among them; they have also volunteered to begin implementing the requirements of the bill before any deadlines established by law. Both of these actions demonstrate that they are committed to the responsible implementation of artificial intelligence in society, and also help them to avoid future legal challenges in the future. 

The United States released a similar executive order in October 2023 with broad guidelines regarding the protection and enhancement of military, civil, and personal privacy as well as protecting the security of government agencies while fostering AI innovation and competition across the entire country. Even though this is not a law, many companies operating in the U.S. have made policy changes to ensure compliance with regulatory changes and comply with public expectations regarding the privacy and security of AI.