
Behind the Search Bar: How Google's Algorithm Shapes Our Perspectives

Search engines like Google have become the gateway to information. We rely on them for everything from trivial facts to critical news updates. However, what if these seemingly neutral tools were subtly shaping the way we perceive the world? According to the BBC article "The 'bias machine': How Google tells you what you want to hear," there's more to Google's search results than meets the eye.

The Power of Algorithms

At the heart of Google's search engine lies an intricate web of algorithms designed to deliver the most relevant results based on a user's query. These algorithms analyze a myriad of factors, including keywords, website popularity, and user behaviour. The goal is to present the most pertinent information quickly. However, these algorithms are not free from bias.

One key concern is the so-called "filter bubble" phenomenon. This term, coined by internet activist Eli Pariser, describes a situation where algorithms selectively guess what information a user would like to see based on their past behaviour. As a result, users are often presented with search results that reinforce their existing beliefs, creating a feedback loop of confirmation bias.

Confirmation Bias in Action

Imagine two individuals with opposing views on climate change. If both search "climate change" on Google, they might receive drastically different results tailored to their browsing history and past preferences. The climate change skeptic might see articles questioning the validity of climate science, while the believer might be shown content supporting the consensus on global warming. This personalization of search results can deepen existing divides, making it harder for individuals to encounter and consider alternative viewpoints.
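
To make the mechanism concrete, here is a toy re-ranking sketch in Python. The scoring formula, weights, and topics are entirely invented and are not Google's actual algorithm; the point is only to show how blending a relevance score with a user's click history can flip the order of results for the same query.

```python
# Toy personalised ranking (hypothetical scoring, not Google's algorithm).
# Results are re-ranked by blending baseline relevance with the user's past
# clicks per topic, which is the mechanism behind the "filter bubble".

def personalized_rank(results, user_topic_clicks):
    """results: list of (title, topic, base_relevance); user_topic_clicks: {topic: count}."""
    total = sum(user_topic_clicks.values()) or 1
    def score(result):
        title, topic, relevance = result
        affinity = user_topic_clicks.get(topic, 0) / total   # past-behaviour signal
        return 0.6 * relevance + 0.4 * affinity              # weights are made up
    return sorted(results, key=score, reverse=True)

results = [
    ("IPCC report on warming", "consensus", 0.80),
    ("Scientists question climate models", "skeptic", 0.78),
]
print(personalized_rank(results, {"skeptic": 9, "consensus": 1})[0][0])
# A heavy clicker of skeptic content sees the skeptic article first; a different
# click history flips the order, so two users see two different "front pages".
```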

How Does It Affect People at Large?

The implications of this bias extend far beyond individual search results. In a society increasingly polarized by political, social, and cultural issues, the reinforcement of biases can contribute to echo chambers where divergent views are rarely encountered or considered. This can lead to a more fragmented and less informed public.

Moreover, the power of search engines to influence opinions has not gone unnoticed by those in positions of power. Political campaigns, advertisers, and interest groups have all sought to exploit these biases to sway public opinion. By strategically optimizing content for search algorithms, they can ensure their messages reach the most receptive audiences, further entrenching bias.

How to Address the Bias?

While search engine bias might seem like an inescapable feature of modern life, users do have some agency. Awareness is the first step. Users can also diversify their information sources: instead of relying solely on Google, consider using multiple search engines and news aggregators, and visiting a variety of websites directly. This can help break the filter bubble and expose individuals to a wider range of perspectives.

Social Media Content Fueling AI: How Platforms Are Using Your Data for Training

 

OpenAI has admitted that developing ChatGPT would not have been feasible without the use of copyrighted content to train its algorithms. It is widely known that artificial intelligence (AI) systems heavily rely on social media content for their development. In fact, AI has become an essential tool for many social media platforms.

For instance, LinkedIn is now using its users’ resumes to fine-tune its AI models, while Snapchat has indicated that if users engage with certain AI features, their content might appear in advertisements. Despite this, many users remain unaware that their social media posts and photos are being used to train AI systems.

Social Media: A Prime Resource for AI Training

AI companies aim to make their models as natural and conversational as possible, with social media serving as an ideal training ground. The content generated by users on these platforms offers an extensive and varied source of human interaction. Social media posts reflect everyday speech and provide up-to-date information on global events, which is vital for producing reliable AI systems.

However, it's important to recognize that AI companies are utilizing user-generated content for free. Your vacation pictures, birthday selfies, and personal posts are being exploited for profit. While users can opt out of certain services, the process varies across platforms, and there is no assurance that your content will be fully protected, as third parties may still have access to it.

How Social Platforms Are Using Your Data

Recently, the United States Federal Trade Commission (FTC) revealed that social media platforms are not effectively regulating how they use user data. Major platforms have been found to use personal data for AI training purposes without proper oversight.

For example, LinkedIn has stated that user content can be utilized by the platform or its partners, though they aim to redact or remove personal details from AI training data sets. Users can opt out by navigating to their "Settings and Privacy" under the "Data Privacy" section. However, opting out won’t affect data already collected.

Similarly, the platform formerly known as Twitter, now X, has been using user posts to train its chatbot, Grok. Elon Musk’s social media company has confirmed that its AI startup, xAI, leverages content from X users and their interactions with Grok to enhance the chatbot’s ability to deliver “accurate, relevant, and engaging” responses. The goal is to give the bot a more human-like sense of humor and wit.

To opt out of this, users need to visit the "Data Sharing and Personalization" tab in the "Privacy and Safety" settings. Under the “Grok” section, they can uncheck the box that permits the platform to use their data for AI purposes.

Regardless of the platform, users need to stay vigilant about how their online content may be repurposed by AI companies for training. Always review your privacy settings to ensure you're informed and protected from unintended data usage by AI technologies.

NIST Approves IBM's Quantum-Safe Algorithms for Future Data Security

 


In a defining move for digital security, the National Institute of Standards and Technology (NIST) has given its official approval to three quantum-resistant algorithms developed in collaboration with IBM Research. These algorithms are designed to safeguard critical data and systems from the emerging threats posed by quantum computing.

The Quantum Computing Challenge

Quantum computing is rapidly approaching, bringing with it the potential to undermine current encryption techniques. These advanced computers could eventually decode the encryption protocols that secure today’s digital communications, financial transactions, and sensitive information, making them vulnerable to breaches. To mitigate this impending risk, cybersecurity experts are striving to develop encryption methods capable of withstanding quantum computational power.

IBM's Leadership in Cybersecurity

IBM has been at the forefront of efforts to prepare the digital world for the challenges posed by quantum computing. The company highlights the necessity of "crypto-agility": the capability to swap out cryptographic methods quickly as security challenges evolve. This flexibility is especially crucial as quantum computing technology continues to develop, posing new threats to traditional security measures.
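
As a rough illustration of what crypto-agility looks like in practice, the sketch below defines a pluggable key-encapsulation (KEM) interface and a small registry keyed by algorithm name. Everything here is hypothetical: the concrete ML-KEM and classical implementations would be supplied by a cryptography library and appear only as commented-out placeholders.

```python
# A sketch of a crypto-agility layer (hypothetical interface; real KEM
# implementations would come from a cryptography library).

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class KEM:
    """A generic key-encapsulation mechanism interface."""
    keygen: Callable[[], Tuple[bytes, bytes]]            # -> (public_key, secret_key)
    encapsulate: Callable[[bytes], Tuple[bytes, bytes]]  # public_key -> (ciphertext, shared_secret)
    decapsulate: Callable[[bytes, bytes], bytes]         # (secret_key, ciphertext) -> shared_secret

# One registry, many algorithms: swapping schemes later is a policy change,
# not a rewrite of every call site.
KEM_REGISTRY: Dict[str, KEM] = {
    # "X25519":     KEM(...),   # today's classical scheme (placeholder)
    # "ML-KEM-768": KEM(...),   # NIST-standardised quantum-safe scheme (placeholder)
}

def negotiate_kem(preference_order: List[str]) -> KEM:
    """Return the first algorithm in the caller's preference list that is available."""
    for name in preference_order:
        if name in KEM_REGISTRY:
            return KEM_REGISTRY[name]
    raise ValueError("no mutually supported key-encapsulation algorithm")

# Example policy: prefer the quantum-safe scheme, fall back to the classical one.
# kem = negotiate_kem(["ML-KEM-768", "X25519"])
```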

NIST’s Endorsement of Quantum-Safe Algorithms

NIST's recent endorsement of three IBM-developed algorithms is a crucial milestone in the advancement of quantum-resistant cryptography. The algorithms, known as ML-KEM for encryption and ML-DSA and SLH-DSA for digital signatures, are integral to IBM's broader strategy to ensure the resilience of cryptographic systems in the quantum era.

To facilitate the transition to quantum-resistant cryptography, IBM has introduced two essential tools: the IBM Quantum Safe Explorer and the IBM Quantum Safe Remediator. The Quantum Safe Explorer helps organisations identify which cryptographic methods are most susceptible to quantum threats, guiding their prioritisation of updates. The Quantum Safe Remediator, on the other hand, provides solutions to help organisations upgrade their systems with quantum-resistant cryptography, ensuring continued security during this transition.

As quantum computing technology advances, the urgency for developing encryption methods that can withstand these powerful machines becomes increasingly clear. IBM's contributions to the creation and implementation of quantum-safe algorithms are a vital part of the global effort to protect digital infrastructure from future threats. With NIST's approval, these algorithms represent a meaningful leap forward in securing sensitive data and systems against quantum-enabled attacks. By promoting crypto-agility and offering tools to support a smooth transition to quantum-safe cryptography, IBM is playing a key role in building a more secure digital future.


Many Passwords Can Be Cracked in Under an Hour, Study Finds


 

If you're not using strong, random passwords, your accounts might be more vulnerable than you think. A recent study by cybersecurity firm Kaspersky shows that a lot of passwords can be cracked in less than an hour due to advancements in computer processing power.

Kaspersky's research team used a massive database of 193 million passwords from the dark web. These passwords were hashed and salted, meaning they were somewhat protected, but still needed to be guessed. Using a powerful Nvidia RTX 4090 GPU, the researchers tested how quickly different algorithms could crack these passwords.

The results are alarming: simple eight-character passwords, made up of same-case letters and digits, could be cracked in as little as 17 seconds. Overall, they managed to crack 59% of the passwords in the database within an hour.
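
A quick back-of-the-envelope calculation shows why that 17-second figure is plausible. The hash rate below is an assumption, roughly what a single RTX 4090 manages against fast, unsalted hashes such as MD5; real speeds depend heavily on the hashing algorithm the passwords were stored with.

```python
# Rough time-to-crack estimate for an 8-character, single-case letters + digits
# password (the hash rate is an assumed figure, not taken from the Kaspersky study).

keyspace = 36 ** 8            # 26 letters (one case) + 10 digits, 8 positions
hash_rate = 1.6e11            # assumed guesses per second on a high-end GPU

print(f"{keyspace:.2e} combinations")          # ~2.82e12
print(f"{keyspace / hash_rate:.1f} seconds")   # ~17.6 seconds to exhaust the space
```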

The team tried several methods, including the popular brute force attack, which attempts every possible combination of characters. While brute force is less effective for longer and more complex passwords, it still easily cracked many short, simple ones. They improved on brute force by incorporating common character patterns, words, names, dates, and sequences.

With the best algorithm, they guessed 45% of passwords in under a minute, 59% within an hour, and 73% within a month. Only 23% of passwords would take longer than a year to crack.

To protect your accounts, Kaspersky recommends using random, computer-generated passwords and avoiding obvious choices like words, names, or dates. They also suggest checking if your passwords have been compromised on sites like HaveIBeenPwned? and using unique passwords for different websites.
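
For readers who want to automate that check, Have I Been Pwned's Pwned Passwords service exposes a k-anonymity range API: only the first five characters of the password's SHA-1 hash ever leave your machine. A minimal Python sketch (assuming the `requests` package is installed) might look like this:

```python
# Check whether a password appears in known breaches via the Pwned Passwords
# range API (k-anonymity: only the first 5 SHA-1 hex characters are sent).

import hashlib
import requests

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)                # times this password appears in breaches
    return 0

print(pwned_count("password123"))            # a large count means: never use it
```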

This research serves as a reminder of the importance of strong passwords in today's digital world. By taking these steps, you can significantly improve your online security and keep your accounts safe from hackers.


How to Protect Your Passwords

The importance of strong, secure passwords cannot be overstated. As the Kaspersky study shows, many common passwords are easily cracked with modern technology. Here are some tips to better protect your online accounts:

1. Use Random, Computer-Generated Passwords: These are much harder for hackers to guess because they don't follow predictable patterns.

2. Avoid Using Common Words and Names: Hackers often use dictionaries of common words and names to guess passwords.

3. Check for Compromised Passwords: Websites like HaveIBeenPwned? can tell you if your passwords have been leaked in a data breach.

4. Use Unique Passwords for Each Account: If one account gets hacked, unique passwords ensure that your other accounts remain secure.

Following these tips can help you stay ahead of hackers and protect your personal information. With the increasing power of modern computers, taking password security seriously is more important than ever.


Google Confirms Leak of 2,500 Internal Documents on Search Algorithm

 

In a significant incident, Google has confirmed the leak of 2,500 internal documents, exposing closely guarded information about its search ranking algorithm. The breach was first highlighted by SEO experts Rand Fishkin and Mike King, and The Verge sought confirmation from Google via email. After multiple requests, Google spokesperson Davis Thompson acknowledged the leak, urging caution against making inaccurate assumptions based on potentially out-of-context, outdated, or incomplete information.

The leaked data has stirred considerable interest, particularly as it reveals that Google considers the number of clicks when ranking web pages. This contradicts Google’s longstanding assertion that such metrics are not part of their ranking criteria. Despite this revelation, The Verge report indicates that it remains unclear which specific data points are actively used in ranking. It suggests that some of the information might be outdated, used strictly for training, or collected without being directly applied to search algorithms. 

Thompson responded to the allegations by emphasizing Google's commitment to transparency about how Search works and the factors their systems consider. He also highlighted Google's efforts to protect the integrity of search results from manipulation. This response underscores the complexity of Google's algorithm and the company's ongoing efforts to balance transparency with safeguarding its proprietary technology. The leak comes at a time when the intricacies of Google's search algorithm are under intense scrutiny.

Recent documents and testimony in the US Department of Justice antitrust case have already provided glimpses into the signals Google uses when ranking websites. This incident adds another layer of insight, though it also raises questions about the security of sensitive information within one of the world’s largest tech companies. Google’s decisions about search rankings have far-reaching implications. From small independent publishers to large online businesses, many rely on Google’s search results for visibility and traffic. 

The revelation of these internal documents not only impacts those directly involved in SEO and digital marketing but also sparks broader discussions about data security and the transparency of algorithms that significantly influence online behaviour and commerce. As the fallout from this leak continues, it serves as a reminder of the delicate balance between protecting proprietary information and the public’s interest in understanding the mechanisms that shape their online experiences. Google’s ongoing efforts to clarify and defend its practices will be crucial in navigating the challenges posed by this unprecedented exposure of its internal workings.

Deciphering the Impact of Neural Networks on Artificial Intelligence Evolution

 

Artificial intelligence (AI) has long been a frontier of innovation, pushing the boundaries of what machines can achieve. At the heart of AI's evolution lies the fascinating realm of neural networks, sophisticated systems inspired by the complex workings of the human brain. 

In this comprehensive exploration, we delve into the multifaceted landscape of neural networks, uncovering their pivotal role in shaping the future of artificial intelligence. Neural networks have emerged as the cornerstone of AI advancement, revolutionizing the way machines learn, adapt, and make decisions. 

Unlike traditional AI models constrained by rigid programming, neural networks possess the remarkable ability to glean insights from vast datasets through adaptive learning mechanisms. This paradigm shift has ushered in a new era of AI characterized by flexibility, intelligence, and innovation. 

At their core, neural networks mimic the interconnected neurons of the human brain, with layers of artificial nodes orchestrating information processing and decision-making. These networks come in various forms, from Feedforward Neural Networks (FNN) for basic tasks to complex architectures like Convolutional Neural Networks (CNN) for image recognition and Generative Adversarial Networks (GAN) for creative tasks. 
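
To ground the idea, here is a minimal feedforward network written from scratch in NumPy. It is an illustrative sketch rather than a production model: a small hidden layer of artificial neurons learns the XOR function through a forward pass, backpropagation, and gradient descent.

```python
# A minimal feedforward neural network in NumPy (illustrative sketch, not a
# production model): a hidden layer of artificial neurons learns XOR.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)              # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)              # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)                          # forward pass
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                    # backpropagation
    d_hidden = d_out @ W2.T * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out                           # gradient descent step
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(out.round(3).ravel())   # should approach [0, 1, 1, 0]
```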

Each type offers unique capabilities, allowing AI systems to excel in diverse applications. One of the defining features of neural networks is their ability to adapt and learn from data patterns. Through techniques such as machine learning and deep learning, these systems can analyze complex datasets, identify intricate patterns, and make intelligent judgments without explicit programming. This adaptive learning capability empowers AI systems to continuously evolve and improve their performance over time, paving the way for unprecedented levels of sophistication. 

Despite their transformative potential, neural networks are not without challenges and ethical dilemmas. Issues such as algorithmic bias, opacity in decision-making processes, and data privacy concerns loom large, underscoring the need for responsible development and governance frameworks. By addressing these challenges head-on, we can ensure that AI advances in a manner that aligns with ethical principles and societal values. 

As we embark on this journey of exploration and innovation, it is essential to recognize the immense potential of neural networks to shape the future of artificial intelligence. By fostering a culture of responsible development, collaboration, and ethical stewardship, we can harness the full power of neural networks to tackle complex challenges, drive innovation, and enrich the human experience. 

The evolution of artificial intelligence is intricately intertwined with the transformative capabilities of neural networks. As these systems continue to evolve and mature, they hold the promise of unlocking new frontiers of innovation and discovery. By embracing responsible development practices and ethical guidelines, we can ensure that neural networks serve as catalysts for positive change, empowering AI to fulfill its potential as a force for good in the world.

Where is AI Leading Content Creation?


Artificial Intelligence (AI) is reshaping the world of social media content creation, offering creators new possibilities and challenges. The fusion of art and technology is empowering creators by automating routine tasks, allowing them to channel their energy into more imaginative pursuits. AI-driven tools like Midjourney, ElevenLabs, Opus Clip, and Papercup are democratising content production, making it accessible and cost-effective for creators from diverse backgrounds.  

Automation is at the forefront of this revolution, freeing up time and resources for creators. These AI-powered tools streamline processes such as research, data analysis, and content production, enabling creators to produce high-quality content more efficiently. This democratisation of content creation fosters diversity and inclusivity, amplifying voices from various communities. 

Yet, as AI takes centre stage, questions arise about authenticity and originality. While AI-generated content can be visually striking, concerns linger about its soul and emotional depth compared to human-created content. Creators find themselves navigating this terrain, striving to maintain authenticity while leveraging AI-driven tools to enhance their craft. 

AI analytics are playing a pivotal role in content optimization. Platforms like YouTube utilise AI algorithms for A/B testing headlines, predicting virality, and real-time audience sentiment analysis. Creators, armed with these insights, refine their content strategies to tailor messages, ultimately maximising audience engagement. However, ethical considerations like algorithmic bias and data privacy need careful attention to ensure the responsible use of AI analytics in content creation. 

The rise of virtual influencers, like Lil Miquela and Shudu Gram, poses a unique challenge to traditional content creators. While these virtual entities amass millions of followers, they also threaten the livelihoods of human creators, particularly in influencer marketing campaigns. Human creators, by establishing genuine connections with their audience and upholding ethical standards, can distinguish themselves from virtual counterparts, maintaining trust and credibility. 

As AI continues its integration into content creation, ethical and societal concerns emerge. Issues such as algorithmic bias, data privacy, and intellectual property rights demand careful consideration for the responsible deployment of AI technologies. Upholding integrity and ethical standards in creative practices, alongside collaboration between creators, technologists, and policymakers, is crucial to navigating these challenges and fostering a sustainable content creation ecosystem. 

In this era of technological evolution, the impact of AI on social media content creation is undeniable. As we embrace the possibilities it offers, addressing ethical concerns and navigating through the intricacies of this digitisation is of utmost importance for creators and audiences alike.

 

FBI Alerts: Hackers Exploit AI for Advanced Attacks

The Federal Bureau of Investigation (FBI) has recently warned against the increasing use of artificial intelligence (AI) in cyberattacks. The FBI asserts that hackers are increasingly using AI-powered tools to create sophisticated and more harmful malware, which makes cyber defense more difficult.

According to sources, the FBI is concerned that malicious actors are harnessing the capabilities of AI to bolster their attacks. The ease of access to open-source AI programs has provided hackers with a potent arsenal to devise and deploy attacks with greater efficacy. The agency's spokesperson noted, "AI-driven cyberattacks represent a concerning evolution in the tactics employed by malicious actors. The utilization of AI can significantly amplify the impact of their attacks."

AI has lowered the barrier to entry for cybercriminals. Creating complex malware used to require considerable expertise and time, which limited the scope of attacks. By integrating AI algorithms into malware development, even inexperienced hackers can now produce effective and evasive malware.

The FBI's suspicions are supported by instances showing the disruptive potential of AI-assisted attacks. Security researchers have noted that AI allows malware to adapt quickly and automatically, making it difficult for conventional protection measures to keep up. Because AI can learn and adapt in real time, hackers can design malware that avoids detection by changing its behavior in response to evolving security procedures.

The usage of AI-generated deepfake content, which may be exploited for sophisticated phishing attempts, raises even more concerns. These assaults sometimes include impersonating reliable people or organizations, increasing the possibility that targets may be compromised.

Cybersecurity professionals underline the need to modify defensive methods as the threat landscape changes. As one cybersecurity expert put it, "The use of AI in cyberattacks necessitates a parallel development of AI-driven defense mechanisms." To combat the increasing danger, AI-powered security systems that can analyze patterns, find abnormalities, and react in real time are becoming essential.

Although AI has enormous potential to revolutionize industries for the better, its dual-use nature means caution must be taken to prevent malicious implementations. As the FBI underscores the growing threat of AI-powered attacks, partnership between law enforcement, cybersecurity companies, and technology specialists becomes essential to stay one step ahead of hackers.

The Risks and Ethical Implications of AI Clones


The rapid advancement of artificial intelligence (AI) technology has opened up a world of exciting possibilities, but it also brings to light important concerns regarding privacy and security. One such emerging issue is the creation of AI clones based on user data, which carries significant risks and ethical implications that must be carefully addressed.

AI clones are virtual replicas designed to mimic an individual's behavior, preferences, and characteristics using their personal data. This data is gathered from various digital footprints, such as social media activity, browsing history, and online interactions. By analyzing and processing this information, AI algorithms can generate personalized clones capable of simulating human-like responses and behaviors.

While the concept of AI clones may appear intriguing, it raises substantial concerns surrounding privacy and consent. The primary risk stems from potential misuse or unauthorized access to personal data, as creating AI clones often necessitates extensive information about an individual. Such data may be vulnerable to breaches or unauthorized access, leading to potential misuse or abuse.

Furthermore, AI clones can be exploited for malicious purposes, including social engineering or impersonation. In the wrong hands, these clones could deceive individuals, manipulate their opinions, or engage in fraudulent activities. The striking resemblance between AI clones and real individuals makes it increasingly challenging for users to distinguish between genuine interactions and AI-generated content, intensifying the risks associated with targeted scams or misinformation campaigns.

Moreover, the ethical implications of AI clones are significant. Creating and employing AI clones without explicit consent or individuals' awareness raises questions about autonomy, consent, and the potential for exploitation. Users may not fully comprehend or anticipate the consequences of their data being utilized to create AI replicas, particularly if those replicas are employed for purposes they do not endorse or approve.

Addressing these risks necessitates a multifaceted approach. Strengthening data protection laws and regulations is crucial to safeguard individuals' privacy and prevent unauthorized access to personal information. Transparency and informed consent should form the cornerstone of AI clone creation, ensuring that users possess complete knowledge and control over the use of their data.

Furthermore, AI practitioners and technology developers must adhere to ethical standards that encompass secure data storage, encryption, and effective access restrictions. To prevent potential harm and misuse, ethical considerations should be deeply ingrained in the design and deployment of AI systems.

By striking a delicate balance between the potential benefits and potential pitfalls of AI clones, we can harness the power of this technology while safeguarding individuals' privacy, security, and ethical rights. Only through comprehensive safeguards and responsible practices can we navigate the complex landscape of AI clones and protect against their potential negative implications.

Promoting Trust in Facial Recognition: Principles for Biometric Vendors

 

Facial recognition technology has gained significant attention in recent years, with its applications ranging from security systems to unlocking smartphones. However, concerns about privacy, security, and potential misuse have also emerged, leading to a call for stronger regulation and ethical practices in the biometrics industry. To promote trust in facial recognition technology, biometric vendors should embrace three key principles that prioritize privacy, transparency, and accountability.
1. Privacy Protection: Respecting individuals' privacy is crucial when deploying facial recognition technology. Biometric vendors should adopt privacy-centric practices, such as data minimization, ensuring that only necessary and relevant personal information is collected and stored. Clear consent mechanisms must be in place, enabling individuals to provide informed consent before their facial data is processed. Additionally, biometric vendors should implement strong security measures to safeguard collected data from unauthorized access or breaches.
2. Transparent Algorithms and Processes: Transparency is essential to foster trust in facial recognition technology. Biometric vendors should disclose information about the algorithms used, ensuring they are fair, unbiased, and capable of accurately identifying individuals across diverse demographic groups (a brief sketch of such a per-group evaluation follows this list). Openness regarding the data sources and training datasets is vital, enabling independent audits and evaluations to assess algorithm accuracy and potential biases. Transparency also extends to the purpose and scope of data collection, giving individuals a clear understanding of how their facial data is used.
3. Accountability and Ethical Considerations: Biometric vendors must demonstrate accountability for their facial recognition technology. This involves establishing clear policies and guidelines for data handling, including retention periods and the secure deletion of data when no longer necessary. The implementation of appropriate governance frameworks and regular assessments can help ensure compliance with regulatory requirements, such as the General Data Protection Regulation (GDPR) in the European Union. Additionally, vendors should conduct thorough impact assessments to identify and mitigate potential risks associated with facial recognition technology.
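
As a concrete illustration of the transparency principle above, the sketch below computes false match and false non-match rates per demographic group from labelled verification trials. The data and group names are entirely made up; real evaluations use large, standardised benchmark datasets.

```python
# Per-group error rates from labelled verification trials (made-up records).
# Each trial: (demographic_group, is_genuine_pair, system_said_match).

from collections import defaultdict

trials = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", False, True),
    ("group_b", True, False), ("group_b", False, False), ("group_b", True, True),
]

stats = defaultdict(lambda: {"fm": 0, "impostor": 0, "fnm": 0, "genuine": 0})
for group, genuine, matched in trials:
    s = stats[group]
    if genuine:
        s["genuine"] += 1
        s["fnm"] += int(not matched)       # false non-match: genuine pair rejected
    else:
        s["impostor"] += 1
        s["fm"] += int(matched)            # false match: impostor pair accepted

for group, s in stats.items():
    fmr = s["fm"] / s["impostor"] if s["impostor"] else 0.0
    fnmr = s["fnm"] / s["genuine"] if s["genuine"] else 0.0
    print(f"{group}: false match rate={fmr:.2f}, false non-match rate={fnmr:.2f}")
# Large gaps between groups are a red flag that the model does not perform
# equally well across demographics.
```
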
Biometric businesses must address concerns and foster trust in their goods and services as facial recognition technology spreads. These vendors can aid in easing concerns around facial recognition technology by adopting values related to privacy protection, openness, and accountability. Adhering to these principles can not only increase public trust but also make it easier to create regulatory frameworks that strike a balance between innovation and the defense of individual rights. The development of facial recognition technology will ultimately be greatly influenced by the moral and ethical standards upheld by the biometrics sector.






Demanding Data Privacy Measures, FBI Cyber Agent Urges Users

 

The FBI keeps a close eye on cybersecurity risks, but officials emphasized that, to be more proactive about prevention, they need the assistance of both individuals and businesses.

Algorithms let every one of us navigate that vast and somewhat disorganized information ecosystem with ease. At their best, these algorithms are genuinely beneficial. At their worst, they are tools of mass deception that can seriously harm us, our loved ones, and our society.

These algorithms don't produce immediate or obvious changes. Instead, they drive persistent micro-manipulations that, over time, significantly alter our culture, politics, and attitudes. It makes little difference whether you personally can fend off the manipulation or choose not to use the apps that rely on these algorithms: when enough of your neighbors and friends make these almost imperceptible adjustments in attitude and behavior, your environment will change anyway, not in ways that benefit you, but in ways that benefit the people who own and manage the platforms.

Over the years, numerous government officials have voiced comparable cautions, and two presidential administrations have made various attempts to address these security concerns. TikTok has long maintained that it does not comply with Chinese government content filtering regulations and that it stores American users' data in the United States. However, the company has come under increasing criticism, and in July it finally admitted that non-American staff members did indeed have access to data from American users.

Data privacy advocates have long raised concerns about these algorithms, but they have had little luck in enacting significant change. The American Data Privacy and Protection Act (ADPPA) would, for the first time, begin to hold the developers of these algorithms responsible and force them to show that their engagement formulas are not damaging the public. Because of these concerns, the U.S. Senate overwhelmingly passed a law barring the software on all federally issued devices. At least 11 other states have already ordered similar bans on state-owned devices.

Consumers currently have little control over how and by whom their equally important personal data is used for the benefit of others. A law similar to the ADPPA would offer a procedure to begin comprehending how these algorithms function, allowing users to have an impact on how they operate and are used.



A New Era is Emerging in Cybersecurity, but Only the Best Algorithms will Survive

 

The industry recognized that basic fingerprinting could not keep up with the pace of these developments, and the need to be everywhere, at all times, drove the adoption of AI technology to deal with the scale and complexity of modern business security.

Since then, the AI defence market has become crowded with vendors promising data analytics, looking for "fuzzy matches": close matches to previously encountered threats, and eventually using machine learning to detect similar attacks. While this is an advancement over basic signatures, using AI in this manner does not hide the fact that it is still reactive. It may be capable of recognizing attacks that are very similar to previous incidents, but it is unable to prevent new attack infrastructure and techniques that the system has never seen before.

Whatever you call it, this system is still fed the same historical attack data. For it to succeed, there must be a "patient zero", or first victim. "Pretraining" an AI on observed data in this way is known as supervised machine learning (ML). This method does have some clever applications in cybersecurity. For example, in threat investigation, supervised ML has been used to learn and mimic how a human analyst conducts investigations, asking questions, forming and revising hypotheses, and reaching conclusions, and can now carry out these investigations autonomously at speed and scale.

But what about tracking down the first traces of an attack? What about detecting the first indication that something is wrong?

The issue with utilising supervised ML in this area is that it is only as good as its historical training set; it cannot account for anything genuinely new. As a result, it must be constantly updated, and the update must be distributed to all customers. This method also necessitates sending the customer's data to a centralised data lake in the cloud to be processed and analysed. When an organisation becomes aware of a threat, it is frequently too late.

As a result, organisations suffer from a lack of tailored protection, a high number of false positives, and missed detections because this approach overlooks one critical factor: the context of the specific organisation it is tasked with protecting.

However, there is still hope for defenders in the war of algorithms. Today, thousands of organisations utilise a different application of AI in cyber defence, taking a fundamentally different approach to defending against the entire attack spectrum — including indiscriminate and known attacks, as well as targeted and unknown attacks.

Unsupervised machine learning involves the AI learning the organisation rather than training it on what an attack looks like. In this scenario, the AI learns its surroundings from the inside out, down to the smallest digital details, understanding "normal" for the specific digital environment in which it is deployed in order to identify what is not normal.
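
Here is a minimal sketch of that approach, using scikit-learn's IsolationForest as a stand-in for a production anomaly detector. The traffic features and numbers are invented, and real systems model far richer context than three features per device, but the workflow is the same: learn "normal" from the environment itself, then flag deviations without any attack signatures.

```python
# Unsupervised anomaly detection sketch: learn "normal" traffic for one device,
# then flag deviations. (Features and numbers are invented for illustration.)

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hourly baseline per device: [MB sent out, distinct peers contacted, failed logins]
normal_traffic = np.column_stack([
    rng.normal(50, 10, 500),      # ~50 MB outbound per hour
    rng.poisson(12, 500),         # ~12 distinct peers
    rng.poisson(0.2, 500),        # failed logins are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_observations = np.array([
    [48, 11, 0],                  # business as usual
    [900, 3, 40],                 # exfiltration-like burst plus login failures
])
print(detector.predict(new_observations))   # 1 = normal, -1 = anomaly
```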

This is AI that comprehends "you" in order to identify your adversary. It was once thought to be radical, but it now protects over 8,000 organisations worldwide by detecting, responding to, and even avoiding the most sophisticated cyberattacks.

Consider last year's widespread Hafnium attacks on Microsoft Exchange Servers. Darktrace's unsupervised ML identified and disrupted a series of new, unattributed campaigns in real time across many of its customer environments, with no prior threat intelligence associated with these attacks. Other organisations, on the other hand, were caught off guard and remained vulnerable to the threat until Microsoft revealed the attacks a few months later.

This is where unsupervised ML excels — autonomously detecting, investigating, and responding to advanced and previously unseen threats based on a unique understanding of the organization in question. Darktrace's AI research centre in Cambridge, UK, tested this AI technology against offensive AI prototypes. These prototypes, like ChatGPT, can create hyperrealistic and contextualised phishing emails and even choose a suitable sender to spoof and fire the emails.

The conclusions are clear: as attackers begin to weaponize AI for nefarious reasons, security teams will require AI to combat AI. Unsupervised machine learning will be critical because it learns on the fly, constructing a complex, evolving understanding of every user and device across the organisation. With this bird's-eye view of the digital business, unsupervised AI that recognises "you" will detect offensive AI as soon as it begins to manipulate data and will take appropriate action.

Offensive AI may be exploited for its speed, but defensive AI will also contribute to the arms race. In the war of algorithms, the right approach to ML could mean the difference between a strong security posture and disaster.

Prometheus Ransomware's Bugs Inspired Researchers to Try to Build a Near-universal Decryption Tool

 

Prometheus, a ransomware variant based on Thanos that locked up victims' computers in the summer of 2021, contained a major "vulnerability" that prompted IBM security researchers to attempt to create a one-size-fits-all ransomware decryptor that could work against numerous ransomware variants, including Prometheus, AtomSilo, LockFile, Bandana, Chaos, and PartyTicket. 

Although the IBM researchers were able to reverse the work of several ransomware variants, the panacea decryptor never materialised. According to Andy Piazza, IBM worldwide head of threat intelligence, the team's efforts showed that while some ransomware families can be reverse-engineered to produce a decryption tool, no organisation should rely on decryption alone as a response to a ransomware attack.

“Hope is not a strategy,” Piazza said at RSA Conference 2022, held in San Francisco in person for the first time in two years. 

Aaron Gdanski, who was assisted by security researcher Anne Jobman, stated he became interested in developing a Prometheus decryption tool when one of IBM Security's clients got infected with the ransomware. He started by attempting to comprehend the ransomware's behaviour: Did it persist in the environment? Did it upload any files? And, more particularly, how did it produce the keys required to encrypt files? 

Using the DS-5 debugger and disassembler, Gdanski discovered that Prometheus' encryption process relied on both "a hardcoded initialization vector that did not vary between samples" and the computer's uptime. He also found that Prometheus generated its seeds using a random number generator whose seed defaulted to Environment.TickCount, the system's uptime in milliseconds.

“If I could obtain the seed at the time of encryption, I could use the same algorithm Prometheus did to regenerate the key it uses,” Gdanski stated. 

Gdanski had a starting point to focus his investigation after obtaining the startup time of an afflicted system and the recorded timestamp on an encrypted file. After some further computation he derived a seed from Prometheus and tested it against sections of encrypted data; with some fine-tuning, the effort paid off. Gdanski also discovered that the seed changed based on when each file was encrypted. That meant a single decryption key would not work, but by sorting the encrypted files by their last write time on the system, he was able to gradually generate a series of seeds that could be used for decryption.

Gdanski believes the result might be applied to other ransomware families that rely on similar flawed random number generators. “Any time a non-cryptographically secure random number generator is used, you’re probably able to recreate a key,” Gdanski stated. 
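
The sketch below illustrates Gdanski's point in miniature. Python's built-in `random` module stands in for the flawed, non-cryptographic generator (Prometheus itself used a different RNG and cipher, so this is purely illustrative), and a known file header serves as the test plaintext: if the keystream is seeded from something as guessable as system uptime, an analyst who can bound that value can brute-force the seed and regenerate the key.

```python
# Recovering a key derived from a weak, time-seeded PRNG (illustrative only).

import random

MAGIC = b"%PDF"  # known plaintext: the expected header of an encrypted file (assumption)

def keystream(seed: int, length: int) -> bytes:
    rng = random.Random(seed)                     # stand-in for the flawed PRNG
    return bytes(rng.randrange(256) for _ in range(length))

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def recover_seed(ct_header: bytes, lo: int, hi: int):
    """Brute-force candidate seeds (e.g. uptime values around the file's write time)."""
    for seed in range(lo, hi):
        if xor(ct_header, keystream(seed, len(MAGIC))) == MAGIC:
            return seed
    return None

# Demo: "encrypt" with a secret uptime-derived seed, then recover it from a window.
secret_seed = 123_456_789
ciphertext = xor(MAGIC + b" rest of the file", keystream(secret_seed, 64))
print(recover_seed(ciphertext[:4], secret_seed - 10_000, secret_seed + 10_000))
# -> 123456789, which regenerates the same keystream and decrypts the file
```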

However, Gdanski stressed that this kind of flaw is unusual in his experience. As Piazza emphasised, the best protection against ransomware isn't hoping that the ransomware used in an attack is poorly implemented; it's preventing a ransomware attack before it happens.