
Ransomware Gangs Actively Recruiting Pen Testers: Insights from Cato Networks' Q3 2024 Report

 

Cybercriminals are increasingly targeting penetration testers to join ransomware affiliate programs such as Apos, Lynx, and Rabbit Hole, according to Cato Networks' Q3 2024 SASE Threat Report, published by its Cyber Threats Research Lab (CTRL).

The report highlights numerous Russian-language job advertisements uncovered through surveillance of discussions on the Russian Anonymous Marketplace (RAMP). Speaking at an event in Stuttgart, Germany, on November 12, Etay Maor, Chief Security Strategist at Cato Networks, explained: "Penetration testing is a term from the security side of things when we try to reach our own systems to see if there are any holes. Now, ransomware gangs are hiring people with the same level of expertise - not to secure systems, but to target systems."

He further noted, "There's a whole economy in the criminal underground just behind this area of ransomware."

The report details how ransomware operators aim to ensure the effectiveness of their attacks by recruiting skilled developers and testers. Maor emphasized the evolution of ransomware-as-a-service (RaaS), stating, "[Ransomware-as-a-service] is constantly evolving. I think they're going into much more details than before, especially in some of their recruitment."

Cato Networks' team discovered instances of ransomware tools being sold, such as locker source code priced at $45,000. Maor remarked: "The bar keeps going down in terms of how much it takes to be a criminal. In the past, cybercriminals may have needed to know how to program. Then in the early 2000s, you could buy viruses. Now you don't need to even buy them because [other cybercriminals] will do this for you."

AI's role in facilitating cybercrime was also noted as a factor lowering barriers to entry. The report flagged examples like a user under the name ‘eloncrypto’ offering a MAKOP ransomware builder, an offshoot of PHOBOS ransomware.

The report warns of the growing threat posed by Shadow AI—where organizations or employees use AI tools without proper governance. Of the AI applications monitored, Bodygram, Craiyon, Otter.ai, Writesonic, and Character.AI were among those flagged for security risks, primarily data privacy concerns.

Cato CTRL also identified critical gaps in Transport Layer Security (TLS) inspection. Only 45% of surveyed organizations utilized TLS inspection, and just 3% inspected all relevant sessions. This lapse allows attackers to leverage encrypted TLS traffic to evade detection.

In Q3 2024, Cato CTRL noted that 60% of CVE exploit attempts were blocked within TLS traffic. The most frequently targeted vulnerabilities included those in Log4j, SolarWinds, and ConnectWise products.

The report is based on the analysis of 1.46 trillion network flows across over 2,500 global customers between July and September 2024. It underscores the evolving tactics of ransomware gangs and the growing challenges organizations face in safeguarding their systems.

New SMTP Cracking Tool for 2024 Sold on Dark Web Sparks Email Security Alarm

 

A new method targeting SMTP (Simple Mail Transfer Protocol) servers, specifically updated for 2024, has surfaced for sale on the dark web, sparking significant concerns about email security and data privacy.

This cracking technique is engineered to bypass protective measures, enabling unauthorized access to email servers. Such breaches risk compromising personal, business, and government communications.

The availability of this tool showcases the growing sophistication of cybercriminals and their ability to exploit weaknesses in email defenses. Unauthorized access to SMTP servers not only exposes private correspondence but also facilitates phishing, spam campaigns, and cyber-espionage.

Experts caution that widespread use of this method could result in increased phishing attacks, credential theft, and malware distribution. "Organizations and individuals must prioritize strengthening email security protocols, implementing strong authentication, and closely monitoring for unusual server activity," they advise.

Mitigating these risks requires consistently applying security patches, enforcing multi-factor authentication, and using email encryption. The emergence of this dark web listing highlights the ongoing threats cybercriminals pose to critical communication systems.
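
As a small illustration of the authentication and encryption advice above, here is a minimal sketch using Python's standard smtplib; the host, port, and credentials are placeholders you would replace with your own, and real deployments should also layer on controls such as SPF, DKIM, and DMARC.

    import smtplib
    import ssl
    from email.message import EmailMessage

    # Placeholder values -- substitute your own mail server and account.
    SMTP_HOST = "mail.example.com"
    SMTP_PORT = 587                      # the authenticated submission port
    SMTP_USER = "alerts@example.com"
    SMTP_PASS = "use-an-app-specific-secret"

    msg = EmailMessage()
    msg["From"] = SMTP_USER
    msg["To"] = "admin@example.com"
    msg["Subject"] = "SMTP hardening test"
    msg.set_content("Sent over an authenticated, TLS-encrypted session.")

    context = ssl.create_default_context()    # verifies the server certificate
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=10) as server:
        server.starttls(context=context)      # upgrade the connection to TLS
        server.login(SMTP_USER, SMTP_PASS)    # relaying requires authentication
        server.send_message(msg)

The point of the sketch is the shape of the session: credentials are never sent in the clear, and an unauthenticated client cannot use the server as an open relay.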

As attackers continue to innovate, the cybersecurity community emphasizes vigilance and proactive defense strategies to safeguard sensitive information. This development underscores the urgent need for robust email security measures in the face of evolving cyber threats.

Hacker Claims to Publish Nokia Source Code

 

The Finnish telecoms equipment firm Nokia is looking into the suspected release of source code material on a criminal hacking site.

An attacker going by the handle "IntelBroker," who is also the proprietor of the current iteration of BreachForums, revealed on Thursday what he said was a cache of "Nokia-related source code" stolen from a third-party breach. The data consists of two folders: "nokia_admin1" and "nokia_etl_summary-data."

IntelBroker initially stated in a BreachForums post last week that he was selling the code, characterising it as a collection of "SSH keys, source code, RSA keys, Bitbucket logins, SMTP accounts, Webhooks, and hardcoded credentials."

A Nokia spokesperson stated that the company is "aware of reports that an unauthorised actor has alleged to have gained access to certain third-party contractor data, and possibly Nokia data," and that it "will continue to constantly watch the situation." On Tuesday last week, the hacker told Hackread that the data would cost $20,000.

IntelBroker told Bleeping Computer that the data came from a SonarQube server run by a third-party Nokia service provider. The hacker claimed to have gained access using a default password. SonarQube did not immediately reply to a request for comment.

In 2023, IntelBroker published data stolen from a health insurance marketplace used by members of Congress, their families, and staffers. Earlier this year, he sparked a probe at the Department of State by posting documents online that were purportedly stolen from government contractor Acuity.

Third-party breaches at major firms are becoming more common as companies improve their own cyber defences. Earlier this year, a slew of well-known brands, including AT&T, Ticketmaster, Santander Bank, automotive parts supplier Advance Auto Parts, and luxury retailer Neiman Marcus, were hit with breaches caused by a series of attacks on their accounts at cloud-based data warehousing platform Snowflake.

ZKP Emerges as a "Must-Have" Component of Blockchain Security

 

Zero-knowledge proof (ZKP) has emerged as a critical security component in Web3 and blockchain because it ensures data integrity and increases privacy. It accomplishes this by allowing verification without exposing data. ZKP is employed on cryptocurrency exchanges to validate transaction volumes or values while safeguarding the user's personal information.

In addition to ensuring privacy, it protects against fraud. Zero-knowledge cryptography, a class of algorithms that includes ZKP, enables complex interactions and strengthens blockchain security. Data is safeguarded from unauthorised access and modification while it moves through decentralised networks. 
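
To make the idea concrete, here is a minimal sketch of a Schnorr proof, one of the simplest zero-knowledge protocols. The prover convinces a verifier that it knows a secret x behind the public value y = g^x mod p without ever revealing x. The parameters are toy-sized for readability; real deployments use large, standardized groups.

    import hashlib
    import secrets

    # Toy group: p = 2q + 1 with q prime; g generates the order-q subgroup.
    p, q, g = 2039, 1019, 4

    def challenge(*values) -> int:
        """Fiat-Shamir: derive the verifier's challenge by hashing the transcript."""
        data = "|".join(str(v) for v in values).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    # Prover: knows secret x, publishes y = g^x mod p.
    x = secrets.randbelow(q)      # the secret (never transmitted)
    y = pow(g, x, p)              # public value

    k = secrets.randbelow(q)      # fresh one-time nonce
    r = pow(g, k, p)              # commitment
    e = challenge(g, y, r)        # verifier can recompute this from public data
    s = (k + e * x) % q           # response; reveals nothing about x by itself

    # Verifier: checks the proof (r, s) against the public y.
    assert pow(g, s, p) == (r * pow(y, e, p)) % p
    print("Proof accepted: the prover knows x, but x was never revealed.")

The verifier learns only that the equation holds, which the prover could not have arranged without knowing x; the secret itself never leaves the prover.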

Blockchain users are frequently asked to certify that they have sufficient funds to execute a transaction, but they may not want to disclose their whole balance. ZKP can verify that users meet the necessary requirements during KYC processes on cryptocurrency exchanges without requiring them to share their documents. Building on this, Holonym has introduced Human Keys to ensure security and privacy in Zero Trust situations.

Each person is given a unique key that they can use to unlock their security and privacy rights. It strengthens individual rights through robust decentralised protocols and configurable privacy. The privacy-preserving principle applies to several elements of Web3 data security. ZKP involves complex cryptographic validations, and any effort to change the data invalidates the proof. 

Trustless data processing eases smart contract developer work 

Smart contract developers are now working with their hands tied, limited to self-referential opcodes that cannot provide the information required to assess blockchain activities. To that end, the Space and Time platform's emphasis on enabling trustless, multichain data processing and strengthening smart contracts is worth mentioning, since it ultimately simplifies developers' work. 

Their SXT Chain, a ZKP data blockchain, is now live on testnet, combining decentralised data storage with blockchain verification. Conventional blockchains focus on transactions; SXT Chain, by contrast, allows advanced data querying and analysis while preserving data integrity through blockchain technology.

The first generation of DeFi introduced yield farming and platforms like Aave and Uniswap. The new generation includes tokenized real-world assets, blockchain lending with dynamic interest rates, cross-chain derivatives, and increasingly complex financial products.

To unlock Web3 use cases, a crypto-native, trustless query engine is required, one that enables more advanced DeFi by providing smart contracts with the necessary context. Space and Time is helping to offer one by building on Chainlink's aggregated data points with a SQL database, allowing smart contract authors to run SQL queries over any part of Ethereum's history.

Effective and fair regulatory model 

ZKP allows for selective disclosure, in which just the information that regulators require is revealed. Web3 projects comply with KYC and AML rules while protecting user privacy. ZKP even opens up the possibility of a tiered regulation mechanism based on existing privacy models. Observers can examine the ledger for unusual variations and report any suspect accounts or transactions to higher-level regulators. 

Higher-level regulators can then reveal particular transaction data. The process is supported by zk-SNARKs (Zero-Knowledge Succinct Non-interactive Arguments of Knowledge) and attribute-based encryption. These techniques use ZKP to ensure consistency between transaction and regulatory information, preventing the use of fake information to escape monitoring.

Additionally, ZK solutions let users withdraw funds in a matter of minutes, whereas optimistic rollups take approximately a week to finalise transactions and process withdrawals.

The Growing Concern Regarding Privacy in Connected Cars

 

Data collection and use raise serious privacy concerns, even though they can improve driving safety, efficiency, and the overall experience. The automotive industry's ability to collect, analyse, and exchange such data outpaces the legislative frameworks intended to protect individuals. In numerous cases, car owners have no insight into, or control over, how their data is used, let alone how it is shared with third parties.

The FIA European Bureau feels it is time to face these challenges head-on. As advocates for drivers' and car owners' rights, we are calling for clearer, more transparent policies that restore individuals' control over their data. This is why, in partnership with Privacy4Cars, we are hosting an event called "Driving Data Rights: Enhancing Privacy and Control in Connected Cars" on November 19th in Brussels. The event will bring together policymakers, industry executives, and civil society to explore current gaps in legislation and industry practices, as well as how we can secure stronger data protection for all.

Balancing innovation with privacy 

A recent Privacy4Cars report identifies alarming industry patterns, demonstrating that many organisations are not fully compliant with the GDPR. Data transparency, security, and consent practices are often lacking, exposing consumers to data misuse. These findings highlight the critical need for reforms that give individuals more control over their data while ensuring that privacy is not sacrificed in the name of innovation.

The benefits of connected vehicle data are apparent. Data has the potential to transform the automotive industry in a variety of ways, including improved road safety, predictive maintenance, and enhanced driving experiences. However, this should not come at the expense of individual privacy rights.

As the automobile sector evolves, authorities and industry stakeholders must strike the right balance between innovation and privacy protection. Stronger enforcement of existing regulations, as well as the creation of new frameworks suited to the unique needs of connected vehicles, is required. Car owners should have a say in how their data is used and be confident that it is managed responsibly.

Shaping the future of data privacy in cars 

The forthcoming event on November 19th will provide an opportunity to dig deeper into these concerns. Key stakeholders from the European Commission, the automotive industry, and privacy experts will meet to discuss the present legal landscape and what more can be done to protect individuals in this fast-changing environment.

The agenda includes presentations from Privacy4Cars on the most recent findings on automotive privacy practices, a panel discussion with automotive industry experts, and case studies demonstrating real-world examples of data misuse and third-party access. 

Connected cars are the future of mobility, but that future must be founded on confidence and transparency. By giving individuals authority over their personal data, we can build a system that benefits everyone - drivers, manufacturers, and society as a whole. The FIA European Bureau is committed to collaborating with all parties to make this happen.

Balancing Act: Russia's New Data Decree and the Privacy Dilemma

Data Privacy and State Access

Russia's Ministry of Digital Development, Communications, and Mass Media has introduced a draft decree specifying the conditions under which authorities can access staff and customer data from businesses operating in Russia, according to Forbes.

The decree would authorize authorities to demand anonymized personal data of customers and employees from businesses in order to protect the population during emergencies, prevent terrorism, and control the spread of infectious diseases, as well as for economic and social research purposes.

The Proposed Decree

Expected to take effect in September 2025, this draft decree follows amendments to the law On Personal Data, adopted on August 8. This law established a State Information System, requiring businesses and state agencies to upload the personal data of their staff and customers upon request.

The Big Data Association, a nonprofit that includes major Russian companies like Yandex, VK, and Gazprombank, has expressed concerns that the draft decree would permit authorities to request personal data from businesses "for virtually any reason." They warned that this could create legal uncertainties and impose excessive regulatory burdens on companies processing personal data, affecting nearly all businesses and organizations.

Global Context: A Tightrope Walk

Russia is not alone in its quest for greater access to personal data. Countries around the world are grappling with similar issues. For instance, the United States has its own set of laws and regulations under the Patriot Act and subsequent legislation that allows the government to access personal data under certain conditions. Similarly, the European Union’s General Data Protection Regulation (GDPR) provides a framework for data access while aiming to protect individual privacy.

Each country’s approach reflects its unique political, social, and cultural context. However, the core issue remains: finding the right balance between state access and individual privacy.

Ethical and Social Implications

The debate over state access to personal data is not purely legal or political; it is deeply ethical and social. Enhanced state access can lead to improved public safety and national security. For example, during a health crisis like the COVID-19 pandemic, having access to personal data can help in effective contact tracing and monitoring the spread of the virus.

New Tool Circumvents Google Chrome's New Cookie Encryption System

 

A researcher has developed a tool that bypasses Google's new App-Bound encryption cookie-theft defences and extracts saved passwords from the Chrome browser. 

Alexander Hagenah, a cybersecurity researcher, published the tool, 'Chrome-App-Bound-Encryption-Decryption,' after noticing that others had previously identified equivalent bypasses. 

Although the tool delivers what several infostealer operations have already done with their malware, its public availability increases the risk for Chrome users who continue to store sensitive information in their browsers. 

Google launched Application-Bound (App-Bound) encryption in July (Chrome 127) as a new security feature that encrypts cookies using a Windows process with SYSTEM rights. 

The goal was to safeguard sensitive data against infostealer malware, which operates with the logged-in user's access, making it impossible to decrypt stolen cookies without first achieving SYSTEM privileges and potentially setting off security software alarms.

"Because the App-Bound service is running with system privileges, attackers need to do more than just coax a user into running a malicious app," noted Google in July. "Now, the malware has to gain system privileges, or inject code into Chrome, something that legitimate software shouldn't be doing.” 

However, by September, several infostealer developers had discovered ways to circumvent the new security feature, allowing their cybercriminal customers to once again siphon and decrypt sensitive data from Google Chrome.

Google previously stated that the "cat and mouse" game between infostealer developers and its engineers was to be expected, and that it never assumed its defence measures would be impenetrable. Instead, it believed that by introducing App-Bound encryption, it could finally lay the groundwork for progressively building a more robust system. Below is Google's response from the time:

"We are aware of the disruption that this new defense has caused to the infostealer landscape and, as we stated in the blog, we expect this protection to cause a shift in attacker behavior to more observable techniques such as injection or memory scraping. This matches the new behavior we have seen. 

We continue to work with OS and AV vendors to try and more reliably detect these new types of attacks, as well as continuing to iterate on hardening defenses to improve protection against infostealers for our users.”

Microsoft Introduces AI Solution for Erasing Ex from Memories

 
Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally disturbed woman who turns to artificial intelligence as she tries to erase her past. The movie clearly centres on data and privacy, but humans are social animals: they need someone to listen to them, guide them, or simply be there as they go through life. Mustafa Suleyman, the CEO of Microsoft AI, spoke about this recently in a CNBC interview.

During the interview, Suleyman explained that the company is engineering AI companions to watch "what we are doing and to remember what we are doing," creating a close relationship between AI and humans. Microsoft, OpenAI, and Google have all announced such AI assistants for the workplace.

Microsoft CEO Satya Nadella has announced that Windows will launch a new feature called Recall. Recall performs semantic search rather than simple keyword search: it digs deep into users' digital history to recreate moments from the past, tracing them back to when they happened. Microsoft's AI CEO, Mustafa Suleyman, also announced that Copilot, the company's artificial intelligence assistant, has been redesigned.

The revamped Copilot shares Suleyman's vision of an AI companion that will revolutionize the way users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he calls a "technological paradigm shift."

Copilot has been redesigned to create an AI experience that is more personalized and supportive, similar to Inflection AI's Pi product, which adapts to users' requirements over time. The Wall Street Journal reported that Nadella explained in an interview that "Recall is not just about documents."

A sophisticated AI model embedded directly on the device takes screenshots of users' activity and feeds the collected data into an on-device database for analysis. Using neural processing technology, all images and interactions become searchable, even the images themselves. The feature has drawn criticism, with Elon Musk warning in a characteristic post that it is akin to an episode of Black Mirror and saying he would be turning the 'feature' off.
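
To illustrate the difference between keyword search and the semantic search described above, here is a minimal sketch using the open-source sentence-transformers library; the activity snippets and the query are invented placeholders, not Recall's actual data format or API.

    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Invented stand-ins for the activity text a tool like Recall might index.
    history = [
        "Edited the quarterly budget spreadsheet in Excel",
        "Video call with the design team about the new logo",
        "Browsed flight prices for the Berlin conference",
    ]
    query = "when did I look up travel costs?"   # shares no keywords with the hits

    doc_vecs = model.encode(history, convert_to_tensor=True)
    query_vec = model.encode(query, convert_to_tensor=True)

    scores = util.cos_sim(query_vec, doc_vecs)[0]   # cosine similarity per snippet
    print(history[int(scores.argmax())])            # -> the flight-price snippet

A keyword search for "travel costs" would find nothing here; an embedding model maps "flight prices" and "travel costs" close together, which is what lets Recall-style tools resurface moments from a user's history.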

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 
Recent reports published by Deloitte, highlighting the significance of data privacy as it pertains to Generative Artificial Intelligence (GenAI), have been widely cited. Technology is advancing at an exponential rate, and with it comes a growing awareness of its potential risks. The survey found a sharp increase in professionals' concerns about data privacy in connection with generative AI across several industries: only 22% ranked it among their top three concerns at the beginning of 2023, rising to 72% in 2024.

Professionals also expressed strong concern about data provenance and transparency, with 47% and 40% respectively ranking them among their top three ethical GenAI concerns for this year. The proportion of respondents concerned about job displacement, however, was only 16%. Staff are increasingly curious about how AI technology operates, especially when it handles sensitive data.

Almost half of the security professionals surveyed by HackerOne in September believe AI is risky, with many fearing that leaks of training data threaten their networks' security. Notably, 78% of business leaders ranked "safe and secure" among their top three ethical technology principles, a 37% increase from 2023, underscoring the importance of security to businesses today.

The findings come from Deloitte's 2024 "State of Ethics and Trust in Technology" report, which surveyed over 1,800 business and technical professionals worldwide about the ethical principles they apply to technological processes and, specifically, to their use of GenAI. As technology leaders guide the adoption of generative AI, they must carefully examine the talent needs of their organizations, and ethical considerations belong on that checklist as well.

The report highlights the effectiveness of GenAI in eliminating the "expertise barrier": more people will be able to make use of their data easily and cost-effectively, according to Sachin Kulkarni, managing director of risk and brand protection at Deloitte. There may be a benefit to this, though it also brings an increased risk of data leaks.

Professionals have also expressed concern about the effects of generative AI on transparency, data provenance, intellectual property ownership, and hallucinations. Job displacement, though often discussed, was cited as a top concern by only 16% of respondents. In their assessment of emerging technology categories, business and IT professionals concluded that cognitive technologies - which include large language models, artificial intelligence, neural networks, and generative AI - pose the greatest ethical challenges.

This category ranked well above other technology verticals, including virtual reality, autonomous vehicles, and robotics. At the same time, respondents considered cognitive technologies the most likely to bring about social good in the future. A Flexential survey published earlier this month similarly found that many executives, given the heavy reliance on data, are concerned that generative AI tools can increase cybersecurity risks by extending their attack surface.

Deloitte's annual report also found that the percentage of professionals reporting internal use of GenAI grew by 20% year over year. 94% of respondents said they had incorporated it into their organization's processes in some way, though most indicated that these technologies are still in the pilot phase or limited in their usage, with only 12% saying they are used extensively.

Gartner research published last year also found that about 80% of GenAI projects fail to make it past proof-of-concept due to a lack of resources. In Europe, the recent EU Artificial Intelligence Act has had an impact: 34% of European respondents reported that their organizations had taken action over the past year to adapt their use of AI to the Act's requirements.

The survey suggests the Act's impact is more widespread, however, with 26% of respondents from the South Asian region and 16% from North and South America adjusting their organizations' use of AI because of it. The survey also revealed that 20% of U.S.-based respondents had altered the way their organization operates as a result of the U.S. executive order on AI, as had 25% of South Asian, 21% of South American, and 12% of European respondents.

"Cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse," the report's authors explain. The accelerated adoption of GenAI is outpacing organizations' capacity to govern it effectively. GenAI tools can help businesses in a range of areas, from choosing which use cases to apply them to, to quality assurance, to implementing ethical standards.

Companies should prioritize both of these areas. Although artificial intelligence is widely used, organizations want to ensure its use does not get them into trouble, especially with legislation. 34% of respondents reported that regulatory compliance was their most important reason for implementing ethics policies and guidelines, while regulatory penalties topped the list of concerns about failing to comply.

A new piece of EU legislation, the Artificial Intelligence Act, entered into force on August 1. The Act is intended to ensure that artificial intelligence systems used in high-risk environments are safe, transparent, and ethical. Companies that fail to comply face financial penalties ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of global turnover, depending on the infringement.

Over a hundred companies, among them Amazon, Google, Microsoft, and OpenAI, have already signed the EU Artificial Intelligence Pact, volunteering to begin implementing the Act's requirements ahead of the legal deadlines. These actions demonstrate a commitment to the responsible implementation of artificial intelligence and help the signatories avoid future legal challenges.

The United States issued a similar executive order in October 2023, with broad guidelines regarding the protection of military, civil, and personal privacy and the security of government agencies, while fostering AI innovation and competition across the country. Although it is not a law, many companies operating in the U.S. have made policy changes to keep ahead of regulatory change and meet public expectations regarding AI privacy and security.

Ethics and Tech: Data Privacy Concerns Around Generative AI

The tech industry is embracing Generative AI, but the conversation around data privacy has become increasingly important. The recent “State of Ethics and Trust in Technology” report by Deloitte highlights the pressing ethical considerations that accompany the rapid adoption of these technologies. 30% of organizations have adjusted new AI projects, and 25% have modified existing ones in response to the AI Act, the report mentions.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse; a minimal pseudonymization sketch follows this list.

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.
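
As promised under Data Collection and Usage above, here is a minimal keyed-pseudonymization sketch; the record format and field names are invented for the example. Note that pseudonymized data is still personal data under the GDPR, so this is a risk-reduction step, not full anonymization.

    import hmac
    import hashlib

    SECRET_KEY = b"store-me-in-a-key-vault"   # placeholder; manage via a KMS

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a stable, keyed pseudonym."""
        return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

    records = [
        {"email": "alice@example.com", "prompt": "summarize my meeting notes"},
        {"email": "alice@example.com", "prompt": "draft a budget email"},
    ]

    # Drop the raw identifier before the data reaches a training pipeline;
    # the keyed pseudonym still lets records be grouped per user.
    safe_records = [
        {"user_id": pseudonymize(r["email"]), "prompt": r["prompt"]}
        for r in records
    ]
    print(safe_records)

Because the key is held outside the dataset, an attacker who obtains only the records cannot reverse the pseudonyms by hashing guesses of common email addresses.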

Ethical Considerations and Trust

80% of respondents are required to complete mandatory technology ethics training, marking a 7% increase since 2022.  Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.

Complexity: Research Offers Solution for Healthcare Security Amid Rising Cyberattacks

In May, Ascension, a healthcare provider with a network of 140 hospitals across the U.S., suffered a major cyber-attack that disrupted its clinical operations for almost a month. Experts traced the problem to ransomware that had compromised an employee's computer.

Healthcare: Juicy Target for Criminals

Threat actors see healthcare systems as lucrative targets for cybercrime because they hold crucial financial, health, and personal data. A 2023 survey of health and IT professionals revealed that 88% of organizations had suffered an average of around 40 attacks in the past year.

Complexity: Flaw in IT System

One major flaw is the rise of complexity in IT systems, says Hüseyin Tanriverdi, associate professor of information, risk, and operations management at Texas McCombs. He attributes it to years of mergers and acquisitions that have created large-scale multi-hospital systems.

After mergers, healthcare providers often fail to standardize their technology and security operations, which creates major complexity in the resulting health systems: different IT systems, different care processes, and different command structures.

But his new research shows complexity can also offer solutions to these issues. "A good kind of complexity," Tanriverdi believes, can support communication across different systems, governance structures, and care processes, and help combat cyber incidents.

Understanding the Complex vs. Complicated

The research team distinguished two similar-sounding IT terms that bear on the problem. In "complicatedness," an abundance of elements interconnect in a system to share information in structured ways. "Complexity," by contrast, arises when many elements interconnect to share information in unstructured ways, as when systems are integrated after a merger or acquisition.

Tanriverdi believes complicated systems are preferable: although they are difficult, their structure makes them controllable. Complex systems, being unstructured networks, are not. His research found that healthcare systems became more vulnerable as they grew more complex; the most complex were 29% more likely to be hit than average.

Solution for Better Healthcare Security

Complex systems offer hackers more data transfer points to attack and carry a higher risk of human error, compounding the problem.

The solution lies in a centralized approach to handling data. "With fewer access points and simplified and hardened cybersecurity controls, unauthorized parties are less likely to gain unauthorized access to patient data," says Tanriverdi. "Technology reduces cybersecurity risks if it is organized and governed well."

Construction Firms Targeted in Brute Force Assaults on Accounting Software

 

Unidentified hackers have targeted construction firms using Foundation accounting software, security experts revealed earlier this week. 

According to cybersecurity firm Huntress, the hackers hunt for publicly available Foundation installations on the internet and then test combinations of default usernames and passwords that allow for administrative access.

Huntress said it has detected active breaches at organisations in the plumbing, concrete, and heating, ventilation, and air conditioning (HVAC) industries. The researchers did not specify whether the attacks were successful or what their purpose was.

Foundation Software, the platform's Ohio-based developer, stated that it was working with Huntress to clarify some of the report's information. 

“The event potentially impacted a small subset of on-premise FOUNDATION users. It did not at all impact the bulk of our accounting users, which are under our secure, cloud-based [software-as-a-service] offering. It also did not impact our internal systems or any of our other product offerings through our subsidiary companies,” Foundation stated. 

The Huntress analysts stated they noticed the malicious behaviour targeting Foundation last week. On one host, the researchers discovered approximately 35,000 brute-force login attempts against the Microsoft SQL Server (MSSQL) used by the organisation to manage its database operations. 

Typically, such databases are kept private and secured behind a firewall or virtual private network (VPN), but Foundation "features connectivity and access by a mobile app," researchers noted. This means that a specific TCP port may be left open to the public internet, allowing direct access to the Microsoft SQL database.
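
For administrators, a quick self-check is to test whether the database port answers from outside the network; 1433 is Microsoft SQL Server's default port, though a given installation may expose a different one. The sketch below uses only Python's standard library, and the address is a placeholder for a server you operate yourself.

    import socket

    HOST = "203.0.113.10"   # placeholder: the public IP of your own server
    PORT = 1433             # MSSQL default; adjust to your installation

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(3)
    reachable = sock.connect_ex((HOST, PORT)) == 0   # 0 means the port answered
    sock.close()

    if reachable:
        print(f"{HOST}:{PORT} is reachable -- move it behind a firewall or VPN")
    else:
        print(f"{HOST}:{PORT} is not reachable from here")

Run from a machine outside the corporate network, this distinguishes a properly firewalled database from one exposed to exactly the kind of brute-force traffic Huntress observed.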

According to the report, Foundation users often used default, easy-to-guess passwords to protect high-privilege database accounts.

“As a result of not following recommendations and security best practices that were provided (one example being not resetting the default credentials), this small subset of on-premise users might face possible vulnerabilities,” Foundation noted. “We have been communicating and providing technical support to these users to mitigate this.” 

Huntress stated it detected roughly 500 hosts running the Foundation software, 33 of which were publicly exposed with unchanged default credentials.

“In addition to notifying those where we saw suspicious activity, we also sent out a precautionary advisory notification to any of our customers and partners who have the FOUNDATION software in their environment,” Huntress concluded.

Here's How to Remove Malware From Your Chromebook

 

Imagine this: your Chromebook crashes just before you click "Save" after you have spent hours working on a project. Or you sit down to watch a series, but the app keeps crashing, making it impossible to enjoy your favourite programme. If these situations sound familiar, malware may have infected your Chromebook.

Malware on your Chromebook can have serious consequences, such as compromising your financial information, costing you hours of work, and exposing personal information. It is imperative that you act quickly if you think your Chromebook is infected.

In this article, we'll walk you through identifying whether your Chromebook is infected and the simplest method for removing malware: reputable antivirus software. We'll also cover key precautions to protect your Chromebook from future malware threats.

Can malware infect Chromebooks?

As Chromebooks become more popular, fraudsters hunt for new ways to infect them and steal sensitive information for financial gain or identity theft. And, while Google's sophisticated ecosystem actively protects its users, no system is completely immune to cyber-attacks. 

Viruses, for example, are a common form of malware that adds malicious code to otherwise normal downloads. They activate when you download a malicious file, and they can also download and install automatically if you click on a malicious link. Once a virus is installed on your system, it can cause damage and prevent you from using your device or network.

The positive news is that it is nearly impossible to be infected by a true virus on Chrome OS. Because the operating system does not allow the installation of arbitrary executable software, it is one of the most secure operating systems available today.

The bad news is that Chromebooks are still vulnerable to some forms of malware, such as search hijackers (search redirection), malicious browser extensions, adware, spyware, phishing schemes, and downloads from unverified websites. 

Prevention tips

Chromebooks are vulnerable to several forms of malware, even though viruses rarely affect them, as mentioned above. Google recommends the following best practices to maintain a secure Chromebook experience: 

Stay updated: Keep your Chrome OS and applications up to date. Regular updates often have critical security patches. 

Use caution with extensions and apps: Read reviews and only use reliable browser extensions and apps from the Chrome Web Store or Google Play. 

Avoid phishing scams: Exercise caution while accessing suspicious websites or emails that ask for personal information. 

Consider security software: Although Chromebooks have built-in security safeguards, adding an extra layer of protection with reputable security software can provide additional peace of mind. 

As Chromebooks gain popularity as a low-cost and efficient alternative to traditional laptops, it is critical to understand their risks, particularly those related to malware. Chrome OS, with its web-based applications and regular updates, offers strong security, but it is still vulnerable to different types of malware such as search hijackers, adware, and spyware.

Ford’s Latest Patent: A Step Toward High-Tech Advertising or Privacy Invasion?


Among the patents filed recently is one from Ford for a system that gathers driver data to personalise in-car advertisements, which raises serious privacy concerns. The system could collect information ranging from a car's GPS location to driving habits and even conversations inside the vehicle. It aims to deliver targeted ads in real time, which has alarmed privacy advocates over the level of surveillance it would introduce.

While Ford notes that patenting something does not equate to actually implementing it, the idea raises red flags. It shines a light on the dangers of gathering vast amounts of data and on the privacy implications of targeting consumers at the wheel.

What Does Ford's Patent Explain?

The patent explains the way in which information would be gathered and used by the system for delivering specific ads:

1. GPS Location: This would identify where the car is and select advertisements based on nearby shops. If a driver is close to a fast-food restaurant, for example, they may see an ad for that chain on the car's infotainment system.

2. Driving Situations: Ads could also be targeted based on traffic conditions and driving speed. When a driver is stuck in heavy traffic, for example, ads might be displayed for entertainment options like audiobooks or podcasts.

3. Historical Data: Past behaviour, such as places previously visited or preferred music, could also be used for targeting.

4. In-Car Dialogue: The most contentious part of the patent is that the system would listen to conversations inside the car, whether between passengers or family members. If occupants are discussing grocery shopping, the system could point out nearby supermarkets.

Such data collection, particularly the dialogues, has been widely criticised as overly intrusive and a serious concern for privacy.

Privacy Concerns and a Backlash

Many privacy advocates view this patent as a threat. Recording in-car conversations, even for the purpose of delivering ads, is a serious violation of privacy. Monitoring at such a level, critics argue, could enable manipulation through advertising and raises further worries about how the data would be used and protected.

"It's getting a little too intimate," says Daryl Killian, an automotive influencer discussing the issue. "We're so used to stuff popping up on our devices based on what we're doing online. For a car to be listening and sharing conversations is a bit much. It will send away those consumers who don't like the fact that companies collect this much data already."

There are also safety concerns: too many advertisements during driving can divert the driver's focus, particularly in congested conditions.

Ford's Position and Broader Industry Trends

Ford has clarified that, for the company, patenting is routine practice and does not mean the technology will be developed. It says the patent is part of exploring new ideas and should not be misconstrued as a sign of imminent implementation.

Ford has dabbled in personalised advertising before, with a technology that would display digital variants of roadside signs on a car's windshield as drivers pass by. And it is not alone: General Motors and others have experimented with similar technology, suggesting an industry-wide shift toward data-driven, personalised in-car experiences.

The Dynamic Between Innovation and Privacy

While exciting, with great potential in applications such as tailored navigation or real-time traffic updates, personalised in-car technology should be balanced with strong privacy protections. The ability for drivers to opt out of data collection and advertising is crucial to maintaining user trust.

There are several concerns that must be grappled with as this technology continues to evolve:

1. Transparency: Drivers should be told what data is being collected and for what purpose. There must be clear options for users to control or opt out of such data collection.

2. Data Security: As more personal data is collected, robust security measures are crucial to protect against unauthorised access or breaches.

3. Regulatory Oversight: Governments may have to develop clearer regulations about how driver data is collected, used, and secured in order to better protect consumer privacy.

Essentially, while such innovations promise convenience through personalised advertising, it is just as important to balance them with the necessary privacy protections. Car manufacturers will have to ensure that new technologies improve the driving experience without eroding user trust.


TrickMo Android Trojan Abuses Accessibility Services for On-Device Financial Scam

 

Cybersecurity experts discovered a new form of the TrickMo banking trojan, which now includes advanced evasion strategies and the ability to create fraudulent login screens and steal banking credentials. 

This sophisticated malware employs malicious ZIP files and JSONPacker to obstruct analysis and detection efforts. TrickMo, discovered by CERT-Bund in September 2019, has a history of targeting Android smartphones, with a special focus on German users, in order to acquire one-time passwords (OTPs) and other two-factor authentication (2FA) credentials for financial fraud. The trojan is believed to be the work of the now-defunct TrickBot e-crime gang, which is known for constantly enhancing its obfuscation and anti-analysis features. 

Screen recording, keystroke logging, SMS and photo harvesting, remote control for on-device fraud, and abuse of Android's accessibility services API for HTML overlay attacks and device gestures are among the main capabilities of this TrickMo variant. In addition, the malware can automatically grant itself permissions, manipulate notifications to steal or conceal login codes, and intercept SMS messages.

A malicious dropper app that mimics the Google Chrome web browser is used to spread the malware. Upon installation, users are prompted to update Google Play Services. If the user agrees, an APK containing the TrickMo payload is downloaded and installed under the guise of "Google Services." The user is then prompted to grant the app accessibility permissions, which hands it extensive control over the device.

TrickMo can use accessibility services to disable critical security features, stop system upgrades, and hinder app uninstallation. Misconfigurations in the malware's command-and-control (C2) server made 12 GB of sensitive data, including credentials and photos, available without authentication. 

This exposed data can be exploited by other threat actors for identity theft, unauthorised account access, financial transfers, and fraudulent transactions. The breakdown highlights a severe operational security failure by the threat actors and increases the risk to victims, since the exposed private data can be used to craft convincing phishing emails that lead to further disclosure or malicious acts.

Hacktivism: How Hacktivists are Using Digital Activism to Fight for Justice

What is Hacktivism?

Hacktivism, a blend of hacking and activism, has become a major threat in the digital landscape. Hacktivists are driven by political, religious, and social aims; they use different strategies to achieve their goals, and their primary targets include oppressive institutions and governments.

Hacktivists are known for using their technical expertise to drive change and have diverse aspirations, from free speech advocacy and protesting human rights violations to anti-censorship and religious discrimination. 

Data Leaks, Web Defacements, and DDoS Attacks

A recent report by CYFIRMA reveals that hacktivists see themselves as digital activists working for the cause of justice, attacking organizations they believe should be held responsible for malpractice. "Operation 'Hamsaupdate' has been active since early December 2023, where the hacktivist group Handala has been using phishing campaigns to gain access to Israel-based organizations. After breaching the systems, they deploy wipers to destroy data and cause significant disruption."

While some groups target local, regional, or national issues, others are involved in larger campaigns spanning multiple nations and continents.

DDoS Attacks

A common hacktivist tactic is the DDoS attack. These attacks flood websites with traffic, overwhelming servers and making sites inaccessible. Hacktivists employ diverse DDoS tools, from botnet services to web-based IP stressers, to attack different layers of the OSI (Open Systems Interconnection) model.

Web Defacement Attacks

In web defacement, hacktivists modify a website's content to display ideological or political messages. The motive is to humiliate the site's owners and spread the message to a larger audience.

Hacktivists can easily deface websites by exploiting flaws like SQL injection or cross-site scripting.

Data Leaks

Hacktivists also indulge in data leaks, where they steal sensitive data and leak it publicly. This includes personal info, confidential corporate data, or government documents. The aim here is to expose corruption or wrongdoings and hold the accused responsible in the eyes of the public.

Geopolitical Motives

Hacktivist campaigns are sometimes driven by geopolitical tensions, racial conflicts, and religious battles. The hacktivists are sometimes involved in #OP operations, the CYFIRMA report mentions. 

For instance, “#OpIndia is a popular hashtag, used by hacktivist groups from countries such as Pakistan, Bangladesh, Indonesia, Turkey, Morocco, and other Muslim-majority countries (as well as Sweden) that engage in DDoS attacks or deface Indian websites, and target government, individuals, or educational institutions.”

Novel Android Malware Employs OCR to Steal Crypto Wallet Keys From Images

 

A novel mobile malware operation dubbed SpyAgent has surfaced targeting Android device users in South Korea. According to an investigation by McAfee Labs researcher SangRyol Ryu, the malware "targets mnemonic keys by scanning for images on your device that might contain them," and it has expanded its targeting footprint to include the UK.

The campaign uses fake Android apps to deceive users into installing them. These apps seem like real banking, government, streaming, and utility apps. As many as 280 fake apps have been uncovered since the start of the year.

It all begins with SMS messages containing booby-trapped links that direct users to download the apps as APK files hosted on fraudulent websites. Once installed, the apps request intrusive permissions to extract data from the device.

The most prominent feature is the malware's ability to employ optical character recognition (OCR) to steal mnemonic keys - the recovery or seed phrases that allow users to restore access to their cryptocurrency wallets. Unauthorised access to these mnemonic keys could allow attackers to take control of victims' wallets and drain all of the funds stored in them.

According to McAfee Labs, the command-and-control (C2) infrastructure had major security flaws that permitted unauthorised access to the site's root directory as well as the exposure of victim data. 

The server also has an administrator panel, which serves as a one-stop shop for remotely controlling the infected devices. The appearance of an Apple iPhone running iOS 15.8.2 with the system language set to Simplified Chinese ("zh") in the panel indicates that it may also target iOS users. 

"Originally, the malware communicated with its command-and-control (C2) server via simple HTTP requests," the researchers explained. "While this method was effective, it was also relatively easy for security tools to track and block." "In a significant tactical shift, the malware has now adopted WebSocket connections for its communications. This upgrade allows for more efficient, real-time, two-way interactions with the C2 server and helps it avoid detection by traditional HTTP-based network monitoring tools.” 

The finding comes a little more than a month after Group-IB disclosed another Android remote access trojan (RAT) known as CraxsRAT, which has been targeting Malaysian banking users since at least February 2024 via phishing websites. It's worth noting that CraxsRAT campaigns had already been found targeting Singapore by April 2023.

Here's How to Safeguard Yourself Against Phone Scams

 

Sophisticated phone scams are becoming more common and more relentless. The numbers are mind-boggling. According to the FTC, impostor fraudsters cost US consumers $2.7 billion in 2023, and the figure is rising year after year. 

These are merely the reported losses; many people who have been duped are embarrassed and never admit they fell for such a scam. You may believe that you cannot be misled, yet many victims thought the same before it happened to them.

Scammers have refined their strategies to sound trustworthy and legitimate, and AI is only making matters worse. Combined with stress or circumstance, it can take just a few moments to fall for one.

The best defence against phone scams is to be prepared to face them, as they are likely to occur at some point. We've compiled a list of some of the most popular phone scams in 2024 and how to prevent them.

AI-powered scams

The most obvious example of fraudsters exploiting new technology to power existing scams is artificial intelligence (AI). For instance, scammers might use AI to: 

  • Generate more convincing and genuine sounding phishing emails and text messages. 
  • Create deepfakes of celebrities to lure victims into thinking they're investing in a good company or project.
  • Impersonate an employer and ask for private information. 

Student loan forgiveness scams 

The back-and-forth adjustments to student loan forgiveness create an ideal scenario for scammers. Fraudsters know that individuals want to believe their student loans will be forgiven, and they exploit this hope for personal gain.

For example, scammers may call you or set up fake application sites to steal your Social Security number or bank account information. They may put pressure on their victims by sending bogus urgent messages encouraging them to seek debt relief "before it's too late." Then they will charge you a high application fee. In reality, this is a scam.

Zelle scams

Scammers are using Zelle, a peer-to-peer payment tool, to steal people's money. The fraudster might email, text, or contact you, claiming to work for your bank or credit union's fraud department. They'll claim that a thief intended to steal your money via Zelle and that they need to walk you through "fixing" the issue. 

Subsequently, fraudsters may advise you to pay the money to yourself, but the funds will actually go to their account. Starting in mid-2023, Zelle began refunding victims of some frauds. However, you may not always be eligible for reimbursement, so be aware of these financial frauds. 

Prevention tips 

Avoid clicking on unknown links: Whether the link arrives in your email, a text or a direct message, never click on it unless you're certain the sender has good intentions. If the message says it's from a company or government agency, call the firm using a number that you look up on your own to confirm its legitimacy. 

Be skeptical: Scammers can spoof calls and emails to appear to be from a number of sources, including government institutions, charities, banks, and major companies. Do not provide any personal information, usernames, passwords, or one-time codes that others could use to gain access to your accounts or steal your identity. 

Don't refund or forward overpayments: Beware whenever a company or person asks you to refund or forward part of a payment. Often, the original payment will be fraudulent and reversed later.

Following simple safety precautions and reviewing the most recent scam alerts can help you stay safe. However, mistakes can happen, especially when you are stressed or overwhelmed.