
Misconfigured Access Controls in NetSuite Stores Cause Major Data Breach

 


Hackers have recently been exploiting a critical vulnerability in Microsoft's applications for macOS. Attackers are believed to have abused flaws in popular apps such as Microsoft Outlook and Teams to spy on Mac users. In recent weeks, researchers at Cisco Talos, Cisco's division focused on malware and system vulnerabilities, have shown how attackers can leverage this weakness to gain access to sensitive components such as a Mac's microphone and camera without the user's consent or knowledge.

Separately, researchers have found that several thousand Oracle NetSuite customers are inadvertently exposing sensitive company information to unauthenticated users through public-facing stores built with NetSuite SuiteCommerce or NetSuite Site Builder. The exposure appears to stem from a poor understanding of the access controls on custom record types in this widely used SaaS enterprise resource planning (ERP) platform.

NetSuite is a popular SaaS ERP platform whose SuiteCommerce and Site Builder products are used to build and deploy online retail stores that serve external customers. Because these web stores are hosted on subdomains of the NetSuite tenant, unauthenticated shoppers can browse, register, and make purchases directly from those sites.

This is not a problem with the NetSuite solution itself; it is a problem with how some customers have configured access controls on custom record types (CRTs), which can leak sensitive customer information. The data most at risk is personally identifiable information (PII), including the full addresses and mobile phone numbers of registered customers. Threat actors tend to target CRTs configured with the "No Permission Required" access type.

That setting allows unauthenticated users to retrieve data through NetSuite's search and record APIs. There is one prerequisite for a successful attack, however: the attacker must know the names of the CRTs in use. Because of these misconfigured access controls on CRTs, hackers may be able to reach sensitive data through NetSuite's SuiteCommerce platform, according to Aaron Costello, Chief of SaaS Security Research at AppOmni.

To emphasize the point, the issue is not a security flaw in the NetSuite product; it is a potential data leak caused by customer misconfiguration. According to the report, affected e-commerce sites have exposed information about their registered customers, including addresses and mobile phone numbers. The Microsoft vulnerability, by contrast, stems from how the company's apps interact with macOS's Transparency, Consent, and Control (TCC) framework, which governs the permissions an application is granted.

Under TCC, apps must request specific entitlements to gain access to features such as the camera, microphone, or location services. An application without the necessary entitlements cannot even prompt for permission, effectively blocking unauthorized access to those resources. Cisco Talos discovered that attackers can inject malicious libraries into Microsoft's apps and then piggyback on the permissions already granted to those apps to execute malicious code.
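The entitlement gate described above can be checked from the outside: an app shipping with the `com.apple.security.cs.disable-library-validation` entitlement will load libraries not signed by its own team, which is the property library-injection techniques rely on. The sketch below is a hypothetical audit helper (the function name and sample plist are ours) for entitlements exported with `codesign -d --entitlements -` on macOS:

```python
import plistlib

# Entitlement that lets a signed app load libraries not signed by the
# same team -- the property the reported injection technique relies on.
RISKY_KEY = "com.apple.security.cs.disable-library-validation"

def flags_library_injection(entitlements_xml: bytes) -> bool:
    """True if the exported entitlements allow unsigned library loads."""
    entitlements = plistlib.loads(entitlements_xml)
    return bool(entitlements.get(RISKY_KEY, False))

# Sample plist shaped like codesign's XML entitlements output.
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>com.apple.security.device.camera</key><true/>
    <key>com.apple.security.cs.disable-library-validation</key><true/>
</dict>
</plist>"""

print(flags_library_injection(sample))  # True: camera access AND injectable
```

An app flagged by both checks, camera or microphone entitlements plus disabled library validation, is exactly the combination the researchers warned about.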

As a result, once an attacker modifies an app such as Microsoft Teams or Outlook to inject their code, they can also reach the Mac's camera and microphone, recording audio and taking photos without the user ever knowing. In the attack scenario outlined by AppOmni, meanwhile, an attacker exploits a NetSuite CRT whose table-level access control is set to "No Permission Required", which lets unauthenticated users pull its data through NetSuite's search and record APIs.

In short, a significant exposure exists in NetSuite stores due to an access control misconfiguration, and it has resulted in sensitive data being revealed. Several critical prerequisites must be met for the attack to succeed, however, most notably that the attacker has prior knowledge of the names of the custom record types in use.

To mitigate the risk, site administrators should tighten access controls on CRTs immediately. This includes setting sensitive fields to "None" for public access, thereby restricting unauthorized reads. Administrators should also consider temporarily taking affected sites offline to prevent further exposure while corrective measures are implemented.

The most straightforward and effective fix from a security perspective, Costello suggests, is to change the Access Type of the record type definition to either "Require Custom Record Entries Permission" or "Use Permission List". Either change significantly reduces the likelihood of unauthorized access to sensitive data.
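The recommended settings above lend themselves to a simple audit. The sketch below is hypothetical; real CRT metadata would come from SuiteScript or SuiteTalk exports rather than plain dicts, and the field names `scriptId` and `accessType` are ours. It flags any custom record type not using one of the two safe access types Costello recommends:

```python
# Hypothetical audit over exported custom record type (CRT) definitions.
# Field names ("scriptId", "accessType") are illustrative; real metadata
# would come from SuiteScript or SuiteTalk, not plain dicts.

SAFE_ACCESS_TYPES = {
    "Require Custom Record Entries Permission",
    "Use Permission List",
}

def flag_exposed_crts(crt_definitions):
    """Return script IDs of CRTs not protected by a safe access type."""
    return [
        crt["scriptId"]
        for crt in crt_definitions
        if crt.get("accessType") not in SAFE_ACCESS_TYPES
    ]

crts = [
    {"scriptId": "customrecord_orders", "accessType": "No Permission Required"},
    {"scriptId": "customrecord_audit", "accessType": "Use Permission List"},
]
print(flag_exposed_crts(crts))  # ['customrecord_orders']
```

Treating anything outside the safe list as suspect (rather than only matching "No Permission Required") means newly introduced or misspelled access types are surfaced for review too.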

In a related disclosure, Cymulate has unveiled another significant security concern involving Microsoft Entra ID, formerly known as Azure Active Directory. The issue centres around the potential manipulation of the credential validation process within hybrid identity infrastructures. This vulnerability allows attackers to bypass authentication mechanisms, enabling them to sign in with elevated privileges within the tenant and establish persistence. 

However, the execution of this attack requires that the adversary already possesses administrative access to a server hosting a Pass-Through Authentication (PTA) agent. The PTA agent is a critical module that permits users to sign in to both on-premises and cloud-based applications using Entra ID. The root cause of this vulnerability lies in the synchronization of multiple on-premises domains to a single Azure tenant, which introduces security gaps that could be exploited by attackers.

Urgent Call for EPA Cyber Strategy to Safeguard Water Infrastructure

 


A new watchdog report from the US Government Accountability Office says the Environmental Protection Agency (EPA) must develop a comprehensive plan of action to counter the increasing number and sophistication of cybersecurity threats facing water utilities. In recent years there have been many cyberattacks against water treatment plants, sewage plants, and other water infrastructure across the globe.

The Government Accountability Office report indicates that the water industry has struggled to address the problem through voluntary security initiatives and has pushed back against new mandates issued by the EPA. It calls on the EPA and other government agencies to do more to assess and identify the full extent of cyber risks facing the water and wastewater sectors, including by developing a national strategy and conducting a cyber risk assessment.

Several high-profile hacking incidents over the past few years have raised concerns about the ability of the country's drinking water and wastewater treatment industries to maintain their security, and the Biden administration has made those sectors a priority. In March, the White House and the EPA urged state officials to provide information on how well-prepared water utilities were for increasingly prevalent cyber risks.

EPA officials, however, have expressed concern that the data gathered has not been integrated into a comprehensive strategy that would make it effective. In a May speech in Washington, D.C., National Cyber Director Harry Coker Jr. said that, as part of water safety reforms, the EPA planned to increase technical assistance for public water systems and the Department of Agriculture would invest in programs for rural water utilities.

The GAO report, released last week, states that the EPA is working on plans to strengthen federal assistance to the water industry. The EPA launched an auditing program for water utilities in 2023 to increase their cyber resilience, but the program was withdrawn after states filed a legal challenge.

"The Environmental Protection Agency remains committed to providing cybersecurity technical assistance to the water sector, and we will continue to work together with our federal partners to find all the ways we can to better protect the nation's drinking water and wastewater systems," the agency said in a press release.

AI and the Legal Framework: A Critical Turning Point

 


It is no secret that the rapid advancement of generative artificial intelligence (AI) is transforming several industries, including the legal sector. With AI, lawyers and legal departments can handle everything from routine tasks to complicated legal analyses, making their work more effective, efficient, accurate, and innovative. Enterprises, particularly legal and compliance professionals, need to be aware of the concerns that can arise as they evaluate how to use these new tools.

A legal executive examining the use of generative AI is primarily responsible for making informed recommendations and educating stakeholders (e.g., business leaders, peer executives, and the board) on the risks of using generative AI for business purposes. Understanding generative AI and the risks of its implementation is therefore essential to that role.

The introduction of generative AI into the legal sector marks a turning point in how legal proceedings are conducted, legal documents are created, and legal advice is delivered. AI technologies have the potential to change the legal industry fundamentally, from increasing the efficiency and speed of the legal process to making legal information more accessible and comprehensible to those who need it.

This change affects the legal system in many ways, from legal advice to lawyers' skills to day-to-day operations. It is now possible to search countless legal databases, statutes, and case laws in minutes. Decision-making becomes quicker and easier when an individual can access relevant case precedents, understand the jargon used in decisions, and perform many other tasks efficiently, so less time is wasted in the process.

In the simplest terms, artificial intelligence covers the processes and methods that allow a machine to behave in ways that would be considered intelligent if a human behaved similarly. AI-assisted legal advice lets lawyers deliver a more accurate and swifter service than ever before: in a fraction of a second, AI can analyze legal documents and surface case precedents and law articles relevant to the case at hand.

Lawyers need to keep up with new technologies and adapt their skills accordingly to stay competitive, learning how to use AI tools and how to evaluate their results critically. Beyond improving through machine learning, AI systems are also getting better at improving their own performance (self-improvement). The generative AI (GenAI) revolution has the potential to disrupt every industry, and executives and business leaders should be paying close attention.

GenAI, powered by large language models (LLMs) and generative image models, makes it possible to reinvent workflows, transform tedious, time-consuming manual duties such as those in the judicial and educational sectors, and even generate entirely new revenue streams. In 2023, AI became a key player in mainstream technology discussions, no longer the high-flying, complicated concept known only to a handful of experts and academics.

In the less than a year since large language models became public, a race to dominate GenAI has played out among big tech companies and startups alike. One of AI's clear benefits in the legal sector is the ease with which complicated, structured content can be prepared: AI systems can analyze complex legal information and present it in a form non-lawyers can understand.

Further, AI tools that accept natural language allow intuitive, human-like interaction, considerably lowering the barrier to their effective use. Introducing generative AI systems into the legal sector nonetheless has implications that require great care, because professional and legal requirements must be satisfied. At KPMG, for example, the firm's AI models run on its own IT infrastructure, are developed to meet privacy regulations and professional legal requirements, and are hosted under the firm's global security standards.

Additionally, the models must comply with malpractice liability regulations and data protection requirements. These systems are designed to keep client information confidential and preserve data integrity at all times. A large part of this strategy is ensuring that all AI-generated data is processed and stored in secure environments.

Human beings must stay in the loop: AI systems should never make fully automated decisions in the legal sector. As part of professional due diligence, a qualified lawyer must verify, validate, and ultimately confirm the propositions and results generated by artificial intelligence. Ensuring that all legal judgments and decisions adhere to professional ethical standards, with clear accountability, is essential in any legal system.
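The human-in-the-loop requirement can be enforced structurally rather than by policy alone. A minimal sketch (all class and field names are ours, not from any product): AI output is held as a draft that cannot be released until a named lawyer signs off, which also creates the accountability trail described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftOpinion:
    """An AI-generated proposition that must not leave the firm unreviewed."""
    text: str
    reviewed_by: Optional[str] = None

    def approve(self, lawyer: str) -> None:
        # The named, qualified reviewer provides the accountability trail.
        self.reviewed_by = lawyer

    def release(self) -> str:
        # Releasing an unreviewed draft is a hard error, not a warning.
        if self.reviewed_by is None:
            raise PermissionError("AI draft requires lawyer sign-off")
        return self.text

draft = DraftOpinion("The clause is likely unenforceable because ...")
try:
    draft.release()          # blocked: no human review yet
except PermissionError:
    pass
draft.approve("J. Doe, admitted attorney")
print(draft.release())       # released only after sign-off
```

Making the unreviewed path raise, rather than merely log, is the design point: no code path can ship AI output without a human name attached.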

The incorporation of Artificial Intelligence (AI) into the legal field brings forth several challenges that must be navigated with caution and insight. A primary concern is the reliability of AI tools, which rely on foundational models or pre-determined datasets. These tools require ongoing updates to prevent the use of outdated or inaccurate data, as reliance on such information could result in compromised legal outcomes. Data security is another critical issue. Legal proceedings often involve sensitive and confidential information, necessitating rigorous protection of the data processed by AI systems. 

It is vital to ensure that this information remains secure and is not mishandled or exposed, as breaches could have serious repercussions for both privacy and the integrity of the legal process. The issue of absoluteness also poses a significant challenge when considering AI’s role in law. An over-dependence on AI could potentially undermine a lawyer’s ability to exercise sound judgment and uphold ethical standards. While AI operates within strict boundaries, processing data in binary terms of right and wrong, the practice of law often requires navigating complex situations where nuances are critical. 

Therefore, AI should be used to support, rather than replace, human judgment in legal proceedings. Moreover, the challenges of bias and fairness in AI systems must be carefully managed. AI algorithms can unintentionally perpetuate biases present in their training data, leading to potentially unfair or discriminatory outcomes. Ensuring fairness in AI-driven legal processes requires diligent attention to data selection, algorithm design, and ongoing monitoring to identify and address any biases that may arise. It is crucial to address these challenges to integrate AI into the legal system in a manner that promotes justice and equity. 
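One concrete form the ongoing monitoring described above can take is tracking favorable-outcome rates per group and their ratio, in the spirit of the "four-fifths rule" used in US employment-discrimination analysis. This is a minimal sketch with made-up data; the 0.8 review threshold is a convention borrowed from that context, not a mandate for legal AI:

```python
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, favorable) pairs; returns rate per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += int(ok)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes):
    """Lowest group rate divided by the highest; values well below 0.8
    are a conventional trigger for human review of the model."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Made-up outcomes: group A favored 8/10 times, group B only 4/10.
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 4 + [("B", False)] * 6)
print(round(disparate_impact_ratio(sample), 2))  # 0.5 -> flag for review
```

A single ratio cannot prove fairness, but a dashboard of such rates over time is a cheap way to notice when training-data bias starts surfacing in outcomes.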

The use of AI across various aspects of the legal profession, including research, analytics, due diligence, compliance, and contract management, underscores its growing influence and lasting impact on modern society. Yet, no matter how sophisticated AI becomes, it cannot replicate the "human factor" that is central to legal practice. Skills such as quick thinking, adaptability, and improvisation are inherently human and cannot be fully encoded into a machine. While AI can offer guidance, the ultimate responsibility for decisions and actions remains with human professionals. For legal information to be effective in the AI era, it must be well-organized and rich in context. 

Currently, it remains challenging to fully explain the reasoning behind AI-generated outcomes. Nevertheless, AI plays an important role in assisting individuals, litigants, and judges in managing and organizing extensive legal data. As the repository of legal knowledge continues to grow, AI systems will increasingly provide valuable advice and recommendations, enhancing the decision-making capabilities of legal professionals. 

Successful integration of AI into the judiciary requires that judges gain a deep understanding of how these technologies operate. Additionally, courts must undertake the significant task of digitalizing their records and ensuring that they are accompanied by clear legal interpretations, making them more accessible and useful for AI systems. This digital transformation is an ongoing process that requires constant oversight and adjustment to maintain effectiveness. For courts, traditionally structured as production-oriented entities, this shift represents a substantial new challenge.

Security Lapse at First American Exposes Data of 44,000 Clients

 


It has been reported that First American Financial Corporation, one of the largest title insurance companies in the United States, had its computer systems taken down in December by a cyberattack that compromised the information of almost 44,000 individuals. Founded in 1889, the organization provides financial and settlement services to real estate professionals, buyers, and sellers involved in residential and commercial property transactions. The company reported $6 billion in revenue last year and has over 21,000 employees.

First American announced on December 21 that it had taken some of its systems offline to contain the impact of a cyberattack, providing little information about the nature of the attack in its statement.

The following day, First American said it had taken its email systems offline as well, and that its First American Title subsidiary and FirstAm.com had also been affected. On January 8, 2024, the firm announced that it was starting to restore some of its systems, but full restoration was not announced until about a week later.

In December, First American informed the Securities and Exchange Commission (SEC) that it had suffered a data breach and that certain non-production systems had been encrypted in the incident. An updated form filed on May 28 indicates that the investigation has been completed. "As a result of our investigation and findings, personal information regarding about 44,000 individuals may have been accessed without authorization," the company's update reads.

According to the title insurance provider, "the Company will provide appropriate notification to potentially affected individuals and offer those individuals credit monitoring and identity protection services at no charge to them." The company confirmed this commitment in its May 28 update, five months after the attack.

First American's SEC filing confirms that the attackers gained access to some of its systems and were able to reach sensitive information the organization had collected. The investigation has since been completed and the incident is considered resolved.

According to New York's Department of Financial Services (DFS), First American, considered the second-largest title insurance company in the nation, collects personal and financial information from hundreds of thousands of individuals each year through title-related documents and stores it in EaglePro, an application the company developed in-house.

In May 2019, First American senior management learned of a security vulnerability in EaglePro that allowed anyone with a link to access the application without any authentication, exposing not just their own documents but those of individuals involved in unrelated transactions. Similarly, Fidelity National Financial, another US title insurance provider, was the target of a "cybersecurity issue" in November of last year; it too took some systems offline to contain the attack, disrupting parts of its business operations. An SEC filing made in January confirmed that the attackers had stolen data on approximately 1.3 million customers using malware that did not self-propagate through network resources.
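The EaglePro lapse described above, links that anyone could use to open documents from unrelated transactions, is commonly mitigated by binding each link to a single document and an expiry time with a server-side signature. A minimal HMAC sketch follows; all names and the token format are ours, not EaglePro's:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # kept server-side, never embedded in the link

def sign_link(document_id: str, expires_at: int) -> str:
    """Build a link token bound to one document and one expiry time."""
    payload = f"{document_id}|{expires_at}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_link(token: str, now: int) -> bool:
    """Reject tampered document IDs, forged signatures, and expired links."""
    document_id, expires_at, sig = token.rsplit("|", 2)
    payload = f"{document_id}|{expires_at}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and now < int(expires_at)

token = sign_link("closing-docs-123", expires_at=2_000_000_000)
print(verify_link(token, now=1_900_000_000))        # True: valid, unexpired
print(verify_link(token + "0", now=1_900_000_000))  # False: tampered
```

Signed, expiring links stop exactly the failure mode reported: a forwarded or guessed URL no longer grants open-ended access to other transactions' documents, and `hmac.compare_digest` avoids timing side channels in the comparison.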