
The Intersection of Travel and Data Privacy: A Growing Concern

 

The evolving relationship between travel and data privacy is sparking significant debate among travellers and experts. A recent Spanish regulation requiring hotels and Airbnb hosts to collect personal guest data has particularly drawn criticism, with some privacy-conscious tourists likening it to invasive surveillance. This backlash highlights broader concerns about the expanding use of personal data in travel.

Privacy Concerns Across Europe

This trend is not confined to Spain. Across the European Union, regulations now mandate biometric data collection, such as fingerprints, for non-citizens entering the Schengen zone. Airports and border control points increasingly rely on these measures to streamline security and enhance surveillance. Advocates argue that such systems improve safety and efficiency, with Chris Jones of Statewatch noting their roots in international efforts to combat terrorism, driven by UN resolutions and supported by major global powers like the US, China, and Russia.

Challenges with Biometric and Algorithmic Systems

Despite their intended benefits, systems leveraging Passenger Name Record (PNR) data and biometrics often fall short of expectations. Algorithmic misidentifications can lead to unjust travel delays or outright denials. Biometric systems also face significant logistical and security challenges. While they are designed to reduce processing times at borders, system failures frequently result in delays. Additionally, storing such sensitive data introduces serious risks. For instance, the 2019 Marriott data breach exposed unencrypted passport details of millions of guests, underscoring the vulnerabilities in large-scale data storage.

The EU’s Ambitious Biometric Database

The European Union’s effort to create the world’s largest biometric database has sparked concern among privacy advocates. Such a trove of data is an attractive target for both hackers and intelligence agencies. The increasing use of facial recognition technology at airports—from Abu Dhabi’s Zayed International to London Heathrow—further complicates the privacy landscape. While some travelers appreciate the convenience, others fear the long-term implications of this data being stored and potentially misused.

Global Perspectives on Facial Recognition

Prominent figures like Elon Musk openly support these technologies, envisioning their adoption in American airports. However, critics argue that such measures often prioritize efficiency over individual privacy. In the UK, stricter regulations have limited the use of facial recognition systems at airports. Yet, alternative tracking technologies are gaining momentum, with trials at train stations exploring non-facial data to monitor passengers. This reflects ongoing innovation by technology firms seeking to navigate legal restrictions.

Privacy vs. Security: A Complex Trade-Off

According to Gus Hosein of Privacy International, borders serve as fertile ground for experiments in data-driven travel technologies, often at the expense of individual rights. These developments point to the inevitability of data-centric travel but also emphasize the need for transparent policies and safeguards. Balancing security demands with privacy concerns remains a critical challenge as these technologies evolve.

The Choice for Travelers

For travelers, the trade-off between convenience and the protection of personal information grows increasingly complex with every technological advance. As governments and companies push forward with data-driven solutions, the debate over privacy and transparency will only intensify, shaping the future of travel for years to come.

Turn Your Phone Off Daily for Five Minutes to Prevent Hacking

 


There are numerous ways in which critical data on your phone can be compromised. These range from subscription-based apps that covertly transmit private user data to social media platforms like Facebook, to fraudulent accounts that trick your friends into investing in fake cryptocurrency schemes. This issue goes beyond being a mere nuisance; it represents a significant threat to individual privacy, democratic processes, and global human rights.

Experts and advocates have called for stricter regulations and safeguards to address the growing risks posed by spyware and data exploitation. However, the implementation of such measures often lags behind the rapid pace of technological advancements. This delay leaves a critical gap in protections, exacerbating the risks for individuals and organizations alike.

Ronan Farrow, a Pulitzer Prize-winning investigative journalist, offers a surprisingly simple yet effective tip for reducing the chances of phone hacking: turn your phone off more frequently. During an appearance on The Daily Show to discuss his new documentary, Surveilled, Farrow highlighted the pressing need for more robust government regulations to curb spyware technology. He warned that unchecked use of such technology could push societies toward an "Orwellian surveillance state," affecting everyone who uses digital devices, not just political activists or dissidents.

Farrow explained that rebooting your phone daily can disrupt many forms of modern spyware, as these tools often lose their hold during a restart. This simple act not only safeguards privacy but also prevents apps from tracking user activity or gathering sensitive data. Even for individuals who are not high-profile targets, such as journalists or political figures, this practice adds a layer of protection against cyber threats. It also makes it more challenging for hackers to infiltrate devices and steal information.

Beyond cybersecurity, rebooting your phone regularly has additional benefits. It can help optimize device performance by clearing temporary files and resolving minor glitches. This maintenance step ensures smoother operation and prolongs the lifespan of your device. Essentially, the tried-and-true advice to "turn it off and on again" remains a relevant and practical solution for both privacy protection and device health.

Spyware and other forms of cyber threats pose a growing challenge in today’s interconnected world. From Pegasus-like software that targets high-profile individuals to less sophisticated malware that exploits everyday users, the spectrum of risks is wide and pervasive. Governments and technology companies are increasingly being pressured to develop and enforce regulations that prioritize user security. However, until such measures are in place, individuals can take proactive steps like regular phone reboots, minimizing app permissions, and avoiding suspicious downloads to reduce their vulnerability.

Ultimately, as technology continues to evolve, so too must our awareness and protective measures. While systemic changes are necessary to address the larger issues, small habits like rebooting your phone can offer immediate, tangible benefits. In the face of sophisticated cyber threats, a simple daily restart serves as a reminder that sometimes the most basic solutions are the most effective.

The Role of Confidential Computing in AI and Web3

 

 
The rise of artificial intelligence (AI) has amplified the demand for privacy-focused computing technologies, ushering in a transformative era for confidential computing. At the forefront of this movement is the integration of these technologies within the AI and Web3 ecosystems, where maintaining privacy while enabling innovation has become a pressing challenge. A major event in this sphere, the DeCC x Shielding Summit in Bangkok, brought together more than 60 experts to discuss the future of confidential computing.

Pioneering Confidential Computing in Web3

Lisa Loud, Executive Director of the Secret Network Foundation, emphasized in her keynote that Secret Network has been pioneering confidential computing in Web3 since its launch in 2020. According to Loud, the focus now is to mainstream this technology alongside blockchain and decentralized AI, addressing concerns with centralized AI systems and ensuring data privacy.

Yannik Schrade, CEO of Arcium, highlighted the growing necessity for decentralized confidential computing, calling it the “missing link” for distributed systems. He stressed that as AI models play an increasingly central role in decision-making, conducting computations in encrypted environments is no longer optional but essential.

Schrade also noted the potential of confidential computing in improving applications like decentralized finance (DeFi) by integrating robust privacy measures while maintaining accessibility for end users. However, achieving a balance between privacy and scalability remains a significant hurdle. Schrade pointed out that privacy safeguards often compromise user experience, which can hinder broader adoption. He emphasized that for confidential computing to succeed, it must be seamlessly integrated so users remain unaware they are engaging with such technologies.

Shahaf Bar-Geffen, CEO of COTI, underscored the role of federated learning in training AI models on decentralized datasets without exposing raw data. This approach is particularly valuable in sensitive sectors like healthcare and finance, where confidentiality and compliance are critical.
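To make the federated learning idea concrete, below is a minimal sketch of federated averaging (FedAvg) in Python with NumPy. The three "clients" and their data are entirely invented; each one fits a toy linear model on its own private records, and only the model weights, never the raw rows, are sent back and averaged. It illustrates the general pattern Bar-Geffen describes, not COTI's or any vendor's actual system.

# Minimal FedAvg sketch: weights travel, raw data stays on each client.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(50) for _ in range(3)]    # e.g. three hospitals or banks

def local_update(w, X, y, lr=0.05, epochs=5):
    # Gradient descent on one client's private data; only `w` leaves the site.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(20):                            # federated rounds
    local_weights = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)    # server averages weights only

print("federated estimate:", w_global)           # close to [2, -1] without pooling raw data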

Innovations in Privacy and Scalability

Henry de Valence, founder of Penumbra Labs, discussed the importance of aligning cryptographic systems with user expectations. Drawing parallels with secure messaging apps like Signal, he emphasized that cryptography should function invisibly, enabling users to interact with systems without technical expertise. De Valence stressed that privacy-first infrastructure is vital as AI’s capabilities to analyze and exploit data grow more advanced.

Other leaders in the field, such as Martin Leclerc of iEXEC, highlighted the complexity of achieving privacy, usability, and regulatory compliance. Innovative approaches like zero-knowledge proof technology, as demonstrated by Lasha Antadze of Rarimo, offer promising solutions. Antadze explained how this technology enables users to prove eligibility for actions like voting or purchasing age-restricted goods without exposing personal data, making blockchain interactions more accessible.

Dominik Schmidt, co-founder of Polygon Miden, reflected on lessons from legacy systems like Ethereum to address challenges in privacy and scalability. By leveraging zero-knowledge proofs and collaborating with decentralized storage providers, his team aims to enhance both developer and user experiences.

As confidential computing evolves, it is clear that privacy and usability must go hand in hand to address the needs of an increasingly data-driven world. Through innovation and collaboration, these technologies are set to redefine how privacy is maintained in AI and Web3 applications.

Meet Chameleon: An AI-Powered Privacy Solution for Face Recognition

 


An artificial intelligence (AI) system developed by a team of researchers can safeguard users from unauthorized facial scanning by malicious actors. The AI model, dubbed Chameleon, uses a unique masking approach to generate a mask that conceals faces in images while maintaining the visual quality of the protected image.

Furthermore, the researchers state that the model is resource-optimized, meaning it can be used even with low computing power. While the Chameleon AI model has not been made public yet, the team has claimed they intend to release it very soon.

Researchers at Georgia Tech described the AI model in a paper published on the preprint server arXiv. The tool can add an invisible mask to faces in an image, making them unrecognizable to facial recognition algorithms. This allows users to shield their identities from criminal actors and from AI data-scraping bots attempting to scan their faces.

“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” stated Ling Liu, professor of data and intelligence-powered computing at Georgia Tech's School of Computer Science and the lead author of the research paper.

Chameleon employs a unique masking approach known as the Customized Privacy Protection (P-3) Mask. Once the mask is applied, photos cannot be matched by facial recognition software, since the scans perceive them "as being someone else."

While face-masking technologies have been available previously, the Chameleon AI model innovates in two key areas:

  1. Resource Optimization:
    Instead of creating individual masks for each photo, the tool develops one mask per user based on a few user-submitted facial images, which significantly reduces the computing power required to generate the undetectable mask (a simplified sketch of this idea appears after the list).
  2. Image Quality Preservation:
    Preserving the image quality of protected photos proved challenging. To address this, the researchers employed Chameleon's Perceptibility Optimization technique. This technique allows the mask to be rendered automatically, without requiring any manual input or parameter adjustments, ensuring the image quality remains intact.
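For readers curious what "one mask per user" can look like in practice, here is a heavily simplified, hypothetical sketch in PyTorch: a single perturbation tensor is optimised across several photos of the same person so that a stand-in embedding network no longer matches the originals, while an L-infinity budget keeps the change visually small. This is not the published Chameleon P-3 method or its Perceptibility Optimization; the network is untrained and every number is arbitrary.

# Toy "universal" per-user mask via gradient ascent on embedding distance.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in face-embedding network (assumption: any differentiable model would do here).
embedder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)
embedder.eval()

# A handful of photos of the same (synthetic) user, values in [0, 1].
user_photos = torch.rand(4, 3, 112, 112)

epsilon = 8 / 255          # perturbation budget so the mask stays imperceptible
delta = torch.zeros(1, 3, 112, 112, requires_grad=True)   # one shared mask per user
opt = torch.optim.Adam([delta], lr=1e-2)

with torch.no_grad():
    clean_emb = embedder(user_photos)   # embeddings a recogniser would "know"

for step in range(200):
    masked = (user_photos + delta).clamp(0, 1)
    emb = embedder(masked)
    # Push the masked embeddings away from the clean ones, so a matcher fails.
    loss = -nn.functional.mse_loss(emb, clean_emb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)   # keep the single mask within the budget

final_shift = nn.functional.mse_loss(embedder((user_photos + delta).clamp(0, 1)), clean_emb)
print("mean embedding shift:", final_shift.item())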

The researchers announced their plans to make Chameleon's code publicly available on GitHub soon, calling it a significant breakthrough in privacy protection. Once released, developers will be able to integrate the open-source AI model into various applications.

Over 600,000 People Impacted In a Major Data Leak

 

Over 600,000 people were impacted by a data leak at yet another background check company, SL Data Services. Compared with the 2.9 billion people affected by the National Public Data breach, this is a minor incident, but it is still concerning. The company's database was discovered exposed online: it was neither encrypted nor password-protected and was accessible to the public.

Jeremiah Fowler, a cybersecurity researcher, uncovered the breach (or rather, the lack of protection on the files). Full names, home addresses, email addresses, employment data, social media accounts, phone numbers, court records, property ownership data, vehicle records, and criminal records were all leaked.

Everything was stored in PDF files, the majority of which were labelled "background check." The database held 713.1 GB of files in total. Fortunately, the content is no longer publicly available, though it took some time to secure properly: after receiving the responsible disclosure notice, SL Data Services took a week to take the files offline.

A week is a long time for 600,000 people's information to sit in publicly accessible files. Unfortunately, those whose data was exposed might not even know their information was included. Since background checks are typically requested by someone else, and the person being checked rarely knows which background check company was used, the situation becomes even more complicated.

While Social Security numbers and financial details were not included in the incident, so much information about the affected people is now exposed that scammers can use it in social engineering attacks against unsuspecting victims.

Thankfully, there is no evidence that malicious actors accessed the open database or obtained sensitive information, but there is no certainty that they did not. Only time will tell: if we observe a sudden increase in social engineering attacks, we will know something happened.

Internal Threats Loom Large as Businesses Deal With External Threats

 

Most people have likely been required by their employer to sit through hour-long courses on preventing cyberattacks such as phishing, malware, and ransomware. Companies compel their staff to do this because cybercrime is extremely costly: according to FBI and IMF estimates, the cost is predicted to rise from $8.4 trillion in 2022 to $23 trillion by 2027. Preventative measures, such as multifactor authentication, are available.

The fact is, all of these threats are external. As companies develop the ability to handle them, leadership's attention will shift to an even more pressing concern: risks emanating from within the organisation. Being on "the inside" generally means having access to the highly sensitive and confidential information employees need to perform their duties.

This can include financial performance statistics, product launch timelines, and source code. While this seems reasonable at first look, allowing access to this information also poses a significant risk to organizations—from top-secret government agencies to Fortune 500 companies and small businesses—if employees leak it.

Unfortunately, insider disclosures are becoming increasingly common. Since 2019, the share of organisations reporting insider incidents has grown from 66% to an astounding 76%. These insider leaks are also costly: in 2023, organisations spent an average of $16.2 million resolving insider threats, with North American companies incurring the highest cost at $19.09 million.

Recent examples abound. Someone leaked Israeli documents regarding an attack on Iran, and an Apple employee leaked information about the iPhone 16. History offers examples too: in 1971, the Pentagon Papers altered public perception of the Vietnam War. The widespread use of digital media, however, has made such leaks easier to propagate and harder to detect.

Prevention tips 

Tech help: Monitoring for suspicious behaviour with software and AI is one way to prevent leaks. Behaviour-modelling technology, particularly when AI-powered, can be effective at drawing statistical conclusions from predictive analytics to forecast outcomes and raise red flags.

These solutions can raise an alarm, for example, if someone in HR, who would ordinarily never handle product design files, suddenly downloads a large number of them, or if an employee copies a large amount of information to a USB drive. Companies can use these alerts to conduct investigations, adjust access levels, or flag accounts that warrant closer attention; a toy version of this kind of baseline check is sketched below.
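As a toy illustration of that baseline idea (not any vendor's product), the snippet below flags a hypothetical employee whose daily download count jumps far outside their own historical range, using nothing more than a z-score computed with Python's standard library.

# Flag a single metric that deviates sharply from a user's own baseline.
import statistics

# Hypothetical daily download counts for one HR employee (last 30 days).
baseline = [3, 2, 4, 1, 3, 2, 2, 5, 3, 4, 2, 3, 1, 2, 4, 3, 2, 3, 4, 2,
            3, 2, 1, 3, 4, 2, 3, 2, 4, 3]
today = 87   # e.g. a sudden bulk download of product-design files

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
z = (today - mean) / stdev

if z > 3:    # more than three standard deviations above this user's own norm
    print(f"ALERT: unusual download volume (z-score {z:.1f}); open an investigation")
else:
    print("within normal range")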

Shut down broad access: Restricting employee access to specific data and files or eliminating certain files completely are two other strategies to stop internal leaks. This can mitigate the chance of leakage in the short term, but at what cost? Information exchange can inspire creativity and foster a culture of trust and innovation. 

Individualize data and files: Steganography, the practice of concealing information in plain sight, dates back to Ancient Greece and is a promising avenue for preventing leaks. It uses forensic watermarks that alter a piece of content (an email, file, photo, or presentation) in imperceptible ways, marking it so that any sharing can be traced back to a single person.

The film industry was among the first modern adopters of this approach, using it to combat piracy and theft of valuable content. Movies and shows streamed on Hulu or Netflix are typically protected with digital rights management (DRM), including audio and video watermarking that makes each copy unique. Imagine applying the same technology to a company's daily operations, where terabytes of digital communications containing potentially sensitive information (emails, presentations, photos, customer data) could be personalised for each individual.
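Here is a bare-bones sketch of the "individualise each copy" idea: it hides a made-up recipient ID in the least significant bits of raw pixel values so a leaked copy could be traced back to its recipient. Production forensic watermarking is far more robust, since it has to survive re-encoding, cropping and screenshots; this standard-library toy only shows the principle.

# Least-significant-bit watermark: embed and recover a per-recipient ID.
def embed_id(pixels: bytearray, user_id: str) -> bytearray:
    bits = ''.join(f"{b:08b}" for b in user_id.encode())
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0b11111110) | int(bit)   # overwrite the lowest bit only
    return out

def extract_id(pixels: bytearray, n_chars: int) -> str:
    bits = ''.join(str(pixels[i] & 1) for i in range(n_chars * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

# Pretend these are raw greyscale pixel values of a document scan.
image = bytearray(range(256)) * 4
marked = embed_id(image, "emp-4921")          # each recipient gets a unique copy
print(extract_id(marked, len("emp-4921")))    # -> "emp-4921"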

One thing is certain, regardless of the approach a business takes: it needs a strategy for dealing with the escalating problem of internal leaks. The danger is genuine, and the costs are steep. Most employees are trustworthy, but it only takes one bad actor to leak information and do significant damage to an organisation.

The Debate Over Online Anonymity: Safeguarding Free Speech vs. Ensuring Safety

 

Mark Weinstein, an author and privacy expert, recently reignited a long-standing debate about online anonymity, suggesting that social media platforms implement mandatory user ID verification. Weinstein argues that such measures are crucial for tackling misinformation and preventing bad actors from using fake accounts to groom children. While his proposal addresses significant concerns, it has drawn criticism from privacy advocates and cybersecurity experts who highlight the implications for free speech, personal security, and democratic values.  

Yegor Sak, CEO of Windscribe, opposes the idea of removing online anonymity, emphasizing its vital role in protecting democracy and free expression. Drawing from his experience in Belarus, a country known for authoritarian surveillance practices, Sak warns that measures like ID verification could lead democratic nations down a similar path. He explains that anonymity and democracy are not opposing forces but complementary, as anonymity allows individuals to express opinions without fear of persecution. Without it, Sak argues, the potential for dissent and transparency diminishes, endangering democratic values. 

Digital privacy advocate Lauren Hendry Parsons agrees, highlighting how anonymity is a safeguard for those who challenge powerful institutions, including journalists, whistleblowers, and activists. Without this protection, these individuals could face significant personal risks, limiting their ability to hold authorities accountable. Moreover, anonymity enables broader participation in public discourse, as people can freely express opinions without fear of backlash. 

According to Parsons, this is essential for fostering a healthy democracy where diverse perspectives can thrive. While anonymity has clear benefits, the growing prevalence of online harm raises questions about how to balance safety and privacy. Advocates of ID verification argue that such measures could help identify and penalize users engaged in illegal or harmful activities. 

However, experts like Goda Sukackaite, Privacy Counsel at Surfshark, caution that requiring sensitive personal information, such as ID details or social security numbers, poses serious risks. Data breaches are becoming increasingly common, with incidents like the Ticketmaster hack in 2024 exposing the personal information of millions of users. Sukackaite notes that improper data protection can lead to unauthorized access and identity theft, further endangering individuals’ security. 

Adrianus Warmenhoven, a cybersecurity expert at NordVPN, suggests that instead of eliminating anonymity, digital education should be prioritized. Teaching critical thinking skills and encouraging responsible online behavior can empower individuals to navigate the internet safely. Warmenhoven also stresses the role of parents in educating children about online safety, comparing it to teaching basic life skills like looking both ways before crossing the street. 

As discussions about online anonymity gain momentum, the demand for privacy tools like virtual private networks (VPNs) is expected to grow. Recent surveys by NordVPN reveal that more individuals are seeking to regain control over their digital presence, particularly in countries like the U.S. and Canada. However, privacy advocates remain concerned that legislative pushes for ID verification and weakened encryption could result in broader restrictions on privacy-enhancing tools. 

Ultimately, the debate over anonymity reflects a complex tension between protecting individual rights and addressing collective safety. While Weinstein’s proposal aims to tackle urgent issues, critics argue that the risks to privacy and democracy are too significant to ignore. Empowering users through education and robust privacy protections may offer a more sustainable path forward.

Five Common Cybersecurity Errors and How to Avoid Them

 

In the collective imagination of modern tech-savvy consumers, the blue screen of death looms large: a blunt reminder that the device cannot resolve a problem on its own. A crash can mean your hardware is simply degrading after years of use, but a cybersecurity compromise can also cause hardware to malfunction or behave unexpectedly.

A significant portion of the theft and illegal conduct that affects people today is carried out by cybercriminals. According to the FBI's 2023 Internet Crime Report, cybercrime complaints resulted in losses above $12.5 billion, with complaints up 10% and financial losses up 22% year over year.

As defenders, we must constantly look for what we have missed and how we can get better. Five common cybersecurity errors are listed below, along with tips on how to prevent them: 

Using simple passwords:  Employing strong passwords to safeguard your sensitive data is a vital part of any effective cybersecurity plan. Strong passwords make it far harder for attackers to guess or crack your credentials; they should mix capital letters, digits, and symbols, and avoid ordinary dictionary words. Nearly everyone is aware of this, and many online systems enforce it, yet 44% of users hardly ever change their passwords (though over a third refresh them monthly), and 13% of Americans use the same password for every online account they create.
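As a quick, hedged illustration of that advice, the snippet below uses Python's standard secrets module to generate a long random password; the length and character set are arbitrary choices, not a formal policy.

# Generate a random password with upper-case, digit and symbol requirements.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    while True:
        pw = ''.join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one upper-case letter, one digit and one symbol.
        if (any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(not c.isalnum() for c in pw)):
            return pw

print(generate_password())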

Underestimating the human element: This is a fatal error, because it overlooks a factor involved in 74% of data breaches. According to the Ponemon Cost of a Data Breach 2022 Report, the top attack vector last year was stolen or compromised credentials; many of us are still falling for scams and disclosing critical information. That is why black hats keep coming back: we provide a consistent, predictable source of funds. To tighten those reins, implement an employee Security Awareness Training (SAT) program and follow the principle of least privilege.

Invincible thinking:  Small firms frequently fall into this mindset, believing they have nothing of value to an outside attacker. If every attacker were after billions of dollars and government secrets, that might be true, but they aren't. Countless black hats profit from "small" payments, compounded dividends, and the sale of credential lists, and any company with users and logins has what they're looking for. Combat the "it can't happen to me" mentality with regular risk assessments, pen tests, SAT training, and red teaming, because it can happen, to organisations of all sizes.

Not caring enough:   This is exactly where fraudsters want you: overwhelmed and indifferent. It happens all too easily when SOCs are buried under the 1,000-plus daily alerts they receive, let alone trying to stay ahead of the game with proactive preventive measures or strategy. Threat actors take advantage of overburdened teams. If your resources are stretched thin, the right investment in the right area can relieve some of the pressure and let you do more with less.

Playing a defensive game:   We've all heard that the best defence is a good offence, and it's true. Cybersecurity often gets a purely defensive reputation, which unfairly understates its value. Cybercriminals continuously catch organisations off guard, and all too often the SOCs on the ground have never faced anything like them before: they have patched vulnerabilities and dodged phishing emails, but an APT, an advanced threat, or a true red-alert cyber incursion may be entirely new territory. Prepare your digital and human nervous systems for an attack by adopting offensive security techniques such as penetration testing and red teaming before day zero.

Ransomware Gangs Actively Recruiting Pen Testers: Insights from Cato Networks' Q3 2024 Report

 

Cybercriminals are increasingly targeting penetration testers to join ransomware affiliate programs such as Apos, Lynx, and Rabbit Hole, according to Cato Networks' Q3 2024 SASE Threat Report, published by its Cyber Threats Research Lab (CTRL).

The report highlights numerous Russian-language job advertisements uncovered through surveillance of discussions on the Russian Anonymous Marketplace (RAMP). Speaking at an event in Stuttgart, Germany, on November 12, Etay Maor, Chief Security Strategist at Cato Networks, explained: "Penetration testing is a term from the security side of things when we try to breach our own systems to see if there are any holes. Now, ransomware gangs are hiring people with the same level of expertise - not to secure systems, but to target systems."

He further noted, "There's a whole economy in the criminal underground just behind this area of ransomware."

The report details how ransomware operators aim to ensure the effectiveness of their attacks by recruiting skilled developers and testers. Maor emphasized the evolution of ransomware-as-a-service (RaaS), stating, "[Ransomware-as-a-service] is constantly evolving. I think they're going into much more details than before, especially in some of their recruitment."

Cato Networks' team discovered instances of ransomware tools being sold, such as locker source code priced at $45,000. Maor remarked: "The bar keeps going down in terms of how much it takes to be a criminal. In the past, cybercriminals may have needed to know how to program. Then in the early 2000s, you could buy viruses. Now you don't need to even buy them because [other cybercriminals] will do this for you."

AI's role in facilitating cybercrime was also noted as a factor lowering barriers to entry. The report flagged examples like a user under the name ‘eloncrypto’ offering a MAKOP ransomware builder, an offshoot of PHOBOS ransomware.

The report warns of the growing threat posed by Shadow AI—where organizations or employees use AI tools without proper governance. Of the AI applications monitored, Bodygram, Craiyon, Otter.ai, Writesonic, and Character.AI were among those flagged for security risks, primarily data privacy concerns.

Cato CTRL also identified critical gaps in Transport Layer Security (TLS) inspection. Only 45% of surveyed organizations utilized TLS inspection, and just 3% inspected all relevant sessions. This lapse allows attackers to leverage encrypted TLS traffic to evade detection.

In Q3 2024, Cato CTRL noted that 60% of CVE exploit attempts were blocked within TLS traffic. Prominent vulnerabilities targeted included Log4j, SolarWinds, and ConnectWise.

The report is based on the analysis of 1.46 trillion network flows across over 2,500 global customers between July and September 2024. It underscores the evolving tactics of ransomware gangs and the growing challenges organizations face in safeguarding their systems.

New SMTP Cracking Tool for 2024 Sold on Dark Web Sparks Email Security Alarm

 

A new method targeting SMTP (Simple Mail Transfer Protocol) servers, specifically updated for 2024, has surfaced for sale on the dark web, sparking significant concerns about email security and data privacy.

This cracking technique is engineered to bypass protective measures, enabling unauthorized access to email servers. Such breaches risk compromising personal, business, and government communications.

The availability of this tool showcases the growing sophistication of cybercriminals and their ability to exploit weaknesses in email defenses. Unauthorized access to SMTP servers not only exposes private correspondence but also facilitates phishing, spam campaigns, and cyber-espionage.

Experts caution that widespread use of this method could result in increased phishing attacks, credential theft, and malware distribution. "Organizations and individuals must prioritize strengthening email security protocols, implementing strong authentication, and closely monitoring for unusual server activity," they advise.

Mitigating these risks requires consistent updates to security patches, enforcing multi-factor authentication, and using email encryption. The emergence of this dark web listing highlights the ongoing threats cybercriminals pose to critical communication systems.
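For a concrete, deliberately generic example of two of those mitigations, the standard-library sketch below sends mail only over a STARTTLS-encrypted, certificate-verified, authenticated session. The hostname, addresses and credential are placeholders; a real deployment would also enforce SPF, DKIM and DMARC and protect the account with multi-factor authentication.

# Send mail only over an encrypted, authenticated SMTP session (placeholders throughout).
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "admin@example.com"
msg["Subject"] = "SMTP security check"
msg.set_content("Test message sent over an encrypted, authenticated session.")

context = ssl.create_default_context()          # verifies the server certificate

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls(context=context)            # refuse to continue unencrypted
    server.login("alerts@example.com", "app-specific-password")  # placeholder credential
    server.send_message(msg)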

As attackers continue to innovate, the cybersecurity community emphasizes vigilance and proactive defense strategies to safeguard sensitive information. This development underscores the urgent need for robust email security measures in the face of evolving cyber threats.

Hacker Claims to Publish Nokia Source Code

 

The Finnish telecoms equipment firm Nokia is looking into the suspected release of source code material on a criminal hacking site.

An attacker going by the handle "IntelBroker," who is also the proprietor of the current iteration of BreachForums, revealed on Thursday what he said was a cache of "Nokia-related source code" stolen from a third-party breach. The data consists of two folders: "nokia_admin1" and "nokia_etl_summary-data."

IntelBroker initially stated in a BreachForums post last week that he was selling the code, characterising it as a collection of "SSH keys, source code, RSA keys, Bitbucket logins, SMTP accounts, Webhooks, and hardcoded credentials."

A Nokia spokesperson said the company is "aware of reports that an unauthorised actor has alleged to have gained access to certain third-party contractor data, and possibly Nokia data," adding that it will continue to monitor the situation closely. On Tuesday last week, the hacker told Hackread that the data would cost $20,000.

IntelBroker told Bleeping Computer that the data came from a SonarQube server run by one of Nokia's third-party service providers, and claimed to have gained access using a default password. SonarQube did not immediately reply to a request for comment.

In 2023, IntelBroker published online data stolen from a health insurance marketplace used by members of Congress, their families, and staffers. Earlier this year, he sparked a probe at the Department of State by uploading online papers purportedly stolen from government contractor Acuity. 

Third-party breaches at major firms are becoming more regular as companies improve their own cyber defences. Earlier this year, a slew of well-known brands, including AT&T, Ticketmaster, Santander Bank, automotive parts supplier Advance Auto Parts, and luxury retailer Neiman Marcus, were hit with breaches caused by a series of attacks on their accounts at cloud-based data warehousing platform Snowflake.

ZKP Emerges as a “Must-Have” Component of Blockchain Security

 

Zero-knowledge proof (ZKP) has emerged as a critical security component in Web3 and blockchain because it ensures data integrity and increases privacy. It accomplishes this by allowing verification without exposing data. ZKP is employed on cryptocurrency exchanges to validate transaction volumes or values while safeguarding the user's personal information.

In addition to ensuring privacy, it protects against fraud. Zero-knowledge cryptography, a class of algorithms that includes ZKP, enables complex interactions and strengthens blockchain security. Data is safeguarded from unauthorised access and modification while it moves through decentralised networks. 

Blockchain users are frequently asked to certify that they have sufficient funds to execute a transaction, but they may not want to disclose their full balance. ZKP can verify that users meet the necessary requirements during KYC processes on cryptocurrency exchanges without requiring them to share their documents. Building on this, Holonym has introduced Human Keys to ensure security and privacy in Zero Trust situations.

Each person is given a unique key that they can use to unlock their security and privacy rights, strengthening individual rights through robust decentralised protocols and configurable privacy. The privacy-preserving principle applies to several elements of Web3 data security: ZKP involves complex cryptographic validations, and any attempt to alter the data invalidates the proof.
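To ground the idea, here is a textbook-style toy in Python: a Schnorr proof of knowledge made non-interactive with the Fiat-Shamir heuristic, using deliberately tiny, insecure parameters. It lets a prover convince a verifier that they know a secret exponent without ever revealing it, the same "verify without exposing" pattern exchanges apply to balances and KYC attributes; it is not any particular exchange's or Holonym's scheme.

# Toy Schnorr proof of knowledge (Fiat-Shamir); demo parameters, not secure ones.
import hashlib
import secrets

# Public parameters: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*vals) -> int:
    data = "|".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover's secret x (e.g. a private key or credential) and public value y.
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

def prove(x: int) -> tuple[int, int]:
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                 # commitment
    c = H(g, y, t)                   # challenge derived by hashing (Fiat-Shamir)
    s = (r + c * x) % q              # response; reveals nothing about x on its own
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = H(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, s = prove(x)
print("proof accepted:", verify(y, t, s))   # True, yet x was never disclosed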

Trustless data processing eases smart contract developer work 

Smart contract developers currently work with their hands tied, limited to self-referential opcodes that cannot provide the information required to assess broader blockchain activity. To that end, the Space and Time platform's focus on enabling trustless, multichain data processing and strengthening smart contracts is worth noting, since it ultimately simplifies developers' work.

Their SXT Chain, a ZKP-based data blockchain, is now live on testnet, combining decentralised data storage with blockchain verification. Conventional blockchains focus on transactions, whereas SXT Chain allows advanced data querying and analysis while preserving data integrity through blockchain technology.

The flagship DeFi generation introduced yield farming and platforms like Aave and Uniswap. The new one includes tokenized real-world assets, blockchain lending with dynamic interest rates, cross-chain derivatives, and increasingly complicated financial products. 

To unlock these Web3 use cases, a crypto-native, trustless query engine is required, one that enables more advanced DeFi by giving smart contracts the context they need. Space and Time is helping to provide one by extending Chainlink's aggregated data points with a SQL database, allowing smart contract authors to run SQL processing over any part of Ethereum's history.
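To illustrate what "SQL over chain history" means for a contract author, the sketch below runs an aggregate query against a local SQLite table standing in for an indexed log of transfer events. The schema, addresses and amounts are invented, and this is not the Space and Time API; it only shows the shape of query such an engine would answer (with a proof attached) for a smart contract.

# A local stand-in for an indexed table of on-chain transfer events.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transfers (
        block_number INTEGER,
        token        TEXT,
        sender       TEXT,
        recipient    TEXT,
        amount_wei   INTEGER
    )
""")
conn.executemany(
    "INSERT INTO transfers VALUES (?, ?, ?, ?, ?)",
    [
        (19_000_001, "USDC", "0xaaa", "0xbbb", 1_500_000),
        (19_000_042, "USDC", "0xbbb", "0xccc", 900_000),
        (19_000_099, "WETH", "0xaaa", "0xccc", 2 * 10**18),
    ],
)

# e.g. "total USDC sent by 0xaaa over a block range" as a single aggregate query
row = conn.execute(
    """
    SELECT COALESCE(SUM(amount_wei), 0)
    FROM transfers
    WHERE token = 'USDC' AND sender = '0xaaa'
      AND block_number BETWEEN 19000000 AND 19001000
    """
).fetchone()
print("USDC out-flow for 0xaaa:", row[0])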

Effective and fair regulatory model 

ZKP allows for selective disclosure, in which just the information that regulators require is revealed. Web3 projects comply with KYC and AML rules while protecting user privacy. ZKP even opens up the possibility of a tiered regulation mechanism based on existing privacy models. Observers can examine the ledger for unusual variations and report any suspect accounts or transactions to higher-level regulators. 

Higher-level regulators reveal particular transaction data. The process is supported by zero-knowledge SNARKs (Succinct Non-interactive Arguments of Knowledge) and attribute-based encryption. These techniques use ZKP to ensure consistency between transaction and regulatory information, preventing the use of fake information to escape monitoring. 

Additionally, ZK solutions let users withdraw funds in a matter of minutes, whereas optimistic rollups take approximately a week to finalise transactions and process withdrawals.

The Growing Concern Regarding Privacy in Connected Cars

 

Data collection and use raise serious privacy concerns, even though they can improve driving safety, efficiency, and the whole experience. The automotive industry's ability to collect, analyse, and exchange such data outpaces the legislative frameworks intended to protect individuals. In numerous cases, car owners have no information or control over how their data is used, let alone how it is shared with third parties. 

The FIA European Bureau feels it is time to face these challenges head-on. As advocates for drivers' and car owners' rights, we are calling for clearer, more open policies that restore individuals' control over their data. This is why, in partnership with Privacy4Cars, we are hosting an event called "Driving Data Rights: Enhancing Privacy and Control in Connected Cars" on November 19th in Brussels. The event will bring together policymakers, industry executives, and civil society to explore current gaps in legislation and industry practices, as well as how we can secure enhanced data protection for all.

Balancing innovation with privacy 

A recent Privacy4Cars report identifies alarming industry patterns, demonstrating that many organisations are not fully compliant with GDPR requirements. Data transparency, security, and consent mechanisms are often lacking, exposing consumers to data misuse. These findings highlight the critical need for reforms that give individuals more control over their data while ensuring that privacy is not sacrificed in the name of innovation.

The benefits of connected vehicle data are apparent. Data has the potential to transform the automotive industry in a variety of ways, including improved road safety, predictive maintenance, and enhanced driving experiences. However, this should not come at the expense of individual privacy rights.

As the automobile sector evolves, authorities and industry stakeholders must strike the correct balance between innovation and privacy protection. Stronger enforcement of existing regulations, as well as the creation of new frameworks that suit the unique needs of connected vehicles, are required. Car owners should have a say in how their data is utilised and be confident that it is managed properly. 

Shaping the future of data privacy in cars 

The forthcoming event on November 19th will provide an opportunity to dig deeper into these concerns. Key stakeholders from the European Commission, the automotive industry, and privacy experts will meet to discuss the present legal landscape and what else can be done to protect individuals in this fast-changing environment.

The agenda includes presentations from Privacy4Cars on the most recent findings on automotive privacy practices, a panel discussion with automotive industry experts, and case studies demonstrating real-world examples of data misuse and third-party access. 

Connected cars are the future of mobility, but that future must be founded on confidence and transparency. By giving individuals authority over their personal data, we can build a system that benefits everyone—drivers, manufacturers, and society as a whole. The FIA European Bureau is committed to collaborating with all parties to make this happen.

Balancing Act: Russia's New Data Decree and the Privacy Dilemma


Data Privacy and State Access

Russia's Ministry of Digital Development, Communications, and Mass Media has introduced a draft decree specifying the conditions under which authorities can access staff and customer data from businesses operating in Russia, according to Forbes.

The decree would authorize authorities to demand anonymized personal data of customers and employees from businesses in order to protect the population during emergencies, prevent terrorism, and control the spread of infectious diseases, as well as for economic and social research purposes.

The Proposed Decree

Expected to take effect in September 2025, this draft decree follows amendments to the law On Personal Data, adopted on August 8. This law established a State Information System, requiring businesses and state agencies to upload the personal data of their staff and customers upon request.

The Big Data Association, a nonprofit that includes major Russian companies like Yandex, VK, and Gazprombank, has expressed concerns that the draft decree would permit authorities to request personal data from businesses "for virtually any reason." They warned that this could create legal uncertainties and impose excessive regulatory burdens on companies processing personal data, affecting nearly all businesses and organizations.

Global Context: A Tightrope Walk

Russia is not alone in its quest for greater access to personal data. Countries around the world are grappling with similar issues. For instance, the United States has its own set of laws and regulations under the Patriot Act and subsequent legislation that allows the government to access personal data under certain conditions. Similarly, the European Union’s General Data Protection Regulation (GDPR) provides a framework for data access while aiming to protect individual privacy.

Each country’s approach reflects its unique political, social, and cultural context. However, the core issue remains: finding the right balance between state access and individual privacy.

Ethical and Social Implications

The debate over state access to personal data is not purely legal or political; it is deeply ethical and social. Enhanced state access can lead to improved public safety and national security. For example, during a health crisis like the COVID-19 pandemic, having access to personal data can help in effective contact tracing and monitoring the spread of the virus.

New Tool Circumvents Google Chrome's New Cookie Encryption System

 

A researcher has developed a tool that bypasses Google's new App-Bound encryption cookie-theft defences and extracts saved passwords from the Chrome browser. 

Alexander Hagenah, a cybersecurity researcher, published the tool, 'Chrome-App-Bound-Encryption-Decryption,' after noticing that others had previously identified equivalent bypasses. 

Although the tool delivers what several infostealer operations have already done with their malware, its public availability increases the risk for Chrome users who continue to store sensitive information in their browsers. 

Google launched Application-Bound (App-Bound) encryption in July (Chrome 127) as a new security feature that encrypts cookies using a Windows process with SYSTEM rights. 

The goal was to safeguard sensitive data against infostealer malware, which runs with the logged-in user's privileges, making it impossible to decrypt stolen cookies without first obtaining SYSTEM privileges and potentially setting off security software alarms.

"Because the App-Bound service is running with system privileges, attackers need to do more than just coax a user into running a malicious app," noted Google in July. "Now, the malware has to gain system privileges, or inject code into Chrome, something that legitimate software shouldn't be doing.” 

However, by September, several infostealer developers had found ways to circumvent the new security feature, allowing their cybercriminal customers to once again siphon and decrypt sensitive data from Google Chrome.

Google previously stated that the "cat and mouse" game between infostealer developers and its engineers was to be expected, and that it never assumed its defence measures would be impenetrable. Instead, it believed that by introducing App-Bound encryption it could finally lay the groundwork for progressively building a more robust system. Below is Google's response from the time:

"We are aware of the disruption that this new defense has caused to the infostealer landscape and, as we stated in the blog, we expect this protection to cause a shift in attacker behavior to more observable techniques such as injection or memory scraping. This matches the new behavior we have seen. 

We continue to work with OS and AV vendors to try and more reliably detect these new types of attacks, as well as continuing to iterate on hardening defenses to improve protection against infostealers for our users.”

Microsoft Introduces AI Solution for Erasing Ex from Memories

 


Director Vikramaditya Motwane's new Hindi film, CTRL, tells the story of an emotionally troubled woman who turns to artificial intelligence to erase her past. The movie is clearly about data and privacy, but it also reflects that humans are social animals who need someone to listen to them, guide them, or simply be there as they go through life. Mustafa Suleyman, the CEO of Microsoft AI, spoke about this recently in a CNBC interview.

During the interview, Suleyman explained that the company is engineering AI companions that watch "what we are doing and to remember what we are doing," creating a close relationship between AI and humans. Microsoft, OpenAI, and Google have all announced AI assistants for the workplace along these lines.

Microsoft CEO Satya Nadella has announced that Windows will launch a new feature called Recall. Its semantic search is more than a keyword search: it digs deep into users' digital history to recreate moments from the past, tracing them back to when they happened. Suleyman, in turn, has announced that Copilot, the company's artificial intelligence assistant, has been redesigned.

The revamped Copilot embodies Suleyman's vision of an AI companion that will change how users interact with technology in their day-to-day lives. After joining Microsoft earlier this year, when the company strategically hired key staff from Inflection AI, Suleyman wrote a 700-word memo describing what he refers to as a "technological paradigm shift."

Copilot has been redesigned to create a more personalized and supportive AI experience, similar to Inflection AI's Pi product, adapting to users' needs over time. In an interview reported by The Wall Street Journal, Nadella explained that "Recall is not just about documents."

A sophisticated AI model embedded directly on the device takes screenshots of users' activity and feeds the collected data into an on-board database that analyzes it. Using neural processing technology, all images and interactions become searchable, even allowing searches within the images themselves. The feature has drawn concern: Elon Musk warned in a characteristic post that it is akin to an episode of Black Mirror and said he would be turning the "feature" off.

OpenAI has introduced the ChatGPT desktop application, now powered by the latest GPT-4o model, which represents a significant advancement in artificial intelligence technology. This AI assistant offers real-time screen-reading capabilities, positioning itself as an indispensable support tool for professionals in need of timely assistance. Its enhanced functionality goes beyond merely following user commands; it actively learns from the user's workflow, adapts to individual habits, and anticipates future needs, even taking proactive actions when required. This marks a new era of intelligent and responsive AI companions. 

Jensen Huang also highlighted the advanced capabilities of AI Companion 2.0, emphasizing that this system does not just observe and support workflows—it learns and evolves with them, making it a more intuitive and helpful partner for users in their professional endeavors. Meanwhile, Zoom has introduced Zoom Workplace, an AI-powered collaboration platform designed to elevate teamwork and productivity in corporate environments. The platform now offers over 40 new features, which include updates to the Zoom AI Companion for various services such as Zoom Phone, Team Chat, Events, Contact Center, and the "Ask AI Companion" feature. 

The AI Companion functions as a generative AI assistant seamlessly integrated throughout Zoom’s platform, enhancing productivity, fostering stronger collaboration among team members, and enabling users to refine and develop their skills through AI-supported insights and assistance. The rapid advancements in artificial intelligence continue to reshape the technological landscape, as companies like Microsoft, OpenAI, and Google lead the charge in developing AI companions to support both personal and professional endeavors.

These AI solutions are designed to not only enhance productivity but also provide a more personalized, intuitive experience for users. From Microsoft’s innovative Recall feature to the revamped Copilot and the broad integration of AI companions across platforms like Zoom, these developments mark a significant shift in how humans interact with technology. While the potential benefits are vast, these innovations also raise important questions about data privacy, human-AI relationships, and the ethical implications of such immersive technology. 

As AI continues to evolve and become a more integral part of everyday life, the balance between its benefits and the concerns it may generate will undoubtedly shape the future of AI integration across industries. Microsoft and its competitors remain at the forefront of this technological revolution, striving to create tools that are not only functional but also responsive to the evolving needs of users in a rapidly changing digital world.

Growing Focus on Data Privacy Among GenAI Professionals in 2024

 


Recent reports from Deloitte highlight the growing significance of data privacy in the context of generative artificial intelligence (GenAI). The survey found a sharp rise in professionals' concerns: only 22% ranked data privacy among their top three concerns at the beginning of 2023, but the figure has risen to 72% in 2024.

Technology is advancing at an exponential rate, and awareness of its potential risks is growing with it. According to the Deloitte report, this surge in privacy concern linked to generative AI cuts across several industries.

Professionals also expressed strong concern about data provenance and transparency, with 47% and 40%, respectively, listing them among their top three ethical GenAI concerns for this year. By contrast, only 16% of respondents cited job displacement. Staff are increasingly curious about how AI technology operates, especially when it handles sensitive data.

Almost half of the security professionals surveyed by HackerOne in September consider AI risky, with many believing that leaks of training data threaten their networks' security. Notably, 78% of business leaders ranked "safe and secure" among their top three ethical technology principles, a 37% increase from 2023, underscoring how important security has become to businesses.

The findings come from Deloitte's 2024 "State of Ethics and Trust in Technology" report, which surveyed more than 1,800 business and technical professionals worldwide about the ethical principles they apply to technology and, specifically, to their use of GenAI. As they guide the adoption of generative AI, technology leaders need to examine their organizations' talent needs carefully, and ethical considerations belong on that checklist as well.

The report highlights GenAI's effectiveness in eliminating the "expertise barrier": more people will be able to make use of their data easily and cost-effectively, according to Sachin Kulkarni, managing director for risk and brand protection at Deloitte. That is a benefit, but it may also bring an increased risk of data leaks.

Professionals also voiced concern about generative AI's effects on transparency, data provenance, intellectual property ownership, and hallucinations. Although job displacement is often assumed to be a top worry, only 16% of respondents actually listed it as one. In their assessment of emerging technology categories, business and IT professionals concluded that cognitive technologies, a category that includes large language models, artificial intelligence, neural networks, and generative AI, pose the greatest ethical challenges.

This category ranked well ahead of other technology verticals, including virtual reality, autonomous vehicles, and robotics. At the same time, respondents said they considered cognitive technologies the most likely to bring about social good. A Flexential survey published earlier this month found that, given the heavy reliance on data, many executives worry that generative AI tools can increase cybersecurity risk by expanding the attack surface.

Deloitte's annual report also found that the percentage of professionals reporting internal use of GenAI grew by 20% year over year. 94% of respondents said they had incorporated it into their organization's processes in some way, though most indicated that these technologies are still in the pilot phase or limited in use, with only 12% saying they are used extensively.

Gartner research published last year likewise found that about 80% of GenAI projects fail to progress beyond proof of concept, largely due to a lack of resources. Regulation is also shaping behaviour: 34% of European respondents reported that their organizations changed their use of AI over the past year to meet the requirements of the EU's recent Artificial Intelligence Act.

The Act's impact extends beyond Europe, however: 26% of respondents in South Asia and 16% of those in North and South America said their organizations had changed how they use AI because of it. The survey also found that 20% of U.S.-based respondents had altered how their organizations operate in response to the U.S. executive order on AI, as had 25% of respondents in South Asia, 21% in South America, and 12% in Europe.

"Cognitive technologies such as artificial intelligence (AI) have the potential to provide society with the greatest benefits, but are also the most vulnerable to misuse," the report's authors explain. The accelerated adoption of GenAI is outpacing organizations' capacity to govern it effectively, even as the tools offer businesses help across a range of areas, from selecting use cases and quality assurance to implementing ethical standards.

"Companies should prioritize both of these areas," the report advises. Even as artificial intelligence becomes widely used, organizations want to be sure its use will not land them in legal trouble. Regulatory compliance was the most important reason for implementing ethics policies and guidelines for 34% of respondents, and regulatory penalties topped the list of concerns about failing to comply with them.

In the EU, a new piece of legislation known as the Artificial Intelligence Act entered into force on August 1. The Act is intended to ensure that AI systems used in high-risk settings are safe, transparent, and ethical. Companies that fail to comply face financial penalties ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of global turnover, depending on the violation.
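
To make the penalty structure concrete, here is a minimal sketch using the figures cited above. The tier names, the "whichever is higher" reading, and the example turnover are illustrative assumptions rather than legal guidance.

```python
# Minimal, illustrative sketch of the AI Act penalty ceilings cited in this article.
# Tier names and the "whichever is higher" reading are assumptions for illustration,
# not legal advice.

PENALTY_TIERS = {
    "most_serious": {"flat_cap_eur": 35_000_000, "turnover_pct": 0.07},
    "least_serious": {"flat_cap_eur": 7_500_000, "turnover_pct": 0.015},
}

def max_penalty(tier_name: str, global_turnover_eur: float) -> float:
    """Return the higher of the flat cap and the turnover-based ceiling for a tier."""
    tier = PENALTY_TIERS[tier_name]
    return max(tier["flat_cap_eur"], tier["turnover_pct"] * global_turnover_eur)

# Example: a hypothetical company with €2 billion in global turnover
print(max_penalty("most_serious", 2_000_000_000))   # 140000000.0 (7% exceeds the €35M cap)
print(max_penalty("least_serious", 2_000_000_000))  # 30000000.0 (1.5% exceeds the €7.5M cap)
```

As the example shows, for large companies the percentage-based ceiling quickly dwarfs the flat caps, which is why global turnover is the figure regulators emphasize.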

More than a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the EU Artificial Intelligence Pact, volunteering to begin implementing the Act's requirements ahead of the legal deadlines. Doing so signals their commitment to the responsible deployment of AI and helps them head off future legal challenges.

The United States issued a similar executive order in October 2023, setting broad guidelines for protecting military, civil, and personal privacy and the security of government agencies while fostering AI innovation and competition across the country. Although the order is not a law, many companies operating in the U.S. have adjusted their policies to anticipate regulatory change and to meet public expectations around the privacy and security of AI.

Ethics and Tech: Data Privacy Concerns Around Generative AI

The tech industry is embracing generative AI, but the conversation around data privacy has become correspondingly important. Deloitte's recent "State of Ethics and Trust in Technology" report highlights the pressing ethical considerations that accompany the rapid adoption of these technologies: 30% of organizations have adjusted new AI projects and 25% have modified existing ones in response to the AI Act, the report notes.

The Rise of Generative AI

54% of professionals believe that generative AI poses the highest ethical risk among emerging technologies. Additionally, 40% of respondents identified data privacy as their top concern. 

Generative AI, which includes technologies like GPT-4, DALL-E, and other advanced machine learning models, has shown immense potential in creating content, automating tasks, and enhancing decision-making processes. 

These technologies can generate human-like text, create realistic images, and even compose music, making them valuable tools across industries such as healthcare, finance, marketing, and entertainment.

However, the capabilities of generative AI also raise significant data privacy concerns. As these models require vast amounts of data to train and improve, the risk of mishandling sensitive information increases. This has led to heightened scrutiny from both regulatory bodies and the public.

Key Data Privacy Concerns

Data Collection and Usage: Generative AI systems often rely on large datasets that may include personal and sensitive information. The collection, storage, and usage of this data must comply with stringent privacy regulations such as GDPR and CCPA. Organizations must ensure that data is anonymized and used ethically to prevent misuse.

Transparency and Accountability: One of the major concerns is the lack of transparency in how generative AI models operate. Users and stakeholders need to understand how their data is being used and the decisions being made by these systems. Establishing clear accountability mechanisms is crucial to build trust and ensure ethical use.

Bias and Discrimination: Generative AI models can inadvertently perpetuate biases present in the training data. This can lead to discriminatory outcomes, particularly in sensitive areas like hiring, lending, and law enforcement. Addressing these biases requires continuous monitoring and updating of the models to ensure fairness and equity.

Security Risks: The integration of generative AI into various systems can introduce new security vulnerabilities. Cyberattacks targeting AI systems can lead to data breaches, exposing sensitive information. Robust security measures and regular audits are essential to safeguard against such threats.
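
Anonymization, mentioned above under data collection and usage, is one of the more actionable safeguards. Below is a minimal sketch of redacting obvious identifiers from text before it is logged, used for fine-tuning, or sent to an external GenAI API. The regex patterns and placeholder labels are illustrative assumptions; a production system would typically rely on a dedicated PII-detection service.

```python
import re

# Illustrative PII patterns; real deployments would use a dedicated detection service.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace matched identifiers with typed placeholders, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567."
print(redact_pii(record))
# Contact Jane at [EMAIL] or [PHONE].
# Note: names like "Jane" require an NER-based detector; regexes only catch
# structured identifiers.
```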

Ethical Considerations and Trust

80% of respondents are now required to complete mandatory technology ethics training, a 7% increase since 2022. Nearly three-quarters of IT and business professionals rank data privacy among their top three ethical concerns related to generative AI. Addressing those concerns calls for several practices:

  • Developing and implementing ethical frameworks for AI usage is crucial. These frameworks should outline principles for data privacy, transparency, and accountability, guiding organizations in the responsible deployment of generative AI.
  • Engaging with stakeholders, including employees, customers, and regulatory bodies, is essential to build trust. Open dialogues about the benefits and risks of generative AI can help in addressing concerns and fostering a culture of transparency.
  • The dynamic nature of AI technologies necessitates continuous monitoring and improvement. Regular assessments of AI systems for biases, security vulnerabilities, and compliance with privacy regulations are vital to ensure ethical use.
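
As one example of the continuous monitoring described in the last point, the sketch below compares how often a model produces a favourable outcome for different groups and flags large gaps. The group labels, the 0.8 threshold, and the toy audit log are illustrative assumptions, not a complete fairness methodology.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, favourable: bool) pairs from an audit log."""
    totals, favourable = Counter(), Counter()
    for group, is_favourable in outcomes:
        totals[group] += 1
        favourable[group] += int(is_favourable)
    return {g: favourable[g] / totals[g] for g in totals}

def flags_disparate_impact(rates, threshold=0.8):
    """Flag any group whose rate falls below `threshold` x the best group's rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Toy audit log: (group, favourable outcome)
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(audit_log)
print(rates)                          # {'A': 0.67, 'B': 0.33} (approx.)
print(flags_disparate_impact(rates))  # {'A': False, 'B': True}
```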

Complexity: Research Offers Solution for Healthcare Security Amid Rising Cyberattacks

In May, Ascension, a healthcare provider with a network of 140 hospitals across the U.S., suffered a major cyber-attack that disrupted its clinical operations for almost a month. Experts traced the incident to ransomware that gained entry through an employee's computer.

Healthcare: Juicy Target for Criminals

Threat actors see healthcare systems as lucrative targets because they hold crucial financial, health, and personal data. A 2023 survey of health and IT professionals found that 88% of organizations had suffered an average of around 40 attacks in the past year.

Complexity: Flaw in IT System

One major weakness is the growing complexity of IT systems, says Hüseyin Tanriverdi, associate professor of information, risk, and operations management at Texas McCombs. He attributes it to years of mergers and acquisitions that have created large, multi-hospital systems.

After mergers, healthcare providers often fail to standardize their technology and security operations, leaving health systems with major complexity: different IT systems, different care processes, and different command structures.

But his new research shows that complexity can also be part of the solution. A "good kind of complexity," Tanriverdi believes, can support communication across different systems, governance structures, and care processes, and help fend off cyber incidents.

Understanding the Complex vs. Complicated

The research team distinguishes two similar-sounding terms that bear on the problem. In "complicatedness," many elements interconnect within a system to share information in structured ways. "Complexity," by contrast, arises when many elements interconnect to share information in unstructured ways, as often happens when systems are integrated after a merger or acquisition.

Tanriverdi argues that complicated structures are preferable: although difficult, they are structured and can therefore be controlled. Complex systems, being unstructured networks, cannot. He found that healthcare systems became more vulnerable as they became more complex, with more complex systems 29% more likely than average to be hit.

Solution for Better Healthcare Security

Complex systems give hackers more data transfer points to attack and raise the risk of human error, compounding the problem.

The solution lies in a centralized approach to handling data. "With fewer access points and simplified and hardened cybersecurity controls, unauthorized parties are less likely to gain unauthorized access to patient data," says Tanriverdi. "Technology reduces cybersecurity risks if it is organized and governed well."
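
A minimal sketch of the "fewer access points" idea Tanriverdi describes: every request for patient data passes through a single, auditable gateway that checks the caller's role before any backend system is touched. The roles, record store, and audit log below are illustrative placeholders, not the researchers' implementation.

```python
# One gateway as the sole entry point for patient data across merged IT systems.
ALLOWED_ROLES = {"clinician", "billing"}
AUDIT_LOG = []  # every access attempt is recorded, allowed or not

PATIENT_RECORDS = {"p-001": {"name": "[REDACTED]", "allergies": ["penicillin"]}}

def fetch_patient_record(caller_role: str, patient_id: str):
    """Single, auditable access point with a role check before any data is returned."""
    AUDIT_LOG.append((caller_role, patient_id))
    if caller_role not in ALLOWED_ROLES:
        raise PermissionError(f"role '{caller_role}' may not read patient data")
    return PATIENT_RECORDS.get(patient_id)

print(fetch_patient_record("clinician", "p-001"))   # returns the record
# fetch_patient_record("contractor", "p-001")       # would raise PermissionError
```

The design choice is the point: consolidating access behind one well-governed gateway shrinks the attack surface and makes controls easier to harden and audit than securing many scattered entry points.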