
The Concerning Rise of AI “Undressing” Apps: A Violation of Privacy and Ethics


Today, AI can help with a wide variety of tasks, from making personalised food plans and offering dating advice to fixing image flaws and optimising workflows.

However, AI technology has also opened the door to more controversial applications, such as the AI nude generators used for AI undressing. AI undressing is becoming increasingly popular as a result of rapid technical breakthroughs and the interest it generates. These apps use deep-learning algorithms to analyse and edit images, effectively removing clothing from photographs.

Nevertheless, the use of these apps raises serious legal and ethical concerns. Many of them have the potential to infringe privacy rights and be used maliciously, which could result in legal consequences. Responsible use of AI undressing apps is critical, but the potential for abuse and the difficulty of regulation remain significant hurdles.

In Israel, for example, there have been debates about implementing regulations similar to those governing revenge pornography, which would criminalise the unauthorised use of AI undressing apps. In addition, Israeli tech businesses and academic institutions are creating educational courses and guidelines to promote the appropriate use of AI. These initiatives aim to mitigate the negative effects of applications such as AI undressing while upholding ethical standards in technology use. 

One of the most pressing questions concerning AI-powered undressing apps is whether they can be used responsibly at all. This is a complex subject that ultimately depends on individual notions of right and wrong, as well as the willingness to take the measures required to safeguard oneself and others from the harms these apps can cause.

Responsible use of such technology requires a thorough awareness of its ramifications and a commitment to ethical principles. As AI evolves, it is critical for society to strike a balance between innovation and ethical responsibility, ensuring that technological breakthroughs are used to improve our lives while preserving our values and safety.

This includes establishing strong legal frameworks, raising awareness and educating about the risks, and cultivating an ethical AI culture. By doing so, we can maximise the benefits of AI while minimising its potential risks, resulting in a safer and more responsible technological landscape for everybody.

DBS Bank Reveals Big 'Data Challenges' With AI Use


In its bid to adopt Artificial Intelligence (AI) across its operations, DBS Bank had to overcome several challenges. In doing so, the company realized that adoption goes beyond simply figuring out the training models: data itself turned out to be one of those challenges, according to DBS’ chief analytics officer Sameer Gupta.

The Singapore bank began its AI journey in 2018 by focusing on four main areas: the creation of analytics capabilities, data culture and curriculum, data upskilling, and data enablement.

The company’s goal was to create a data culture that pushed all employees to consider how data and AI could assist them in their work, alongside identifying the relevant use cases and talent, such as machine learning engineers. This entailed offering a training course that instructed personnel on when and how to use data, and when not to.

The bank is building out its infrastructure so that AI adoption extends across its data platform, data management structure, and data governance. It established a framework, PURE, against which all of its data use cases must be evaluated: Purposeful, Unsurprising, Respectful, and Explainable. According to DBS, these four principles are fundamental in directing the bank's responsible use of data.
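
To make the evaluation concrete, here is a minimal sketch of how a PURE-style review might be represented in code; the class and field names are our own invention for illustration, not DBS' actual internal tooling:

from dataclasses import dataclass

# Hypothetical sketch of a PURE-style use-case review; the class and
# field names are illustrative, not DBS' actual internal tooling.
@dataclass
class PureReview:
    use_case: str
    purposeful: bool    # serves a clear, legitimate business purpose
    unsurprising: bool  # customers would not be surprised by this data use
    respectful: bool    # respects customer privacy and preferences
    explainable: bool   # outcomes can be explained to customers and regulators

    def approved(self) -> bool:
        # A use case proceeds only if all four principles hold.
        return all([self.purposeful, self.unsurprising,
                    self.respectful, self.explainable])

review = PureReview("credit-monitoring early-warning model",
                    purposeful=True, unsurprising=True,
                    respectful=True, explainable=True)
print(review.approved())  # True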

With the help of its data platform, ADA, the bank is better able to ensure data governance, quality, discoverability, and security. 

DBS found that 95% of this data proved useful and crucial for its AI-based operations. The platform holds more than 5.3 petabytes of data across 32,000 datasets, ranging from videos to structured data.

However, Gupta revealed that getting to this point “proved a mammoth task.” Organizing the data and making it discoverable, in particular, required a great deal of effort, much of it manual and reliant on human expertise. Identifying the metadata took countless hours, and very few tools are available to automate the process.
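
The automation Gupta found lacking can be approximated for simple tabular data. The sketch below is a generic illustration (not DBS' pipeline, and the file name is hypothetical) of extracting basic metadata, column names, types, and null counts, so that datasets become easier to catalogue and discover:

import pandas as pd

# Generic metadata-extraction sketch for tabular files; illustrative
# only, not DBS' actual cataloguing pipeline.
def infer_metadata(csv_path: str) -> dict:
    df = pd.read_csv(csv_path)
    return {
        "path": csv_path,
        "rows": len(df),
        "columns": [
            {"name": col,
             "dtype": str(df[col].dtype),
             "nulls": int(df[col].isna().sum())}
            for col in df.columns
        ],
    }

# Example with a hypothetical dataset file:
# print(infer_metadata("transactions.csv"))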

With data spread across different systems, "a lot of heavy lifting was needed to bring datasets onto a single platform and make these discoverable. Employees must be able to extract the data they need and the bank had to ensure this was done securely,” he said.

Today, DBS governs more than 300 AI and machine learning programs, yielding a revenue uplift of SG$150 million ($112.53 million). The company also saved SG$30 million ($22.51 million) in 2022 through its risk-mitigation efforts, for example by improving its credit monitoring. Gupta notes that these AI use cases span a range of functions, including human resources, legal, and fraud detection.

Moreover, he highlighted the need to ensure that AI applications uphold the PURE principles as well as Singapore's FEAT principles, which serve as a framework for the sector’s AI use. Other recognized risks, such as hallucinations and copyright violations, will also need to be evaluated, he said.

He added that the company needs to work on the technical elements, including building mechanisms to monitor AI use and gather feedback, so it can analyze ongoing operations and identify areas for improvement. This will ensure the organization learns from its AI applications and can make changes whenever needed.

Regarding DBS’ use of AI for predicting outages, such as the disruptions it experienced last year, he confirmed that the bank is working on identifying how it can do better, including by tapping data analysis. He said there is potential to apply AI in operations, for instance, to spot anomalies and choose the best course of action, noting that a variety of circumstances can lead to surges in demand.

DBS is the biggest bank in Singapore and currently employs 1,000 data scientists and engineers. It runs 600 AI and machine learning algorithms serving its five million customers across markets including China, Indonesia, and India.

The bank's AI projects are targeting additional economic value, in revenue uplift and cost avoidance, of SG$350 million ($262.56 million) this year. Within the following three years, it aims to reach SG$1 billion ($750.17 million).

Blocking Access to AI Apps is a Short-term Solution to Mitigate Safety Risk


Another major revelation regarding ChatGPT recently came to light through research conducted by Netskope. According to its analysis, business organizations are experiencing about 183 incidents of sensitive data being posted to ChatGPT per 10,000 corporate users each month. Among the types of sensitive data being exposed, source code accounts for the largest share.

The security researchers further scrutinized the data of a million enterprise users worldwide and emphasized the growing trend of generative AI app usage, which saw an increase of 22.5% over the past two months, consequently escalating the chances of sensitive data being exposed.

ChatGPT Reigns Over the Generative AI Market

Organizations with 10,000 or more users are utilizing one AI tool or another, five apps on average, on a regular basis. ChatGPT has more than eight times as many daily active users as any other generative AI app. At the present growth pace, the number of people accessing AI apps is anticipated to double within the next seven months.
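
That forecast lines up with the 22.5% two-month growth figure quoted above: compounding at that pace doubles usage in a little under seven months, as a quick back-of-the-envelope calculation shows:

import math

# Doubling time implied by the growth figure quoted above:
# 22.5% growth every two months, compounding.
growth_per_period = 0.225
months_per_period = 2

periods = math.log(2) / math.log(1 + growth_per_period)
print(periods * months_per_period)  # ~6.8 months, i.e. within seven months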

The AI app with the fastest-growing adoption over the last two months was Google Bard, which is presently adding users at a rate of 7.1% per week, versus 1.6% for ChatGPT. Even so, at the current rate Google Bard is not projected to overtake ChatGPT for more than a year, although the generative AI app market is expected to grow considerably before then, with many more apps in development.

Besides intellectual property (excluding source code) and personally identifiable information, other sensitive data communicated via ChatGPT includes regulated data, such as financial and healthcare information, as well as passwords and keys, which are typically embedded in source code.

According to Ray Canzanese, Threat Research Director, Netskope Threat Labs, “It is inevitable that some users will upload proprietary source code or text containing sensitive data to AI tools that promise to help with programming or writing[…]Therefore, it is imperative for organizations to place controls around AI to prevent sensitive data leaks. Controls that empower users to reap the benefits of AI, streamlining operations and improving efficiency, while mitigating the risks are the ultimate goal. The most effective controls that we see are a combination of DLP and interactive user coaching.”

Safety Measures for Adopting AI Apps

As opportunistic attackers look to profit from the popularity of artificial intelligence, Netskope Threat Labs is presently monitoring ChatGPT proxies and more than 1,000 malicious URLs and domains, including several phishing sites, malware distribution campaigns, and spam and fraud websites.

While blocking access to AI content and apps may seem like a good idea, it is only a short-term solution.

James Robinson, Deputy CISO at Netskope, said “As security leaders, we cannot simply decide to ban applications without impacting on user experience and productivity[…]Organizations should focus on evolving their workforce awareness and data policies to meet the needs of employees using AI products productively. There is a good path to safe enablement of generative AI with the right tools and the right mindset.”

To enable the safe adoption of AI apps, organizations must focus their strategy on identifying acceptable applications and implementing controls that empower users to use them to their maximum potential while guarding the business against risks. For protection against attacks, such a strategy should incorporate domain filtering, URL filtering, and content inspection.
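
As a rough illustration of the domain-filtering piece, the sketch below checks outbound URLs against a blocklist; the blocked domains are made up, and a real deployment would rely on a maintained threat-intelligence feed rather than a hard-coded set:

from urllib.parse import urlparse

# Minimal domain-filtering sketch. The blocklist entries are made up;
# a real deployment would use a maintained threat-intelligence feed.
BLOCKED_DOMAINS = {"chatgpt-free-login.example", "openai-giveaway.example"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Block the domain itself and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://chatgpt-free-login.example/auth"))  # True
print(is_blocked("https://chat.openai.com/"))                 # False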

Here are some further safety measures for securing data while using AI tools:

  • Disable access to apps that lack legitimate business value or that put the organization at disproportionate risk.
  • Educate employees, reminding users of the company policy pertaining to the use of AI apps.
  • Utilize modern data loss prevention (DLP) tools to identify posts containing potentially sensitive data; a toy illustration of this idea follows the list.
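
As a minimal sketch of that last bullet (far simpler than any commercial DLP product, with made-up patterns for illustration), a pattern-matching scanner can flag obvious secrets before text is posted to an AI app:

import re

# Toy DLP-style scanner; these patterns are far simpler than what
# commercial DLP products use and are shown for illustration only.
PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text: str) -> list:
    # Return the names of any sensitive patterns found in the text.
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

findings = scan("Here is my config: AKIAABCDEFGHIJKLMNOP")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))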

Ikigai: MIT-based AI Apps Startup is Set to Mitigate Supply Chain Attacks with Advanced Cybersecurity


This year, the constant surge of data breaches and ransomware attacks is hitting supply chains and the manufacturers who rely on them. VentureBeat has found in its research that supply chain-directed ransomware attacks have broken all previous records in the manufacturing industry, with the most severe losses occurring in the medical device, pharmaceutical, and plastics industries. Attackers are demanding the complete sum of the victim organization's cyber-insurance coverage as ransom; when top management refuses, the attackers send them a copy of the company's insurance policy.

Threat Actors Asking for Bigger Ransoms

Manufacturers impacted by supply chain attacks report ransom demands two to three times higher than those made of other businesses, because shutting down a production line for even a single day can cost millions. Smaller and mid-tier single-location manufacturers scramble to get cybersecurity assistance after discreetly paying the ransom, yet they are frequently victimized a second or third time.

Ransomware attacks remain the cybercrime of choice for threat actors targeting supply chains for financial gain. To date, the best-known cases of such attacks have involved companies including Aebi Schmidt, ASCO, COSCO, Eurofins Scientific, Norsk Hydro, and Titan Manufacturing and Distributing. Other major firms that were attacked chose to stay anonymous.

The most severe supply chain attack hit the Danish shipping giant A.P. Møller-Maersk; it cost $200–300 million and temporarily shut down the company's major cargo facility at the Port of Los Angeles.

What is the MIT-based Startup?

Ikigai, the MIT-based startup, has developed an AI Apps platform built on research conducted by its cofounders at MIT into large graphical models (LGMs) and expert-in-the-loop (EiTL) capabilities, through which the system can collect real-time inputs from professionals and continuously learn, combining AI-driven insights with expert knowledge and intuition.

The list of industries using Ikigai's AI Apps is expanding. Currently, it includes manufacturing (predictive maintenance, quality assurance), retail (demand forecasting, new product launches), insurance (auditing, rate-making), financial services (compliance, know-your-customer), banking (customer entity matching, transaction reconciliation), and supply chain optimization (labor planning, sales and operations planning).

Making sense of siloed, incomplete data dispersed throughout the organization is a constant struggle for every enterprise, and the most challenging, intricate issues an organization faces only amplify how much its information gaps impede decision-making. Manufacturers pursuing a China Plus One strategy, ESG initiatives, and sustainability have told VentureBeat that the complexity of the decisions they must make in these strategic areas is outpacing current approaches to data mining.

The LGMs used by Ikigai's AI Apps platform help resolve these problems by working with sparse, small datasets to deliver the necessary insight and intelligence. Its features include DeepMatch for AI-powered data preparation, DeepCast for predictive modeling with sparse data and one-click MLOps, and DeepPlan for reinforcement learning-based, domain-specific decision recommendations. EiTL is a sophisticated product feature made possible by Ikigai's technology.

EiTL, together with LGMs, will eventually strengthen model accuracy by integrating human expertise. In managed detection and response (MDR) scenarios, EiTL would combine human experience with machine learning algorithms to identify new risks and fraud patterns, and its real-time inputs to the AI system have the potential to enhance MDR teams' threat identification and response capabilities.
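
Since Ikigai has not published EiTL's API, the following is only a generic sketch of the expert-in-the-loop pattern the article describes: a model scores events, uncertain cases are routed to a human expert, and the expert's labels are folded back into training.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic expert-in-the-loop sketch; this is NOT Ikigai's actual EiTL
# API, whose details are not public. Synthetic data stands in for
# real fraud signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))             # stand-in transaction features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in fraud labels

model = LogisticRegression().fit(X, y)

def expert_review(x):
    # Placeholder for a human analyst; here the "expert" applies the
    # true rule, but in practice this would be a person in a review UI.
    return int(x[0] + x[1] > 0)

# Score new events, route uncertain ones to the expert, then retrain
# with the expert's corrections folded back into the training data.
X_new = rng.normal(size=(20, 4))
for x in X_new:
    proba = model.predict_proba(x.reshape(1, -1))[0, 1]
    if 0.4 < proba < 0.6:                 # model is unsure: ask the expert
        X = np.vstack([X, x])
        y = np.append(y, expert_review(x))
model = LogisticRegression().fit(X, y)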