
Gmail Users Face a New Dilemma Between AI Features and Data Privacy

 



Google’s Gmail is now offering two new upgrades, but here’s the catch— they don’t work well together. This means Gmail’s billions of users are being asked to pick a side: better privacy or smarter features. And this decision could affect how their emails are handled in the future.

Let’s break it down. One upgrade focuses on stronger protection of your emails, which works like advanced encryption. This keeps your emails private; not even Google will be able to read them. The second upgrade brings in artificial intelligence tools to improve how you search and use Gmail, promising quicker, more helpful results.

But there’s a problem. If your emails are fully protected, Gmail’s AI tools can’t read them to include in its search results. So, if you choose privacy, you might lose out on the benefits of smarter searches. On the other hand, if you want AI help, you’ll need to let Google access more of your email content.

This challenge isn’t unique to Gmail. Many tech companies are trying to combine stronger security with AI-powered features, but the two don’t always work together. Apple tried solving this with a system that processes data securely on your device. However, delays in rolling out their new AI tools have made their solution uncertain for now.

Some reports explain the choice like this: if you turn on AI features, Google will use your data to power smart tools. If you turn it off, you’ll have better privacy, but lose some useful options. The real issue is that opting out isn’t always easy. Some settings may remain active unless you manually turn them off, and fully securing your emails still isn’t simple.

Even when extra security is enabled, email systems have limitations. For example, Apple’s iCloud Mail doesn’t use full end-to-end encryption because it must work with global email networks. So even private emails may not be completely safe.

This issue goes beyond Gmail. Other platforms are facing similar challenges. WhatsApp, for example, added a privacy mode that blocks saving chats and media, but also limits AI-related features. OpenAI’s ChatGPT can now remember what you told it in past conversations, which may feel helpful but also raises questions about how your personal data is being stored.

In the end, users need to think carefully. AI tools can make email more useful, but they come with trade-offs. Email has never been a perfectly secure space, and with smarter AI, new threats like scams and data misuse may grow. That’s why it’s important to weigh both sides before making a choice.



Google Plans Big Messaging Update for Android Users

 



Google is preparing a major upgrade to its Messages app that will make texting between Android and iPhone users much smoother and more secure. For a long time, Android and Apple phones haven’t worked well together when it comes to messaging. But this upcoming change is expected to improve the experience and add strong privacy protections.


New Messaging Technology Called RCS

The improvement is based on a system called RCS, short for Rich Communication Services. It’s a modern replacement for traditional SMS texting. This system adds features like read receipts, typing indicators, and high-quality image sharing—all without needing third-party apps. Most importantly, RCS supports encryption, which means messages can be protected and private.

Recently, the GSMA, the organization that sets the standards for how mobile networks work, announced support for RCS as the new standard. Both Google and Apple have agreed to upgrade their messaging apps to match this system, allowing Android and iPhone users to send safer, encrypted messages to each other for the first time.


Why Is This Important Now?

The push for stronger messaging security comes after several cyberattacks, including a major hacking campaign by a Chinese group known as "Salt Typhoon." These hackers broke into American networks and accessed sensitive data. Events like this have raised concerns about weak security in regular text messaging. Even the FBI advised people not to use SMS for sharing personal or financial details.


What’s Changing in Google Messages?

As part of this shift, Google is updating its Messages app to make it easier for users to see which contacts are using RCS. In a test version of the app, spotted by Android Authority, Google is adding new features that label contacts based on whether they support RCS. The contact list may also show different colors to make RCS users stand out.

At the moment, there’s no clear way to know whether a chat will use secure RCS or fallback SMS. This update will fix that. It will even help users identify if someone using an iPhone has enabled RCS messaging.


A More Secure Future for Messaging

Once this update is live, Android users will have a messaging app that better matches Apple’s iMessage in both features and security. It also means people can communicate across platforms without needing apps like WhatsApp or Signal. With both Google and Apple on board, RCS could soon become the standard way we all send safe and reliable text messages.


Building Smarter AI Through Targeted Training


 

In recent years, artificial intelligence and machine learning have been in high demand across a broad range of industries. As a consequence, the cost and complexity of building and maintaining these models have increased significantly. AI and machine learning systems are resource-intensive: they require substantial computing power and large datasets, and their complexity makes them difficult to manage effectively.

As a result of this trend, professionals such as data engineers, machine learning engineers, and data scientists are increasingly being tasked with streamlining models without compromising performance or accuracy. A key part of this process is determining which data inputs or features can be reduced or eliminated so that the model operates more efficiently.

AI model optimization is a systematic effort to improve a model's performance, accuracy, and efficiency so that it achieves better results in real-world applications. The goal is to strengthen both the model's operational and predictive capabilities through a combination of technical strategies. Engineering teams are responsible for improving computational efficiency, by reducing processing time, resource consumption, and infrastructure costs, while also enhancing the model's predictive precision and its adaptability to changing datasets.

Typical optimization tasks include fine-tuning hyperparameters, selecting the most relevant features, pruning redundant elements, and making advanced algorithmic adjustments to the model. Ultimately, the goal is a model that is not only accurate and responsive but also scalable, cost-effective, and efficient. Applied well, these optimization techniques ensure the model performs reliably in production environments and remains aligned with the organization's overall objectives.
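As a rough illustration of these ideas, the sketch below uses scikit-learn (an assumption; the post names no specific library) to drop low-value features and tune a couple of hyperparameters with cross-validated grid search on synthetic data.

```python
# Minimal sketch of feature selection + hyperparameter tuning with scikit-learn.
# The dataset, feature counts, and parameter grid are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Synthetic data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),  # keep only the most relevant features
    ("clf", LogisticRegression(max_iter=1000)),     # simple, cheap baseline model
])

# Search over how many features to keep and how strongly to regularize.
grid = GridSearchCV(
    pipeline,
    param_grid={"select__k": [5, 10, 20], "clf__C": [0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print("held-out accuracy:", grid.score(X_test, y_test))
```

In a sketch like this, a smaller feature set that holds accuracy steady is exactly the kind of "reduce or eliminate inputs" win described above.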

ChatGPT's memory feature, which is typically active by default, is designed to retain important details and user preferences so the system can give contextually accurate, more personalized responses over time. Users who want to manage this functionality can open the Settings menu and select Personalization, where they can check whether memory is active and remove specific saved interactions if needed.

Because of this, users are advised to periodically review the data stored by the memory feature to ensure its accuracy. In some cases incorrect information may be retained, such as inaccurate personal details or assumptions made during a previous conversation. For example, the system might incorrectly log information about a user's family, or other aspects of their profile, based on the context of a chat.

In addition, the memory feature may inadvertently store sensitive data shared for practical purposes, such as financial details, account information, or health-related queries, especially when users are trying to solve personal problems or experiment with the model. While the memory function improves response quality and continuity, it also requires careful oversight from the user. Users are strongly encouraged to audit their saved data points routinely and delete anything they find inaccurate or overly sensitive. This practice helps keep stored data accurate and interactions more secure.

It is similar to periodically clearing your browser's cache to maintain privacy and performance. "Training" ChatGPT for customized usage means providing specific contextual information to the AI so that its responses are more relevant and accurate for the individual. To guide the AI to behave and respond in line with their needs, users can upload documents such as PDFs, company policies, or customer service transcripts.

This type of customization lets people and organizations tailor interactions around business-related content and customer engagement workflows. For personal use, however, building a custom GPT is often unnecessary. Instead, users can share relevant context directly within their prompts or attach files to their messages and achieve effective personalization that way.

For example, a user can upload their resume along with a job description when crafting a job application, allowing the AI to create a cover letter that accurately represents the user's qualifications and aligns with the position's requirements. This kind of user-level customization is very different from traditional model training, which requires processing large quantities of data and is performed mainly by OpenAI's engineering teams.

Additionally, users can extend ChatGPT's memory-driven personalization by explicitly telling it what details they want remembered, such as a recent move to a new city or specific lifestyle preferences like dietary choices. Once stored, this information allows the AI to keep conversations consistent in the future. While these interactions enhance usability, they also call for thoughtful data sharing to protect privacy and accuracy, especially as ChatGPT's memory grows over time.

Optimizing an AI model is essential for improving both performance and resource efficiency. It involves refining a variety of model elements to maximize prediction accuracy while minimizing computational demand. Common steps include pruning unused parameters to streamline networks, applying quantization to reduce numerical precision and speed up processing, and using knowledge distillation to transfer insights from complex models into simpler, faster ones.

Significant efficiency gains can also come from optimizing data pipelines, deploying high-performance algorithms, using hardware acceleration such as GPUs and TPUs, and employing compression techniques such as weight sharing and low-rank approximation. Balancing batch sizes likewise helps make the best use of resources and keeps training stable.
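To make the pruning and quantization ideas concrete, here is a minimal PyTorch sketch (PyTorch is an assumption; the post does not name a framework). It removes a share of small weights from each linear layer and then applies dynamic int8 quantization.

```python
# Minimal sketch of weight pruning and dynamic quantization in PyTorch.
# The model shape and the 30% pruning amount are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Prune 30% of the smallest-magnitude weights in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamic quantization: weights stored as int8, activations quantized at runtime.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Whether this helps in practice depends on the hardware and the accuracy budget, which is why such steps are normally validated against a held-out test set before deployment.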

Accuracy can be improved by curating clean, balanced datasets, fine-tuning hyperparameters with advanced search methods, increasing model complexity only with caution, and combining techniques such as cross-validation and feature engineering. Keeping long-term performance high requires not only building on pre-trained models but also retraining regularly to combat model drift. Applied strategically, these techniques enhance the scalability, cost-effectiveness, and reliability of AI systems across diverse applications.
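As a rough sketch of guarding against model drift, the snippet below compares the distribution of one incoming feature against the training distribution with a two-sample Kolmogorov-Smirnov test and flags when retraining may be warranted; the threshold and the synthetic data are illustrative assumptions, not tuned values.

```python
# Minimal drift check: compare a production feature distribution to the training one.
# Threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # what the model was trained on
live_feature = rng.normal(loc=0.4, scale=1.2, size=5000)   # what production traffic looks like now

stat, p_value = ks_2samp(train_feature, live_feature)

if p_value < 0.01:
    print(f"Distribution shift detected (KS={stat:.3f}); schedule retraining.")
else:
    print("No significant drift detected; keep the current model.")
```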

Using tailored optimization solutions from Oyelabs, organizations can unlock the full potential of their AI investments. As artificial intelligence continues to evolve rapidly, it becomes increasingly important to train and optimize models strategically through data-driven optimization. From feature selection and algorithm tuning to efficient data handling, organizations can apply advanced techniques to improve performance while controlling resource expenditure.

Professionals and teams that prioritize these improvements will be in a much better position to build AI systems that are not only faster and smarter but also more adaptable to real-world demands. By partnering with experts and focusing on how AI delivers value-driven outcomes, businesses can deepen their understanding of AI and improve scalability and long-term sustainability.

New Sec-Gemini v1 from Google Outperforms Cybersecurity Rivals

 


Google has released Sec-Gemini v1, a cutting-edge artificial intelligence model that integrates advanced language processing, real-time threat intelligence, and enhanced cybersecurity operations. Built on Google's proprietary Gemini large language model and connected to dynamic security data and tools, the solution is designed to strengthen security operations.

Sec-Gemini v1 combines sophisticated reasoning with real-time cybersecurity insights and tools, which makes it highly capable at essential security functions such as threat detection, vulnerability assessment, and incident analysis. As part of its effort to support progress across the broader security landscape, Google is providing free access to select professionals, non-profit organizations, and academic institutions to promote a collaborative approach to security research.

Sec-Gemini v1 stands out because of its integration with Google Threat Intelligence (GTI), the Open Source Vulnerabilities (OSV) database, and other key data sources. On the CTI-MCQ threat intelligence benchmark and the CTI-Root Cause Mapping benchmark it outperforms peer models by at least 11% and by 10.5%, respectively; the latter benchmark assesses, using the CWE taxonomy, the model's ability to analyze and classify vulnerabilities.

One of its strongest features is accurately identifying and describing threat actors. Thanks to its connection to Mandiant Threat Intelligence, it can recognize Salt Typhoon as a known adversary. Independent benchmarks suggest the model performs better than its competitors: according to Google, Sec-Gemini v1 scored at least 11 per cent higher than comparable AI systems on CTI-MCQ, a key metric used to assess threat intelligence capabilities.

It also achieved a 10.5 per cent edge over its competitors on the CTI-Root Cause Mapping benchmark, a test of how well an AI model interprets vulnerability descriptions and classifies them under the Common Weakness Enumeration framework, an industry standard. Through this advancement, Google is extending its leadership position in AI-powered cybersecurity, giving organizations a powerful tool to detect, interpret, and respond to evolving threats more quickly and accurately.

Google says Sec-Gemini v1 can perform complex cybersecurity tasks efficiently, from conducting in-depth incident investigations and analyzing emerging threats to assessing the impact of known vulnerabilities. The model combines contextual knowledge with technical insights to accelerate decision-making and strengthen organizational security postures.

Though several technology giants are actively developing AI-powered cybersecurity solutions—such as Microsoft's Security Copilot, developed with OpenAI, and Amazon's GuardDuty, which utilizes machine learning to monitor cloud environments—Google appears to have carved out an advantage in this field through its Sec-Gemini v1 technology. 

A key reason for this edge is its deep integration with proprietary threat intelligence sources such as Google Threat Intelligence and Mandiant, along with its remarkable performance on industry benchmarks. In an increasingly competitive field, these technical strengths make it a standout solution. Despite scepticism about the practical value of artificial intelligence in cybersecurity, often dismissed as little more than enhanced assistants that still require a lot of human interaction, Google insists that Sec-Gemini v1 is fundamentally different from other AI models.

The model is geared towards delivering highly contextual, actionable intelligence rather than simply summarizing alerts or making basic recommendations. Moreover, this technology not only facilitates faster decision-making but also reduces the cognitive load of security analysts. As a result, teams can respond more quickly to emerging threats in a more efficient way. At present, Sec-Gemini v1 is being made available exclusively as a research tool, with access being granted only to a select set of professionals, academic institutions, and non-profit organizations that are willing to share their findings. 

There have been early signs that the model will make a significant contribution to the evolution of AI-driven threat defence, as evidenced by the model's use-case demonstrations and early results. It will introduce a new era of proactive cyber risk identification, contextualization, and mitigation by enabling the use of advanced language models. 

In real-world evaluations, the Google security team demonstrated Sec-Gemini v1's advanced analytical capabilities by correctly identifying Salt Typhoon, a recognized threat actor. The model also provided in-depth contextual information, including vulnerability details, potential exploitation techniques, and associated risk levels. This level of nuanced understanding is possible because Mandiant's threat intelligence provides a rich repository of real-time threat data and adversary profiles.

This integration allows Sec-Gemini v1 to go beyond conventional pattern recognition, enabling more timely threat analysis and faster, evidence-based decision-making. To foster collaboration and accelerate model refinement, Google has offered limited access to a carefully selected group of cybersecurity practitioners, academics, and non-profit organizations.

Ahead of a broader commercial rollout, Google wants to gather feedback from trusted users. This should make the model more reliable and capable of scaling across different use cases, and ensure it is developed in a responsible, community-led manner. During practical demonstrations, Google's security team showed Sec-Gemini v1 identifying Salt Typhoon with high accuracy and providing rich contextual information, such as associated vulnerabilities, attack patterns, and potential risk exposures.

This precision and depth are achieved through integration with Mandiant's threat intelligence, which enhances the model's understanding of evolving threat landscapes. Making Sec-Gemini v1 available for free, for research, to a select group of cybersecurity professionals, academic institutions, and nonprofit organizations is part of Google's commitment to responsible innovation and industry collaboration.

Before a broader deployment of the model, this initiative is designed to gather feedback, validate use cases, and ensure the model is effective across diverse environments. Sec-Gemini v1 represents an important step forward in integrating artificial intelligence into cybersecurity, and Google's enthusiasm for advancing the technology while ensuring its responsible development underscores the company's role as a pioneer in the field.

Providing early, research-focused access not only fosters collaboration within the cybersecurity community but also ensures that Sec-Gemini v1 will evolve in response to collective expertise and real-world feedback. Given its strong performance across industry benchmarks and its ability to detect and contextualize complex threats, it may well reshape threat defense strategies in the future.

Sec-Gemini v1's advanced reasoning capabilities are coupled with cutting-edge threat intelligence, which can accelerate decision-making, cut response times, and improve organizational security. However, while the model shows great promise, it is still in the research phase and awaiting wider commercial deployment. A phased approach of this kind allows the model to be refined carefully, ensuring it meets the high standards required across different environments.

For this reason, it is very important that stakeholders, such as cybersecurity experts, researchers, and industry professionals, provide valuable feedback during the first phase of the model development process, to ensure that the model's capabilities are aligned with real-world scenarios and needs. This proactive stance by Google in engaging the community emphasizes the importance of integrating AI responsibly into cybersecurity. 

This is not solely about advancing the technology, but also about establishing a collaborative framework that makes it possible to detect and respond to emerging cyber threats more effectively, more quickly, and more securely. The real question is how Sec-Gemini v1 evolves from here; it may turn out to be one of the most important tools for safeguarding critical systems and infrastructure around the globe.

Ensuring AI Delivers Value to Business by Making Privacy a Priority

 


Many organizations are adopting Artificial Intelligence (AI) as a capability, but the focus is shifting from capability to responsibility. PwC anticipates that AI will be worth $15.7 trillion to the global economy, an unquestionably transformational potential. As part of this growth, local GDPs are expected to rise by 26% over the next five years, with hundreds of AI applications emerging across all industries.

Although these developments are promising, significant privacy concerns are emerging alongside them. AI relies heavily on large volumes of personal data, introducing heightened risks of misuse and data breaches. A prominent area of concern is generative AI, which, if misused, can create deceptive content such as fake identities and manipulated images that pose serious threats to digital trust and privacy.

As Harsha Solanki of Infobip points out, 80% of organizations worldwide face cyber threats originating from poor data governance. The statistic underscores the scale of the issue and the growing need for businesses to prioritize data protection and adopt robust privacy frameworks. In an era when artificial intelligence is reshaping customer experiences and operational models, safeguarding personal information is more than a compliance requirement; it is essential to ethical innovation and sustained success.

Essentially, Artificial Intelligence (AI) is the process by which computer systems are developed to perform tasks that would normally require human intelligence. The tasks can include organizing data, detecting anomalies, conversing in natural language, performing predictive analytics, and making complex decisions based on this information. 

By simulating cognitive functions like learning, reasoning, and problem-solving, artificial intelligence can make machines process and respond to information in a way similar to how humans do. In its simplest form, artificial intelligence is a software program that replicates and enhances human critical thinking within digital environments. Several advanced technologies are incorporated into artificial intelligence systems to accomplish this. These technologies include machine learning, natural language processing, deep learning, and computer vision. 

Thanks to these technologies, AI systems can analyze vast amounts of structured and unstructured data, identify patterns, adapt to new inputs, and improve over time. Businesses increasingly rely on artificial intelligence as a foundational tool for innovation and operational excellence, leveraging it to streamline workflows, improve customer experiences, optimize supply chains, and support data-driven strategic decisions.

As it evolves, artificial intelligence is set to deliver greater efficiency, agility, and competitive advantage across industries. Such rapid adoption, however, also highlights the importance of ethical considerations, particularly around data privacy, transparency, and accountability. Cisco's new 2025 Data Privacy Benchmark Study offers a comprehensive analysis of how the privacy landscape is changing in the era of artificial intelligence.

The report sheds light on the challenges organizations face in balancing innovation with responsible data practices and in managing their data. With actionable information, it gives businesses a valuable resource for deploying AI technologies while maintaining a commitment to user privacy and regulatory compliance. Finding the most suitable place to store the data they need, efficiently and securely, has been a significant challenge for organizations for many years.

The majority of organizations, approximately 90%, still favor on-premises storage because of perceived security and control benefits, but this approach often brings greater complexity and higher operational costs. Despite these challenges, there has been a noticeable shift towards trusted global service providers in recent years.

The share of businesses saying these providers, including industry leaders such as Cisco, offer superior data protection has risen from 86% last year. The trend coincides with the widespread adoption of advanced AI technologies, especially generative AI tools like ChatGPT, which are becoming increasingly integrated into day-to-day operations across a wide range of industries. Professional knowledge of these tools is also growing as they gain traction, with 63% of respondents indicating a solid understanding of how the technologies work.

However, deeper engagement with AI carries a new set of risks, ranging from privacy concerns and compliance challenges to ethical questions about algorithmic outputs. To ensure responsible AI deployment, businesses must strike a balance between embracing innovation and enforcing privacy safeguards.

AI in Modern Business

As artificial intelligence (AI) becomes embedded deep in modern business frameworks, its impact goes well beyond routine automation and efficiency gains. 

In today's world, organizations are fundamentally changing the way they gather, interpret, and leverage data, placing data stewardship and robust governance at the top of the strategic agenda. In this constantly evolving landscape, responsible use of data is no longer just an option; it is a necessity for long-term innovation and competitiveness. As a consequence, there is a growing obligation for technological practices to align with established regulatory frameworks as well as societal demands for transparency and ethical accountability.

Organizations that fail to meet these obligations don't just incur regulatory penalties; they also jeopardize stakeholder confidence and brand reputation. With digital trust now a critical business asset, the ability to demonstrate compliance, fairness, and ethical rigor in AI deployment has become central to maintaining credibility with clients, employees, and business partners alike. That credibility is increasingly tested as AI features are integrated seamlessly into everyday digital tools.

The use of artificial intelligence is no longer restricted to specific software; it now enhances user experiences across a broad range of sites, mobile apps, and platforms. Samsung's Galaxy S24 Ultra is a good example of this trend: the phone offers AI features such as real-time transcription, intuitive search through gestures, and live translation, demonstrating how AI is becoming an integral, almost invisible part of consumer technology.

In light of this evolution, multi-stakeholder collaboration is becoming increasingly important to the development and implementation of artificial intelligence. In her work, Adriana Hoyos, an economics professor at IE University, emphasizes the importance of partnerships between governments, businesses, and individual citizens in promoting responsible innovation. She cites Microsoft's collaboration with OpenAI as one example of how AI accessibility can be broadened while maintaining ethical standards.

However, Hoyos also emphasizes that regulatory frameworks must evolve alongside technological advances so that progress remains aligned with, and protective of, the public interest. She also identifies big data analytics, green technologies, cybersecurity, and data encryption as areas that will play an important role in the future.

Within organizations, AI is increasingly used to enhance human capabilities and productivity rather than to replace human labor. The shift is evident in areas such as AI-assisted software development, where AI supports human creativity and technical expertise but does not replace it. Humans and machines complementing one another is redefining what it means to be "collaboratively intelligent," a vision articulated by AI scholar David De Cremer and chess grandmaster Garry Kasparov.

Achieving this vision will require forward-looking leadership able to cultivate diverse, inclusive teams and create an environment in which technology and human insight work together effectively. As AI continues to evolve, businesses are encouraged to focus on capabilities rather than specific technologies when navigating the landscape. Organizations that leverage AI to automate processes, extract insights from data, and enhance employee and customer engagement stand to gain significant advantages in productivity, efficiency, and growth.

Furthermore, responsible adoption of new technologies demands an understanding of privacy, security, and ethics, as well as the impact of these technologies on the workforce. As AI becomes more mainstream, a collaborative approach will become increasingly important to ensure that it not only drives innovation but also maintains social trust and equity.

Apple and Google App Stores Host VPN Apps Linked to China, Face Outrage


Google (GOOGL) and Apple (AAPL) are under harsh scrutiny after a recent report disclosed that their app stores host VPN applications associated with Qihoo 360, a Chinese cybersecurity firm that the U.S. government has blacklisted. The Financial Times reports that five VPNs still available to U.S. users, such as VPN Proxy Master and Turbo VPN, are linked to Qihoo, which was sanctioned in 2020 over alleged military ties.

Illusion of privacy: VPNs collecting data

In 2025 alone, three of the VPN apps have had over a million downloads on Google Play and Apple's App Store, suggesting these aren't small-time apps, Sensor Tower reports. They are advertised as "private browsing" tools, but the VPNs give the companies behind them complete visibility into users' online activity. This is alarming because China's national security laws mandate that companies hand over user data if the government demands it.

Concerns around ownership structures

The intricate web of ownership raises important questions: the apps are run by Singapore-based Innovative Connecting, owned by Lemon Seed, a Cayman Islands firm. Qihoo acquired Lemon Seed for $69.9 million in 2020 and claimed to have sold the business months later, but the FT reports that the China-based team making the applications remained under Qihoo's umbrella for years. According to the FT, a developer said, "You could say that we're part of them, and you could say we're not. It's complicated."

Amid outrage, Google and Apple respond 

Google said it strives to follow sanctions and removes violators when found. Apple removed two apps, Snap VPN and Thunder VPN, after the FT contacted the company, saying it follows strict rules on VPN data-sharing.

Privacy scare can damage stock valuations

What Google and Apple face is more than public outrage. Investors prioritise data privacy, and regulatory threats have increased, especially with growing concerns around U.S. tech firms' links to China. If the U.S. government gets involved, it could mean stricter rules, fines, and even more app removals. If that happens, shareholders won't be happy.

According to FT, “Innovative Connecting said the content of the article was not accurate and declined to comment further. Guangzhou Lianchuang declined to comment. Qihoo and Chen Ningyi did not respond to requests for comment.”

Google sets new rules to improve internet safety through better website security

 




Google is taking major steps to make browsing the web safer. As the company behind Chrome, the most widely used internet browser, Google’s decisions shape how people all over the world experience the internet. Now, the company has announced two new safety measures that focus on how websites prove they are secure.


Why is this important?

Most websites use something called HTTPS. This means that the connection between your device and the website is encrypted, keeping your personal data private. To work, HTTPS relies on digital certificates that prove a website is real and trustworthy. These certificates are issued by special organizations called Certificate Authorities.
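For readers curious what such a certificate looks like, the short Python sketch below opens a TLS connection and prints a few fields of the certificate a site presents; the host name is just an example, and the trusted Certificate Authorities come from the operating system.

```python
# Minimal sketch: open a TLS connection and inspect the certificate a site presents.
# The host name is an example; any HTTPS site works.
import socket
import ssl

host = "www.google.com"
context = ssl.create_default_context()  # uses the system's trusted Certificate Authorities

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("issued to:", dict(item[0] for item in cert["subject"]))
print("issued by:", dict(item[0] for item in cert["issuer"]))
print("valid until:", cert["notAfter"])
```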

But hackers are always looking for ways to cheat the system. If they manage to get a fake certificate, they can pretend to be a real website and steal information. To prevent this, Google is asking certificate providers to follow two new safety processes.


The first method: double-checking website identity (MPIC)

Google is now supporting something called MPIC, short for Multi-Perspective Issuance Corroboration. This process adds more layers of checking before a certificate is approved. Right now, website owners only need to show they own the domain once. But this can be risky if someone finds a way to fake that proof.

MPIC solves the issue by using several different sources to confirm the website’s identity. Think of it like asking multiple people to confirm someone’s name instead of just asking one. This makes it much harder for attackers to fool the system. The group that oversees certificate rules has agreed to make MPIC a must-follow step for all providers.
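The sketch below imitates the multi-perspective idea in a simplified way: it asks several independent public DNS resolvers for the same record and only treats the proof as consistent if they all agree. It uses the dnspython package and public resolver addresses as assumptions; real MPIC deployments run checks from separate network vantage points, not just different resolvers on one machine.

```python
# Simplified illustration of multi-perspective checking using dnspython.
# Real MPIC validates from distinct network vantage points; querying several
# public resolvers from a single machine only approximates the idea.
import dns.resolver

DOMAIN = "example.com"                          # hypothetical domain being validated
RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]   # independent public resolvers

def lookup_a_records(nameserver: str) -> set[str]:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    answer = resolver.resolve(DOMAIN, "A")
    return {rr.to_text() for rr in answer}

results = {ns: lookup_a_records(ns) for ns in RESOLVERS}

if len({frozenset(v) for v in results.values()}) == 1:
    print("All perspectives agree:", results[RESOLVERS[0]])
else:
    print("Perspectives disagree; validation should fail:", results)
```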


The second method: scanning certificates for errors (linting)

The second change is called linting. This is a process that checks each certificate to make sure it’s made properly and doesn’t have mistakes. It also spots certificates that use outdated or weak encryption, which can make websites easier to hack.

Linting helps certificate providers stick to the same rules and avoid errors that could lead to problems later. Google has mentioned a few free tools that can be used to carry out linting, such as zlint and certlint. Starting from March 15, 2025, all new public certificates must pass this check before they are issued.
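As a toy version of what a linter looks for, the sketch below uses the Python cryptography package (an assumption; zlint and certlint are the tools Google actually mentions) to flag a weak RSA key, an outdated signature hash, or an overlong validity period in a PEM certificate. The thresholds are simplified assumptions, not the full rule set real linters enforce.

```python
# Toy lint-style checks on a PEM certificate using the `cryptography` package.
# Thresholds are simplified assumptions; real linting should use zlint or certlint.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def lint_certificate(pem_bytes: bytes) -> list[str]:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    problems = []

    public_key = cert.public_key()
    if isinstance(public_key, rsa.RSAPublicKey) and public_key.key_size < 2048:
        problems.append(f"RSA key too small: {public_key.key_size} bits")

    if cert.signature_hash_algorithm and cert.signature_hash_algorithm.name in ("md5", "sha1"):
        problems.append(f"Weak signature hash: {cert.signature_hash_algorithm.name}")

    lifetime = cert.not_valid_after - cert.not_valid_before
    if lifetime.days > 398:
        problems.append(f"Validity period too long: {lifetime.days} days")

    return problems

if __name__ == "__main__":
    with open("certificate.pem", "rb") as f:  # hypothetical input file
        for problem in lint_certificate(f.read()) or ["No issues found"]:
            print(problem)
```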


What this means for internet users

These changes are part of Google’s ongoing plan to make the internet more secure. When websites follow these new steps, users can be more confident that their information is safe. Even though these updates happen in the background, they play a big role in protecting people online.



Massive Data Breach at Samsung Exposes 270,000 Records

 


During the analysis of the Samsung Germany data breach, a wide range of sensitive information was found to be compromised, including customer names, addresses, email addresses, order history, and internal communications. The findings come from a report by cybersecurity firm Hudson Rock, which examined the breach and the events that led to it. According to Hudson Rock, the initial point of compromise dates back to 2021, when an infostealer malware infection occurred on the computer of an employee at Spectos GmbH, a third-party IT service provider.

Spectos' software for monitoring and improving service quality is directly integrated with Samsung Germany's customer service infrastructure via the domain samsung-shop.spectos.com. Access to Samsung Germany's systems was gained using credentials that had previously been compromised by the Raccoon Infostealer malware. This strain of malware is known to harvest large amounts of sensitive data from infected machines, including usernames, passwords, browser cookies, and auto-fill information.

The credentials in question were stolen in 2021 from the device of a Spectos GmbH employee. Because no security practices such as password rotation or revocation protocols were in place, the login information remained valid and exploitable for nearly four years after the lapse. Cybercriminals exploited these outdated credentials to gain unauthorized access, underscoring the ongoing risks posed by poorly managed third-party access.

It was only after sitting unused for roughly four years that the login information was exploited by a threat actor operating under the name "GHNA." Using the long-abandoned credentials, the attacker gained access to a Spectos system linked to Samsung Germany, and approximately 270,000 customer service tickets were exposed and subsequently leaked.

The incident highlights the significant cybersecurity risks associated with third-party access, and the importance of regular credential audits, access reviews, and robust identity management practices cannot be overstated. The investigation into the breach is ongoing, with a particular focus on determining its extent and implementing remedial measures to prevent similar incidents in the future.
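The breach also illustrates why a routine audit of credential age matters. Below is a minimal sketch, assuming a hypothetical inventory of service accounts with last-rotation dates and an arbitrary 90-day limit, that flags anything overdue for rotation; real audits would pull this data from an identity provider or secrets manager.

```python
# Minimal sketch of a credential-age audit over a hypothetical account inventory.
# The account names, data source, and 90-day threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)

# In practice this would come from an identity provider or secrets manager.
credentials = [
    {"account": "vendor-monitoring", "last_rotated": datetime(2021, 3, 1, tzinfo=timezone.utc)},
    {"account": "support-dashboard", "last_rotated": datetime(2025, 1, 10, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
for cred in credentials:
    age = now - cred["last_rotated"]
    if age > MAX_AGE:
        print(f"ROTATE: {cred['account']} last rotated {age.days} days ago")
```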

A growing trend in cyberattacks is the exploitation of poorly managed but still valid credentials, which lets malicious actors infiltrate systems and escape detection. It is particularly concerning that the compromised credentials remained valid for so long in this case, suggesting that access governance and credential lifecycle management were not effective enough. Hudson Rock stated in its report that if proactive measures had been taken, "this incident would not have occurred."

The fact that outdated credentials were still active after several years of inactivity points to a serious lapse in security hygiene. A chance to mitigate the threat was missed, and the damage has been considerable. The incident serves as a cautionary example of how vital it is to regularly update login credentials, conduct access reviews, and implement strong third-party risk management practices. In a recent interview, Deepwatch's Chief Information Security Officer, Chad Cragle, stressed the importance of protecting credentials from compromise, calling compromised credentials "a time bomb" that can be exploited at any moment if not addressed proactively.

The warning follows the Samsung Germany data breach, which has raised serious concerns about identity security and third-party system access. Industry experts are emphasizing the importance of enhanced security controls, especially for managing external partners' access to systems. The ongoing investigation makes it increasingly clear that organizations need stricter oversight of outdated or exposed login credentials and more resilient frameworks to mitigate these threats.

With the rapid adoption of AI-driven technologies and cloud infrastructure, the cybersecurity landscape continues to grow more complex. While these technological advances offer significant operational benefits, they also introduce vulnerabilities that cybercriminals are increasingly adept at exploiting. In particular, advances in artificial intelligence let threat actors make use of leaked data even more effectively, putting a greater burden on organizations to strengthen their security systems and safeguard customer data.

In recent years, Samsung has come under greater scrutiny over its cybersecurity posture. The company drew significant attention in 2023 after employees accidentally leaked sensitive internal code while using generative AI tools like ChatGPT. Such incidents point to persistent gaps in Samsung's security governance and suggest the company needs a more rigorous, forward-looking approach to data protection.

To prevent similar breaches, businesses need a multi-layered security strategy that includes regular credential audits, robust identity and access management, continuous monitoring, and secure integration practices for third-party vendors. Heath Renfrow, Co-Founder and Chief Information Security Officer of Fenix24, said Spectos GmbH likely did not have adequate monitoring mechanisms in place to identify anomalous activity linked to the compromised credentials.

Many organizations emphasize detecting external threats and suspicious behaviours when conducting risk assessments, but they often underestimate the risks posed by valid credentials that have been silently compromised, he said. When credentials are associated with routine or administrative operations, such as service monitoring or quality management, unauthorized access blends in with expected activity and can be difficult to detect. Renfrow also pointed out that cybercriminals are often extremely patient and may delay taking action until conditions are optimal.

Attackers may quietly observe the network, escalate privileges over time, or wait for opportune moments, such as during broader security incidents, when their actions are least likely to be noticed or will have maximum impact. Samsung Germany is warning customers to take extra care with unsolicited messages, particularly if they have previously interacted with its customer service.

In general, security professionals recommend avoiding unfamiliar links, monitoring accounts for unusual activity, and following best practices for online safety, including using strong, unique passwords and enabling two-factor authentication. The incident highlights a persistent weakness in cybersecurity strategy: failure to properly manage and rotate login credentials. Hudson Rock co-founder Alon Gal noted that organizations can avoid attacks of this kind by following a strong credential hygiene policy and continuously monitoring access to their systems.

“Infostealers do not have to break down the doors,” Gal stated. According to the cybersecurity community, artificial intelligence could accelerate the exploitation of such breaches: AI-driven tools can identify valuable data within leaked records, prioritize high-risk targets, and launch follow-up attacks more rapidly and accurately than before. The breach has also shown how quickly freely circulating sensitive data can be weaponized, amplifying the threat for Samsung and its affected customers.