Starting in December, Google Maps Timeline will retain user location data for a maximum of 180 days. Once that period ends, the data is erased from Google's cloud servers.
Under the new policy, Google stores a user's movements and whereabouts for only six months. Users can opt to keep the data on a personal device, but the cloud copy is permanently deleted from Google's servers.
The change is a welcome one. Smartphones constantly trade privacy for convenience in how they store data, and few categories of data are as sensitive as location history.
Users can change settings to suit themselves, but the majority stick with the defaults. The problem arises when Google uses that data to generate insights (based on anonymized location data) or to improve services such as its ads products.
The Google Maps Timeline change addresses longstanding questions about data privacy and security. Its benefits include:
Better privacy: By limiting how long location data is stored in the cloud, Google reduces the potential for misuse. A shorter retention window means less historical data is exposed to threat actors in the event of a breach.
More control for users: Having the option to retain location data on their own devices gives users ownership of their personal data. They can choose whether to delete their location history or keep it.
Accountability from Google: The move is a positive sign toward building transparency and trust, showing a commitment to user privacy.
There are drawbacks as well:
Service impact: Google features that rely on location history for tailored suggestions may be affected, and users could see less accurate location-based recommendations and targeted ads.
Data retention: For users who prefer to keep their history longer, the change is a drawback. They will have to back up the data themselves if they want to retain it beyond 180 days.
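A self-backup routine mostly amounts to checking timestamps against the retention cutoff. The sketch below illustrates that logic, assuming a simple list of records with ISO-8601 timestamps; the record shape is hypothetical, not the actual format of a Google Takeout export.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # the new cloud retention window

def records_to_back_up(records, now=None):
    """Return location records older than the 180-day cloud retention
    window, i.e. the ones a user must archive locally before deletion.

    Assumes each record is a dict with an ISO-8601 'timestamp' field
    (an illustrative shape, not Google's actual export schema).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - RETENTION
    return [r for r in records
            if datetime.fromisoformat(r["timestamp"]) < cutoff]

records = [
    {"timestamp": "2024-01-01T12:00:00+00:00", "lat": 52.52, "lon": 13.40},
    {"timestamp": "2024-06-20T12:00:00+00:00", "lat": 48.85, "lon": 2.35},
]
now = datetime(2024, 7, 1, tzinfo=timezone.utc)
old = records_to_back_up(records, now=now)
# Only the January record falls outside the 180-day window.
```

Run on a schedule shorter than the retention window, a check like this ensures no record expires from the cloud before it has been copied locally.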
According to Forrester, the public cloud market is set to reach $1 trillion by 2026, with the lion's share of investment directed to the big four: Alibaba, Amazon Web Services, Google Cloud, and Microsoft.
In the wake of the pandemic, businesses hastened their cloud migrations and reaped the rewards: cloud services sped up innovation, offered elasticity to adjust to changing demand, and scaled with expansion. Even as the C-suite cuts spending in other areas, there is clearly no going back. Demand is especially high for platform-as-a-service (PaaS), expected to reach $136 billion in 2023, and infrastructure-as-a-service (IaaS), expected to reach $150 billion.
Still, this rapid growth, which caught business strategists and technologists by surprise, has its downsides. If organizations do not take the essential steps to secure public cloud data, the risks are likely to grow considerably.
The challenge posed by "shadow data" (unknown, unmanaged public cloud data) stems from several issues. Business users are building their own applications, and developers constantly spin up new instances of their code to build and test features. Many of these services store and use critical data without the knowledge of IT and security staff. Versioning, which keeps multiple versions of an object in the same cloud bucket, adds further risk if policies are not set up correctly.
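One concrete mitigation for the versioning risk is a lifecycle rule that expires noncurrent object versions, so stale copies do not linger indefinitely as shadow data. The helper below builds such a rule as a plain dict in the shape AWS S3's lifecycle configuration uses; the prefix and retention period are illustrative choices.

```python
def noncurrent_version_expiry_rule(prefix, days):
    """Build an S3-style lifecycle rule that expires noncurrent object
    versions after `days` days, so old versions in a versioned bucket
    do not accumulate unmanaged.

    The dict shape follows AWS S3's lifecycle configuration document;
    it would be applied with a call such as boto3's
    put_bucket_lifecycle_configuration.
    """
    return {
        "ID": f"expire-noncurrent-{prefix or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "NoncurrentVersionExpiration": {"NoncurrentDays": days},
    }

# Expire superseded versions under logs/ 30 days after replacement.
rule = noncurrent_version_expiry_rule("logs/", 30)
```

Rules like this keep versioning's rollback benefits while bounding how long forgotten copies of sensitive objects survive.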
Unmanaged data repositories are frequently overlooked as the pace of innovation quickens. And when third parties or unrelated individuals are granted excessive access privileges, sensitive data that is otherwise adequately secured can be moved or copied to an unsafe location, or simply left exposed.
A large number of security experts (82%) are aware of, and concerned about, the growing public cloud data security problem. These professionals can swiftly help minimize the hazards by doing the following:
A next-generation public cloud data security platform lets teams automatically discover all of their cloud data, not just known or tagged assets. It detects every cloud data store, including managed and unmanaged assets, virtual machines, shadow data stores, data caches and pipelines, and big data. The platform uses this information to build a comprehensive, unified data catalog across the enterprise's multi-cloud environments. All sensitive data, including PII, PHI, and payment card industry (PCI) transaction data, is carefully identified and classified in the catalog.
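At its simplest, the classification step pattern-matches discovered content against sensitivity categories. The sketch below is a deliberately minimal illustration; a real platform would use far richer detectors, validation (e.g. Luhn checks on card numbers), and ML-based classifiers.

```python
import re

# Minimal illustrative detectors keyed by sensitivity label.
# Real scanners use many more patterns plus contextual validation.
DETECTORS = {
    "PII/email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PII/ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PCI/card":  re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(text):
    """Return the set of sensitivity labels whose patterns match `text`."""
    return {label for label, rx in DETECTORS.items() if rx.search(text)}

labels = classify("contact: jane@example.com, card 4111 1111 1111 1111")
# Both an email address and a card-shaped number are flagged.
```

The resulting labels are what get attached to each asset in the unified data catalog, so downstream policy checks can reason about sensitivity rather than raw content.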
With complete insight into their sensitive cloud data, security teams can apply and enforce the proper security policies and verify data settings against their organization's specified guardrails. Public cloud data security can help expose complex policy violations, which in turn helps prioritize remediation based on data sensitivity, security posture, volume, and exposure.
This process, known as data security posture management (DSPM), offers recommendations customized to each cloud environment, making them more effective and relevant.
Teams can then begin securing sensitive data without disrupting business operations. A public cloud data security platform will prompt teams to implement best practices, such as enabling encryption and restricting third-party access, and to practice better data hygiene by removing unnecessary sensitive data from the environment.
Moreover, security teams can use the platform for continuous monitoring. That way, they can efficiently identify policy violations and ensure that public cloud data follows the firm's stated guidelines and security posture, no matter where it is stored, used, or transferred in the cloud.
Advances in technology have made it possible for malicious actors to invade users' systems in just a few steps. Cloud security is one discipline that has steadily evolved to keep users' data fortified against those threat actors.