A critical vulnerability in Google Cloud's Document AI service could have allowed cybercriminals to steal sensitive information from users' cloud storage accounts and even inject malware, cybersecurity experts have warned.
The flaw was first discovered by researchers at Vectra AI, who reported it to Google in April 2024. Document AI is a suite of machine learning tools that automates the extraction, analysis, and processing of documents, converting unstructured files like invoices and contracts into structured data to streamline workflows.
The issue arose during the batch processing of documents, a feature that automates large-scale document analysis. Instead of using the caller’s permissions, the system relied on broader permissions granted to a "service agent," a Google-managed entity responsible for processing tasks. This created a security gap, allowing a malicious actor with access to a project to potentially retrieve and modify any files stored in the associated Google Cloud Storage buckets.
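The permission mismatch described above is a classic confused-deputy problem: a privileged intermediary performs an action on behalf of a caller without checking the caller's own rights. The toy Python sketch below illustrates the pattern; all names (`Bucket`, `Identity`, `BatchProcessor`) are invented for illustration and are not Google Cloud APIs.

```python
# Conceptual sketch of the permission gap, not actual Google Cloud code.

class Identity:
    """An identity with a fixed set of readable object paths."""
    def __init__(self, name, readable):
        self.name = name
        self.readable = set(readable)


class Bucket:
    """A toy stand-in for a storage bucket: path -> file contents."""
    def __init__(self, objects):
        self.objects = dict(objects)

    def read(self, identity, path):
        if path not in identity.readable:
            raise PermissionError(f"{identity.name} cannot read {path}")
        return self.objects[path]


class BatchProcessor:
    """Processes documents using the service agent's identity rather than
    the caller's -- the confused-deputy pattern at the heart of the flaw."""
    def __init__(self, service_agent):
        self.service_agent = service_agent

    def process(self, caller, bucket, path):
        # The caller's own permissions are never consulted; the agent's
        # broad access is used instead (mirroring the vulnerability).
        return bucket.read(self.service_agent, path)


bucket = Bucket({"invoices/q1.pdf": b"public", "secrets/keys.txt": b"private"})
agent = Identity("service-agent", readable=bucket.objects)          # broad access
attacker = Identity("low-priv-user", readable={"invoices/q1.pdf"})  # narrow access

processor = BatchProcessor(agent)

# A direct read as the low-privileged caller is denied...
denied = False
try:
    bucket.read(attacker, "secrets/keys.txt")
except PermissionError:
    denied = True

# ...but routing the same read through the batch service succeeds.
leaked = processor.process(attacker, bucket, "secrets/keys.txt")
```

In this model, `leaked` holds data the caller could never read directly, which is why the researchers flagged the service-agent design as a security gap: the fix is to evaluate access against the caller's permissions, not the deputy's.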
Vectra AI researchers provided a proof of concept demonstrating how an attacker could exfiltrate and alter a PDF file before reuploading it to its original location. Although Google released a patch and labeled the issue "fixed" soon after, the researchers criticized the initial fix as inadequate.
In response to continued pressure, Google implemented a more comprehensive fix in September 2024, downgrading the service agent's permissions and limiting its access to affected projects.