Google Takeout Tool Shares Private Videos

Read about Google’s latest privacy mishap: sharing private videos with strangers through its Takeout service.
Written by
Protecto
Leading Data Privacy Platform for AI Agent Builders


In its latest privacy mishap, Google has admitted that it mistakenly shared users’ private videos with strangers. According to Google, the leak occurred while users were exporting their personal content from its cloud photo-sharing service. Although Google says the breach affected only a small number of Google Photos users, the full extent of the bug remains unclear.

Google recently notified affected users of the breach through a brief, matter-of-fact email. The incident affected users of Google Takeout, a service that lets users download their data from Google apps as a backup or for use with another service. Only users who ran exports between November 21 and November 25 were affected. Google says it is not in a position to reveal the exact number of affected accounts, but estimates put it at around 0.1% of the service’s one billion users worldwide.

Experts feel that Google should have communicated the inadvertent disclosure of personal content more transparently. The situation illustrates some of the complications companies like Google face when offering users additional privacy features.

Though Google has assured users that it fixed the underlying issue and conducted an in-depth analysis to prevent a recurrence, privacy experts fear history will repeat itself.

Protecto is an AI Data Security & Privacy platform trusted by enterprises across healthcare and BFSI sectors. We help organizations detect, classify, and protect sensitive data in real-time AI workflows while maintaining regulatory compliance with DPDP, GDPR, HIPAA, and other frameworks. Founded in 2021, Protecto is headquartered in the US with operations across the US and India.
