Customer Case Study: Protecto Enables Privacy-Preserving Processing of Sensitive Data with OpenAI
Written by
Amar Kanagaraj
Founder and CEO of Protecto

Customer Need

  • The customer, a global technology company, aimed to leverage OpenAI’s capabilities for processing sensitive data, specifically driver history and criminal records. Because these reports are sourced from agencies across the globe, they arrive in no standard format, which makes LLMs well suited to extracting the information.
  • Applying OpenAI was projected to deliver significant product gains and cost savings of over $10M annually.
  • The sensitive nature of the data demanded a sophisticated approach.

Challenge

  • The primary challenge was the sensitive nature of the data and the data residency requirements. Directly sending such sensitive information to large language models (LLMs) like those provided by OpenAI raised concerns about data privacy compliance.
  • To ensure compliance with privacy regulations, the customer wanted to retain sensitive PII data within the region where it originated.
  • Removing sensitive elements could strip the data of its core value, making it less useful for the intended processing and analysis.

Solution

  • Protecto’s solution involved intelligent tokenization, a method designed to identify and redact sensitive data elements while preserving the overall structure and utility of the data.
  • This process replaced sensitive information with format-preserving tokens. These tokens maintained the integrity and format of the original data, ensuring that its business value and utility remained intact.
  • Crucially, the tokenization process was tailored to work seamlessly with OpenAI. Protecto sent specific instructions to the LLM to ensure it could fully understand and process the tokenized data without loss of critical information.
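The idea of format-preserving tokenization can be illustrated with a minimal sketch. This is not Protecto's implementation; it is a hypothetical `FormatPreservingTokenizer` that replaces values matching caller-supplied PII patterns with random tokens of the same length and character classes, reusing tokens for repeated values so referential integrity survives, and keeping a mapping so the originals can be restored after the LLM responds:

```python
import re
import secrets
import string

class FormatPreservingTokenizer:
    """Illustrative tokenizer: swaps sensitive values for random tokens
    that keep the original length and character classes (digits stay
    digits, letters stay letters, separators like '-' stay put)."""

    def __init__(self):
        self._forward = {}   # original value -> token
        self._reverse = {}   # token -> original value

    def _token_for(self, value: str) -> str:
        # Reuse the same token for repeated values so cross-references
        # in the document remain consistent (referential integrity).
        if value in self._forward:
            return self._forward[value]
        chars = []
        for ch in value:
            if ch.isdigit():
                chars.append(secrets.choice(string.digits))
            elif ch.isupper():
                chars.append(secrets.choice(string.ascii_uppercase))
            elif ch.islower():
                chars.append(secrets.choice(string.ascii_lowercase))
            else:
                chars.append(ch)  # keep punctuation/separators intact
        token = "".join(chars)
        self._forward[value] = token
        self._reverse[token] = value
        return token

    def tokenize(self, text: str, patterns) -> str:
        """Replace every match of each regex pattern with its token."""
        for pattern in patterns:
            text = re.sub(pattern, lambda m: self._token_for(m.group(0)), text)
        return text

    def detokenize(self, text: str) -> str:
        """Restore original values in text returned by the LLM."""
        for token, original in self._reverse.items():
            text = text.replace(token, original)
        return text
```

Because the token matches the original's format, downstream parsing and validation logic still works on the masked text, and only the tokenized version ever leaves the region of origin; the mapping table, and therefore the raw PII, stays local.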

Outcome

  • Protecto allowed the client to safely and effectively use OpenAI’s LLMs for processing sensitive driver and criminal history data.
  • The solution ensured compliance with data residency and privacy regulations while retaining the full analytical value of the data.
  • The client could harness the power of advanced AI to process their data without compromising on data protection or utility.
Amar Kanagaraj
Founder and CEO of Protecto
Amar Kanagaraj, Founder and CEO of Protecto, is a visionary leader in privacy, data security, and trust in the emerging AI-centric world, with over 20 years of experience in technology and business leadership. Prior to Protecto, Amar co-founded Filecloud, an enterprise B2B software startup, where as CMO he put it on a trajectory to hit $10M in revenue.
