A massive breach involving ~9.7 million customers of Medibank

Read about the fallout of a massive data breach involving customers of Medibank.
Written by
Protecto


A massive breach (~9.7 million customers) at Medibank continues to escalate. Hackers have leaked information about 200 customers as a warning shot.

A massive data breach has rocked Australian health insurance company Medibank within a month of the government passing a resolution to toughen up data privacy laws and impose heavy penalties for data breaches. This all started when a ransomware attack group stole the personal information of about 9.7 million Medibank customers. The stolen data includes extremely sensitive personal and medical information.

Since then, things have steadily gone from bad to worse for Medibank after the company refused to comply with the demands of the ransomware group, indicating that they do not believe that paying the attackers will prevent them from releasing personal information. As a result, the attackers have started leaking information on the dark web, releasing sensitive data.

In the first wave, the hackers leaked information about 200 Medibank customers. Names, passport numbers, and medical claim records have been disclosed. Worse still, the data includes numerical diagnosis codes that make it possible to link individuals to conditions like HIV, alcohol addiction, and drug addiction.

There is also concern about the details of high-profile customers being leaked, as the Australian Prime Minister and the Cybersecurity Minister have already been confirmed as victims of the breach.

Moreover, leaked negotiation screenshots reveal that the hackers have threatened to disclose decryption keys for customer credit card data, despite Medibank's insistence that no banking or credit card details were stolen.

The situation is devolving rapidly, with more data leaks expected soon. While Medibank has quickly rolled out a support system for potential victims, many will wonder whether the company is partly to blame for this scenario and should face sanctions.
