- Sep 18, 2023
The Microsoft AI research division accidentally leaked dozens of terabytes of sensitive data starting in July 2020 while contributing open-source AI models to a public GitHub repository.
Almost three years later, the exposure was discovered by cloud security firm Wiz, whose researchers found that a Microsoft employee had inadvertently shared the URL for a misconfigured Azure Blob storage container. The URL included an overly permissive Shared Access Signature (SAS) token, which granted access to the entire storage account rather than only the files intended for publication.
The Wiz research team found that, besides the open-source models, the internal storage account also inadvertently allowed access to 38 TB of additional private data.
The exposed data included backups of personal information belonging to Microsoft employees, including passwords for Microsoft services, secret keys, and an archive of more than 30,000 internal Microsoft Teams messages from 359 Microsoft employees.
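To illustrate the kind of misconfiguration at play, the sketch below (Python, using the azure-storage-blob v12 SDK; the account name, key, container, and blob names are all placeholders, not details from the incident) contrasts a long-lived, account-wide SAS token with one scoped read-only to a single blob:

```python
# A minimal sketch contrasting an overly permissive account-level SAS token
# with a narrowly scoped one. All names and keys here are hypothetical.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT = "examplestorageaccount"  # hypothetical storage account
KEY = "<account-key>"              # hypothetical account key

# Risky: a long-lived token covering every container in the account.
# Anyone holding a URL with this token can read, list, and even write
# all data in the account until the token expires.
risky_sas = generate_account_sas(
    account_name=ACCOUNT,
    account_key=KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),
)

# Safer: a read-only token scoped to one blob, expiring in an hour.
scoped_sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name="public-models",
    blob_name="model.ckpt",
    account_key=KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

url = f"https://{ACCOUNT}.blob.core.windows.net/public-models/model.ckpt?{scoped_sas}"
```

Scoping the token to a single blob with a short expiry limits the blast radius if the URL ever leaks.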
In an advisory published Monday, the Microsoft Security Response Center (MSRC) said that no customer data was exposed and that no other internal services were put at risk by the incident.
Wiz reported the incident to MSRC on June 22, 2023. Microsoft revoked the SAS token to block all external access to the Azure storage account on June 24, 2023, mitigating the issue.
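Microsoft has not detailed exactly how the token was revoked, but an ad-hoc SAS token signed with an account key cannot be withdrawn individually; it is typically invalidated by rotating the key that signed it. A minimal sketch, assuming the azure-mgmt-storage management SDK (the subscription ID, resource group, and account name are hypothetical):

```python
# Rotating a storage account key invalidates every ad-hoc SAS token
# that was signed with that key. Names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.regenerate_key(
    resource_group_name="example-rg",       # hypothetical resource group
    account_name="examplestorageaccount",   # hypothetical storage account
    regenerate_key={"key_name": "key1"},    # rotate the key that signed the SAS
)
```

After rotation, any previously issued URLs signed with the old key stop working, which is why key regeneration is the standard emergency response to a leaked SAS token.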