
Replicate AI Flaw Enables Attackers To Access Customer Data

Replicate AI contained a significant flaw that could have allowed attackers to run a malicious AI model within the platform and mount a cross-tenant attack. The vulnerability may have exposed sensitive information or proprietary data, giving attackers access to customers’ private AI models.

Users can create and deploy customised models with a single line of code on the Replicate website, and they can easily share and interact with AI models deployed at scale. The platform allows uploading one’s own models, sharing pre-existing models on the hub, and customising them for specific use cases.

The Wiz researchers worked on a platform security research project with partners that provide AI-as-a-Service. The project highlighted the difficulty of enforcing effective tenant separation in such solutions, especially when AI models come from untrusted sources.

The Wiz Research team responsibly disclosed the issue to Replicate in January 2024, and Replicate quickly resolved it. No user data was breached, and customers do not need to take any action.

Replicate Vulnerability Leads to Cross-Tenant Data Exposure

Malicious AI models pose a significant risk to AI systems, particularly for AI-as-a-Service providers. Attackers can use AI models for cross-tenant attacks because models are often distributed in formats that allow arbitrary code execution.
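
The risk is easiest to see with serialisation formats such as Python’s pickle, which many ML frameworks use to store model weights. The sketch below is a generic, hypothetical illustration of that general point (it is not the payload Wiz used against Replicate), showing how simply loading such a file can execute attacker-controlled code:

```python
# A minimal sketch of why AI model files can carry executable code:
# pickle-based formats run arbitrary callables during deserialisation.
import pickle


class EvilPayload:
    def __reduce__(self):
        # When unpickled, this tells pickle to call print(...).
        # A real attacker could substitute any command here.
        return (print, ("arbitrary code ran while loading the 'model'",))


malicious_model = pickle.dumps(EvilPayload())

# The victim believes they are only loading model weights...
pickle.loads(malicious_model)  # ...but the payload executes immediately.
```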

The vulnerability relates to a malicious Cog-formatted container used to execute remote code on Replicate’s infrastructure. Replicate uses its own Cog format for containerising models: users package a model with Cog, upload the resulting container image to Replicate’s platform, and then interact with the containerised model.
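
For context, the sketch below shows roughly what a Cog-packaged model looks like, based on Cog’s public documentation (the names and logic here are illustrative, not taken from the vulnerable container). The key point is that the predictor is ordinary Python, so a hostile model author can place arbitrary code in it:

```python
# predict.py -- roughly the shape of a Cog model, per Cog's public docs.
# A cog.yaml file alongside it points Replicate at "predict.py:Predictor".
# Whatever Python is placed in setup() or predict() runs inside the
# container on the provider's infrastructure, which is why a malicious
# model is effectively a remote-code-execution primitive.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self) -> None:
        # Normally used to load model weights; an attacker can run anything here.
        self.greeting = "model loaded"

    def predict(self, prompt: str = Input(description="Text prompt")) -> str:
        # Normally runs inference; again, this is arbitrary Python.
        return f"{self.greeting}: {prompt}"
```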

A Replicate Flaw Allows for Malicious Containers to Enable Remote Code Execution (Wiz)

Researchers from Wiz built a malicious Cog container, uploaded it to the platform, and used it to run code on Replicate’s infrastructure with root privileges, a technique similar to the one they had previously used against flaws on the Hugging Face platform. As the researchers wrote in their report:

“We suspect this code-execution technique is common. Companies and organizations often run AI models from untrusted sources, even though these models could be malicious.”

The researchers leveraged their foothold in a Kubernetes cluster hosted on Google Cloud Platform to examine the environment, move laterally, and ultimately escape the node. Despite these obstacles, they managed to carry out a cross-tenant attack that allowed them to view and alter the results of other customers’ models.

This method of code execution draws attention to a common problem: businesses and organizations run potentially dangerous AI models that they obtain from untrusted sources.

In earlier AI security research, Wiz researchers applied a similar method to Hugging Face: they demonstrated that uploading a malicious AI model to the managed AI inference service could enable lateral movement within the internal infrastructure.

A Comprehensive Illustration of Cross-Tenant Attacks on AI Models using Replicate AI

This disclosure comes just over a month after Wiz’s report on vulnerabilities in platforms such as Hugging Face, which could have enabled threat actors to escalate privileges, access other customers’ models, and compromise CI/CD pipelines.

Currently, verifying a model’s legitimacy and checking it for risks is difficult. Malicious AI models create a new attack surface that requires specialised mitigation measures.
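
As one concrete example of such a measure, a provider or consumer of models can statically inspect pickle-based model files for opcodes capable of triggering code execution before ever loading them. The sketch below is a simplified, hypothetical check of this kind, not Replicate’s or Wiz’s tooling; real scanners (and safer formats such as safetensors) go further:

```python
# A minimal sketch of one specialised mitigation: statically scanning a
# pickle-based model file for opcodes that can trigger code execution,
# instead of loading (and thereby executing) it.
import pickletools
import sys

# Opcodes that let a pickle import and call arbitrary objects.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}


def looks_unsafe(path: str) -> bool:
    """Return True if the pickle file uses any risky opcodes."""
    with open(path, "rb") as f:
        data = f.read()
    found = {op.name for op, _, _ in pickletools.genops(data)}
    return bool(found & RISKY_OPCODES)


if __name__ == "__main__":
    print("unsafe" if looks_unsafe(sys.argv[1]) else "no risky opcodes found")
```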

In addition, cloud providers that host customer models in shared environments must enforce rigorous tenant isolation. This ensures that even if an attacker manages to run an untrusted model, they cannot access other customers’ data or compromise the service.

Phishing Tackle provides training course videos that cover different types of artificial intelligence threats, information on the risks posed by artificial intelligence, and tips on common attacks. We offer a free 14-day trial to help train your users to avoid these types of attacks and to test their knowledge with simulated phishing attacks.
