Chinese AI startup DeepSeek, known for its large language model DeepSeek-R1, has suffered a major security breach after two of its ClickHouse databases were left publicly exposed. Reportedly, over a million log entries were found in the compromised instances, including operational metadata, API keys, backend credentials, and unencrypted user conversation histories.
The security of quickly evolving AI systems has become a significant concern because of the incident. Shortly after the release of DeepSeek, cybersecurity researchers uncovered major vulnerabilities in its infrastructure.
Sensitive user data and sensitive information have been compromised, with experts warning that such data frequently finds up on the Dark Web, where it is exchanged or abused by criminals.
The situation escalated further after rumours of stolen internal files related to DeepSeek-V2, the company’s sophisticated open-source model. Reports indicate that source code and private training information has been compromised, and it may now be appearing on the Dark Web.
A security researcher at Wiz named Gal Nagli disclosed that the ClickHouse database’s exposure “allowed full control over operations, including access to internal data.” The cloud security team notified DeepSeek of the issue, which DeepSeek has subsequently fixed.
Deepseek’s Technical Failures: Insecure Database, Disabled ATS & Weak Encryption
An insecure database gave attackers complete access over DeepSeek’s critical operations, which caused the company’s security breach. Researchers discovered more than one million lines of log data, including conversation histories and personally identifiable information (PII) for over a million individuals. This massive release quickly captured the interest of fraudsters on Dark Web markets.
Worsening the risk, the DeepSeek iOS app had globally disabled App Transport Security. As a result, unencrypted user data was transmitted across the internet. Simultaneously, the app employed the outdated 3DES encryption method with hard-coded keys—allowing any intercepted traffic to be easily decoded.
Further investigation by SecurityScorecard’s Strike team revealed SQL injection vulnerabilities in DeepSeek’s backend. Along with faulty cryptographic privileges, these vulnerabilities gave hackers further ways to get user information and API credentials.
The DeepSeek-R1 model’s performance in security evaluations was particularly concerning. It failed 91% of jailbreaking tests and 86% of prompt-injection attacks, showing a concerning inability to handle basic exploit techniques.
Chat logs and API keys started to trade on dark web as valuable resources. Phishing sites simulate DeepSeek appeared almost quickly, attacking both user accounts and cryptocurrency wallets. Analysts warn that these resources are now fuelling a wave of attacks, including corporate espionage and large-scale scams.
AI use is rapidly increasing, but most businesses are unprepared to deal with it. The DeepSeek breach raises a clear warning that AI systems are vulnerable in the absence of sufficient risk control.
These technologies handle vast amounts of sensitive data in real-time, often via cloud platforms and open APIs. It just takes one vulnerability to lead to targeted attacks, financial losses, or data leaks for both people and companies.
Organisations must focus on their external attack surface, since external attackers are responsible for more than 80% of breaches. This involves continuous monitoring of internet-facing resources, including AI endpoints and associated infrastructure.
Security testing must be conducted on a continuous basis. Testing should not be restricted to “high-priority” systems. Include regular application security reviews, penetration tests, and AI-specific evaluations.
The DeepSeek incident should be seen as a wake-up call. Security must be integrated into AI development from the outset. Continuous monitoring and rigorous testing should be mandatory. The risks are simply too high to treat AI security as an afterthought—particularly as the Dark Web actively seeks to exploit any weakness.
Phishing Tackle provides training course videos that cover different types of artificial intelligence threats designed to counter the ever-changing risks posed by artificial intelligence. We offer a free 14-day trial to help train your users to help train your users in avoiding these types of attacks and to test their knowledge with simulated phishing attacks.