Chinese AI startup DeepSeek left a critical database exposed online, allowing unauthorized access to sensitive information, including chat logs and secret keys. The unprotected ClickHouse database could have given anyone access to sensitive internal data, according to cybersecurity firm Wiz.
The database contained over a million lines of log streams, including chat histories, secret keys, backend details, and API secrets. Wiz researcher Gal Nagli warned that the misconfiguration permitted full control over database operations, opening the door to privilege escalation within DeepSeek’s environment.
Attackers could have exploited ClickHouse’s HTTP interface to run arbitrary SQL queries from an ordinary web browser, creating risks of data breaches and privilege escalation. DeepSeek fixed the issue after Wiz contacted the company, but it remains unclear whether cybercriminals accessed or downloaded the exposed data before then.
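ClickHouse’s HTTP interface (by default on port 8123) accepts SQL directly in a URL query parameter, which is why an unauthenticated instance is queryable from any browser. A minimal sketch of how such a request URL is formed; the hostname below is hypothetical, not DeepSeek’s actual endpoint:

```python
from urllib.parse import urlencode

def clickhouse_query_url(host: str, query: str, port: int = 8123) -> str:
    """Build the URL an exposed ClickHouse HTTP interface would accept.
    When authentication is disabled, no credentials are needed at all."""
    return f"http://{host}:{port}/?{urlencode({'query': query})}"

# Listing tables on a hypothetical exposed host is a single GET request:
url = clickhouse_query_url("db.example.com", "SHOW TABLES")
print(url)  # http://db.example.com:8123/?query=SHOW+TABLES
```

Pasting such a URL into a browser is enough to run the query, which is what makes this class of misconfiguration so easy to exploit.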
AI Boom Raises Security Concerns
DeepSeek has gained rapid popularity for its open-source AI models, which it claims rival leading systems such as OpenAI’s. Its AI chatbot has topped app store charts across multiple countries. However, the company has also faced large-scale cyberattacks, forcing it to pause new registrations.
Nagli emphasized that rapid AI adoption without robust security measures is a major risk. “While many focus on futuristic AI threats, basic security failures like database misconfigurations pose immediate dangers,” he stated.
Privacy and National Security Scrutiny
DeepSeek has drawn regulatory scrutiny over its privacy policies. Italian authorities recently asked the company to clarify its data handling practices. Shortly after, DeepSeek’s apps became unavailable in Italy. Ireland’s Data Protection Commission has also requested similar information.
In the U.S., DeepSeek’s Chinese connections have raised national security alarms. Reports from Bloomberg, the Financial Times, and The Wall Street Journal suggest OpenAI and Microsoft are investigating whether DeepSeek unlawfully queried OpenAI’s API to train its own models on the outputs, a technique known as distillation.
An OpenAI spokesperson told The Guardian that some entities in China are actively working to replicate U.S. AI models through distillation. These allegations add to the growing concerns about AI security, intellectual property rights, and geopolitical tensions in the tech sector.
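Distillation, in this sense, trains a smaller “student” model to imitate a larger “teacher” model’s output distributions. A minimal illustration of the core loss term in the standard knowledge-distillation formulation; this is a generic sketch, not DeepSeek’s or OpenAI’s actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; a higher
    temperature softens the distribution, exposing more of the
    teacher's relative preferences between classes."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    output distributions -- the quantity the student minimizes."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

loss = distillation_loss([2.0, 1.0, 0.1], [1.8, 1.1, 0.2])
```

The loss is zero only when the student reproduces the teacher’s distribution exactly, which is why access to a teacher model’s outputs (for instance, via an API) is central to the technique.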
Lack of Cybersecurity in AI Development
Experts warn that the exposure highlights the dangers of weak security measures in AI startups. The flaw in DeepSeek’s ClickHouse database reveals a critical gap in the sector’s cybersecurity practices: the database was left exposed without authentication, allowing unauthorized access to highly sensitive data, including secret keys, API information, and backend details. This kind of oversight is alarming, especially in a field that processes vast amounts of user data.
Despite AI’s advancements, companies continue to overlook basic security risks. Many focus on developing sophisticated models but fail to implement fundamental protections like encrypted storage and strict access controls. As AI adoption accelerates, such lapses can lead to data breaches, identity theft, and unauthorized model training by malicious actors. DeepSeek’s situation is a reminder that innovation must be balanced with security.
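One of the “fundamental protections” at issue is never storing secrets such as API keys in plaintext logs or databases in the first place. A minimal sketch of that practice using only Python’s standard library; the key value shown is hypothetical:

```python
import hashlib
import hmac
import secrets

def hash_api_key(api_key: str, salt: bytes) -> bytes:
    """Store only a salted, slow hash of a secret, never the plaintext --
    so a leaked database does not directly leak usable credentials."""
    return hashlib.pbkdf2_hmac("sha256", api_key.encode(), salt, 100_000)

def verify_api_key(candidate: str, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(hash_api_key(candidate, salt), stored)

salt = secrets.token_bytes(16)
stored = hash_api_key("sk-hypothetical-key", salt)
print(verify_api_key("sk-hypothetical-key", salt, stored))  # True
print(verify_api_key("wrong-key", salt, stored))            # False
```

Had DeepSeek’s log streams held only hashed or redacted secrets, the exposure would have revealed far less than the plaintext API secrets Wiz reported finding.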
Regulatory and Ethical Challenges
DeepSeek’s database exposure has intensified regulatory scrutiny that was already under way. Italian and Irish data protection agencies are investigating the company’s data collection and storage practices, and the sudden removal of its app from the Italian market suggests possible compliance issues. As AI regulations tighten worldwide, companies like DeepSeek must ensure transparency and compliance to avoid legal consequences.
Moreover, allegations that DeepSeek may have used OpenAI’s API without permission add another layer of ethical concern. If true, it raises questions about intellectual property rights and fair competition in AI development.