
DeepSeek AI: What Security Leaders Need to Know About Its Security Risks

By Mitangi Parekh

February 11, 2025 | 11 MINS READ


Artificial intelligence is evolving at a rapid pace, and organizations are increasingly looking for ways to leverage it without compromising security. DeepSeek AI, a Chinese-developed model, has gained attention for its efficiency, low cost, and strong performance. However, its rise has also sparked serious discussions around data privacy, cybersecurity vulnerabilities, and geopolitical risks.

The discussion around DeepSeek AI is not just about its performance. Security leaders must conduct a thorough security assessment and ask:

  • Where is the data going?
  • Who has access to it?
  • What risks does it pose to corporate security and regulatory compliance?

While some organizations are considering DeepSeek to cut AI-related costs, others (e.g., government regulators) are restricting or outright banning its use due to concerns about data sovereignty, outdated security guardrails, and the model’s ties to Chinese data-sharing laws.

For security teams and CISOs, the challenge is clear: Should DeepSeek AI be used at all? And if so, how can it be done safely?

What Makes DeepSeek AI Different?

At its core, DeepSeek AI stands out because it delivers performance comparable to top-tier AI models while being significantly cheaper to operate. For organizations concerned about the high costs associated with AI adoption, DeepSeek AI presents a cost-effective way to integrate large language models and other Generative AI tools into their workflows.

The Technological Edge of Efficiency and Cost Savings

DeepSeek uses an architecture called Mixture of Experts (MoE), a machine-learning technique that optimizes computational resources. Instead of activating the entire neural network for every query, MoE allows DeepSeek to dynamically activate only the necessary sub-networks to process a given request, significantly improving efficiency.
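
To make the routing idea concrete, here is a minimal, illustrative sketch of top-k MoE gating in PyTorch. This is not DeepSeek’s actual implementation (whose details are far larger and more complex); every name and size below is invented for illustration.

```python
# Minimal sketch of Mixture-of-Experts (MoE) top-k routing.
# Illustrative only -- not DeepSeek's architecture.
import torch
import torch.nn as nn


class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward sub-network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        # The router scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                         # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep only top-k experts
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # which is where the compute savings come from.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out


moe = TinyMoE()
tokens = torch.randn(16, 64)
print(moe(tokens).shape)  # torch.Size([16, 64])
```

Because only top_k of the n_experts sub-networks run for any given token, per-query compute scales with top_k rather than with total model size, which is the source of MoE’s efficiency gains.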

This efficiency translates into real-world cost savings. Some estimates suggest that DeepSeek’s inference costs are up to 96% lower than competitors like ChatGPT. For businesses that rely heavily on AI-driven analytics, automation, or customer interactions, this means a massive reduction in operational expenses.

Risks Associated with Synthetic Training Data

Unlike other Generative AI models that scrape publicly available information, DeepSeek uses synthetic training data; it generates additional training material using AI itself. This technique expands the amount of data available for training, which in theory may improve the model’s ability to generate high-quality responses.
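
As a rough illustration of the idea (DeepSeek’s actual training pipeline is undisclosed, so this is only a generic sketch), a “teacher” model can be prompted to write new examples that later train another model; gpt2 here is merely a stand-in teacher:

```python
# Minimal sketch of synthetic training-data generation: one model writes
# examples that will train another. Illustrative only -- DeepSeek's real
# pipeline is undisclosed; gpt2 is a stand-in teacher model.
from transformers import pipeline

teacher = pipeline("text-generation", model="gpt2")

seed_prompts = [
    "Q: What is phishing? A:",
    "Q: Why should credentials be rotated regularly? A:",
]

synthetic_examples = []
for prompt in seed_prompts:
    completion = teacher(prompt, max_new_tokens=40, num_return_sequences=1)
    # Each generated completion becomes a new, unverified training example;
    # any factual errors here compound into hallucinations downstream.
    synthetic_examples.append(completion[0]["generated_text"])

print(f"Generated {len(synthetic_examples)} synthetic training examples.")
```

Because nothing in this loop validates the generated answers, every mistake the teacher makes is baked directly into the training set.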

However, this approach introduces a higher risk of hallucinations. Since AI-generated content isn’t always factually accurate, there’s an increased likelihood that DeepSeek’s responses could include misleading or false information, especially in critical areas like cybersecurity, law, and medicine.

Moreover, although DeepSeek has positioned itself as an open-source model, its training data remains undisclosed. This lack of transparency means that security teams cannot fully assess what data the model was trained on, whether it contains inherent biases, or if it has been manipulated in any way.

The Privacy and Data Sovereignty Concerns

One of the most pressing concerns surrounding DeepSeek AI is how it handles user data and where that data is stored. Unlike AI providers that operate under strict privacy regulations such as GDPR in Europe or CCPA in California, DeepSeek AI is governed by Chinese data laws, which require companies to share user data with government authorities upon request.

For security leaders evaluating DeepSeek AI, the most important distinction to understand is the difference between using DeepSeek as a self-hosted model vs. using it as a cloud-based service.

Using DeepSeek as a Model (Self-Hosted AI Solution)

A self-hosted model means that the AI solution runs entirely on your organization’s own infrastructure. With this approach, the risks of integrating the AI solution are significantly reduced. In this scenario, no data is sent to external servers, and the organization retains full control over the model’s inputs and outputs.

Many organizations that provide large language models or GenAI solutions (e.g., Perplexity AI) have self-hosted DeepSeek, effectively reducing their risk exposure and data privacy concerns. As a result, despite integrating DeepSeek into their platforms, they can assure their customers that all user data is stored within their own U.S. and European data centers. This approach ensures that DeepSeek can be leveraged without exposing sensitive corporate information to foreign-controlled servers.

However, self-hosting and running DeepSeek R1 internally comes at a high cost, making it impractical for many organizations.
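
For teams weighing this route, the sketch below shows fully local inference with Hugging Face transformers, so no prompt or output ever leaves your infrastructure. The model ID is an assumption (one of the smaller distilled R1 checkpoints DeepSeek has published); verify the checkpoint, its license, and your hardware budget before relying on it.

```python
# Minimal sketch: run a distilled DeepSeek R1 checkpoint entirely on local
# hardware. Model ID is an assumption -- verify it before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize our AI acceptable-use policy in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```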

Using DeepSeek as a Service (Chat and API-Based Access)

For organizations looking for cost-effective solutions, self-hosting the DeepSeek model is impractical. Instead, it’s easier to use DeepSeek’s hosted service via its website (chat.deepseek.com) or API access.

In this case, all interactions are transmitted to servers located in China, where the data is subject to government oversight. This means that user metadata, IP addresses, keystrokes, chat logs, and potentially even sensitive corporate information may be logged and accessible by authorities under Chinese law.

This distinction is crucial. If your organization is using DeepSeek in a way that transmits data to Chinese-controlled infrastructure, the risks extend beyond cybersecurity and into regulatory compliance.

Many government bodies have responded to these data privacy concerns. Italy’s data protection agency, the Garante, has already moved to block DeepSeek, and it’s likely that other governments will follow suit.

Greg Crowley, Chief Information Security Officer at eSentire, warns that security teams must be proactive in ensuring that employees are not unknowingly exposing corporate data by interacting with DeepSeek’s hosted services.

Excerpt of an internal email sent by eSentire’s CISO team banning the use of DeepSeek AI from corporate devices.

In fact, eSentire’s CISO team sent out the above internal communication to ban employees from accessing or using DeepSeek on all corporate devices. Banning the use of DeepSeek is even more critical if your organization has a BYOD (bring your own device) policy.

The Cybersecurity Risks: Jailbreaks, Data Leaks, and Censorship

While DeepSeek AI’s privacy concerns are serious, its cybersecurity vulnerabilities are equally alarming. Unlike GPT-4 and Claude 3.5, which have undergone significant security improvements, DeepSeek remains susceptible to older jailbreaking techniques that were patched in other models years ago.

Security researchers have demonstrated that DeepSeek can be easily manipulated into generating harmful content, including step-by-step malware creation guides, instructions for keylogging and data exfiltration scripts, and explanations on how to purchase stolen credentials from underground marketplaces.

The latter is particularly concerning. Threat research presented in the 2024 Year in Review, 2025 Threat Outlook report from the eSentire Threat Response Unit (TRU) states that the use of stolen valid credentials dominated as an initial access vector into corporate environments in 2024.


REPORT

The Modern Threat Actors’ Playbook: How Initial Access and Ransomware Deployment Trends are Shifting in 2025

Download Now

Furthermore, attackers have successfully bypassed DeepSeek’s security filters using prompt escalation techniques, encoding workarounds, and multi-turn attacks. The fact that DeepSeek’s safeguards are significantly weaker than those of its competitors makes it a more attractive target for cybercriminals looking to exploit AI for malicious purposes.

In addition to its vulnerabilities, DeepSeek has also demonstrated poor operational security practices. Security researchers at Wiz discovered that a publicly accessible ClickHouse database linked to DeepSeek was exposing over a million lines of AI interaction logs, including API keys, backend credentials, and chat history. This level of exposure raises major concerns about whether DeepSeek’s own security measures are adequate to protect enterprise data.

Lastly, censorship and bias remain a major concern for DeepSeek users. Because DeepSeek is trained under the oversight of the Chinese Communist Party (CCP), it’s programmed to censor over 1,000 politically sensitive topics.

While this may seem unrelated to cybersecurity, it raises serious questions about the model’s trustworthiness and whether it could be manipulated to filter or distort critical information.

Should Your Organization Use DeepSeek?

The answer depends on how it is being used. If your organization can self-host DeepSeek entirely within its own infrastructure, the risks are significantly reduced, as long as there are adequate safeguards against unreliable outputs.

However, if DeepSeek is being accessed as a service, the security and privacy risks far outweigh any potential benefits.

If your organization needs AI-driven analytics and automation, you should consider whether alternatives such as GPT-4, Claude, or Meta’s LLaMA can provide similar efficiency without the data sovereignty concerns.

Many of these models are expected to adopt techniques like DeepSeek’s Mixture of Experts approach, meaning that cost savings will not remain a unique advantage for long.

Key Considerations for Evaluating Internal Use

1. Compare DeepSeek’s performance with alternatives before committing to adoption.

DeepSeek AI’s primary advantage is its cost-efficiency, but it is not the only AI model offering high performance, so your team should:

  • Conduct side-by-side benchmarking to compare DeepSeek’s accuracy, reliability, and efficiency with alternatives.
  • Consider long-term security and compliance—many Western AI providers are subject to stricter data protection laws, making them safer for enterprise use.
  • Evaluate whether waiting for a more secure AI alternative is a better strategy rather than adopting a model with known security gaps.

2. Assess the feasibility of self-hosting to maintain full control over data.

If your organization must use DeepSeek, the only secure way to do so is through full self-hosting. This means deploying the model on-premises or within a private cloud environment where data is not transmitted externally. However, there are several challenges to consider:

  • High computational costs: Running DeepSeek R1 internally requires large-scale GPU infrastructure, which is prohibitively expensive for most organizations.
  • Ongoing model maintenance: Unlike using an AI service, self-hosting requires constant updates, security monitoring, and performance tuning.
  • Data integrity concerns: Since DeepSeek’s training data is undisclosed, organizations must evaluate the risk of inaccurate or biased outputs.

3. Ensure no API calls are made to DeepSeek’s external servers.

Even if DeepSeek is only integrated into internal workflows, accidental API calls to its hosted service could lead to unintended data exposure. Your IT Security team should:

  • Conduct a full security audit to ensure that all AI interactions are processed within corporate infrastructure (a minimal codebase-scanning sketch follows this list).
  • Disable outbound API access to DeepSeek’s servers, preventing any unintentional data transmission.
  • Use containerized environments for AI model deployment, ensuring complete isolation from external networks.
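
As part of that audit, a simple automated check can catch hard-coded DeepSeek endpoints before code ships. Below is a minimal sketch; the file types and endpoint patterns are assumptions to adapt to your environment.

```python
# Minimal sketch: scan an internal codebase for hard-coded DeepSeek API
# endpoints. File types and domain patterns are assumptions.
import re
from pathlib import Path

ENDPOINT_PATTERN = re.compile(r"api\.deepseek\.com|chat\.deepseek\.com", re.I)
SOURCE_SUFFIXES = {".py", ".js", ".ts", ".yaml", ".yml", ".json"}

def scan_repo(root: str = ".") -> None:
    for path in Path(root).rglob("*"):
        if path.suffix not in SOURCE_SUFFIXES and not path.name.endswith(".env"):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file or directory; skip
        for lineno, line in enumerate(text.splitlines(), start=1):
            if ENDPOINT_PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

scan_repo()
```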

Practical Recommendations for Security Teams

For CISOs and security practitioners, mitigating the risks associated with DeepSeek AI requires a proactive approach. Whether your organization is considering DeepSeek as an internal AI tool or simply wants to prevent employees from unknowingly exposing corporate data, here are some immediate actions to take to protect your organization from DeepSeek-related threats:

1. Block access to chat.deepseek.com and other DeepSeek-hosted services on corporate networks.

DeepSeek AI’s hosted service poses the most significant security risk because user interactions are processed on servers located in China. This means any queries, metadata, and behavioral patterns could be logged, analyzed, or even shared with third-party vendors under China’s strict data laws. Therefore, security teams should:

  • Implement DNS and firewall rules to block access to DeepSeek’s official domains (a quick verification sketch follows this list).
  • Restrict the use of DeepSeek-related API endpoints that could be embedded in third-party tools.
  • Enforce company-wide security policies prohibiting access to AI services that fail to meet compliance requirements.
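
Once the blocking rules are in place, verify them from inside the network. The sketch below attempts outbound connections to assumed DeepSeek endpoints and reports whether each is actually blocked:

```python
# Minimal sketch: verify that egress to DeepSeek endpoints is blocked from
# inside your network. The domain list is an assumption; extend it as needed.
import socket

SUSPECT_DOMAINS = ["api.deepseek.com", "chat.deepseek.com"]  # assumed endpoints

def egress_blocked(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port fails (i.e., is blocked)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # connection succeeded: egress is NOT blocked
    except OSError:
        return True

for domain in SUSPECT_DOMAINS:
    status = "BLOCKED" if egress_blocked(domain) else "REACHABLE (fix firewall!)"
    print(f"{domain}: {status}")
```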

2. Monitor network traffic for DeepSeek-related activity to detect unauthorized usage.

Even if access to DeepSeek AI’s website is restricted, some employees may attempt to use VPNs, proxies, or external devices to bypass security controls. Additionally, some third-party AI tools and browser extensions may integrate DeepSeek’s API in the background, unknowingly transmitting data to foreign servers. To mitigate this risk, security teams should:

  • Monitor DNS queries and proxy logs for lookups of DeepSeek-related domains (a minimal log-scanning sketch follows this list).
  • Inspect traffic from third-party AI tools and browser extensions for embedded DeepSeek API endpoints.
  • Alert on VPN or proxy usage that could be used to circumvent corporate blocks on DeepSeek’s hosted services.
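
As a starting point for that log review, here is a minimal sketch; the log path, format, and domain pattern are assumptions to adapt to your own DNS or proxy stack.

```python
# Minimal sketch: scan exported DNS/proxy logs for DeepSeek-related lookups.
# Log location and format are assumptions; adapt the parsing to your stack.
import re
from pathlib import Path

DEEPSEEK_PATTERN = re.compile(r"deepseek\.(com|ai)", re.IGNORECASE)

def find_deepseek_hits(log_path: str) -> list[str]:
    hits = []
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        if DEEPSEEK_PATTERN.search(line):
            hits.append(line)
    return hits

for hit in find_deepseek_hits("dns_queries.log"):  # assumed export location
    print("Possible DeepSeek activity:", hit)
```

Any hits should trigger a review of which device or tool originated the request.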

3. Educate employees about the privacy risks of AI services hosted on foreign infrastructure.

The reality is that even well-intentioned employees may not fully understand the security implications of using AI tools like DeepSeek. Given many organizations’ push to do more with less, employees are increasingly using AI for research, automation, and brainstorming, often without considering where the data is going. To address this, organizations should:

  • Provide security awareness training on the risks of using unvetted AI services. Make sure to use security awareness training programs that use real-world scenarios to test user resiliency and drive behavioral change.
  • Communicate clear policies on which AI tools are approved for corporate use by implementing an Acceptable Use Policy that outlines in detail which applications are approved and which are blocked.

Final Thoughts

While it’s tempting to rush Generative AI adoption, your organization’s security must remain a top priority. DeepSeek AI may be technically impressive, but its data privacy risks, security vulnerabilities, and ties to Chinese data laws make it a high-risk choice for enterprises.

Before investing in DeepSeek, make sure you weigh whether the security trade-offs justify the cost, especially when alternative models with stronger security protections are available.

If DeepSeek AI must be used, it should be self-hosted under strict security controls, with no external API interactions. However, for most organizations, the safest option is to block its hosted services entirely and explore alternative AI models with stronger security assurances.

To learn how your organization can take a proactive approach to AI governance and secure its Generative AI usage with eSentire MDR for GenAI, connect with an eSentire Security Specialist now.

Mitangi Parekh, Content Marketing Director

As the Content Marketing Director, Mitangi Parekh leads content and social media strategy at eSentire, overseeing the development of security-focused content across multiple marketing channels. She has nearly a decade of experience in marketing, with 8 years specializing in cybersecurity marketing. Throughout her time at eSentire, Mitangi has created multiple thought leadership content programs that drive customer acquisition, expand share of voice to drive market presence, and demonstrate eSentire's security expertise. Mitangi holds dual degrees in Biology (BScH) and English (BAH) from Queen's University in Kingston, Ontario.
