Artificial intelligence is evolving at a rapid pace, and organizations are increasingly looking for ways to leverage it without compromising security. DeepSeek AI, a Chinese-developed model, has gained attention for its efficiency, low cost, and strong performance, but its rise has also sparked serious discussions around data privacy, cybersecurity vulnerabilities, and geopolitical risk.
The discussion around DeepSeek AI is not just about its performance. Security leaders must look past the benchmarks and conduct a thorough security assessment before adoption.
While some organizations are considering DeepSeek to cut AI-related costs, others (e.g., government regulators) are restricting or outright banning its use due to concerns about data sovereignty, outdated security guardrails, and the model’s ties to Chinese data-sharing laws.
For security teams and CISOs, the challenge is clear: Should DeepSeek AI be used at all? And if so, how can it be done safely?
At its core, DeepSeek AI stands out because it delivers performance comparable to top-tier AI models while being significantly cheaper to operate. For organizations concerned with the high costs associated with AI adoption, DeepSeek AI has presented a cost-effective solution to integrate large language models and other Generative AI tools into their workflows.
DeepSeek uses an architecture called Mixture of Experts (MoE), a machine-learning technique that optimizes computational resources. Instead of activating the entire neural network for every query, MoE allows DeepSeek to dynamically activate only the necessary sub-networks to process a given request, significantly improving efficiency.
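The routing idea behind MoE can be shown with a minimal toy sketch (this is an illustrative example of the technique, not DeepSeek's actual implementation): a gating function scores every expert for a given input, but only the top-k experts actually run, so most of the network stays idle per query.

```python
import math
import random

random.seed(0)

# Toy Mixture-of-Experts: 8 small "expert" linear maps, a gating function
# that scores them, and top-2 routing so only 2 experts run per input.
N_EXPERTS, DIM, TOP_K = 8, 4, 2

# Each expert is a random DIM x DIM weight matrix (stand-in for a sub-network).
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(N_EXPERTS)]
# Gate: one weight vector per expert, scoring its relevance to the input.
gate = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def matvec(w, x):
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in w]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x):
    """Route x through only the TOP_K highest-scoring experts."""
    probs = softmax([sum(g_j * x_j for g_j, x_j in zip(g, x)) for g in gate])
    active = sorted(range(N_EXPERTS), key=lambda i: probs[i])[-TOP_K:]
    out = [0.0] * DIM
    for i in active:  # a dense model would loop over ALL experts here
        y = matvec(experts[i], x)
        out = [o + probs[i] * y_j for o, y_j in zip(out, y)]
    return out, active

output, active = moe_forward([1.0, -0.5, 0.3, 0.8])
print(f"experts activated: {sorted(active)} of {N_EXPERTS}")
```

Because only 2 of the 8 experts execute per input, compute per query scales with TOP_K rather than with total model size, which is the source of the efficiency gains described above.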
This efficiency translates into real-world cost savings. Some estimates suggest that DeepSeek’s inference costs are up to 96% lower than competitors like ChatGPT. For businesses that rely heavily on AI-driven analytics, automation, or customer interactions, this means a massive reduction in operational expenses.
Unlike other Generative AI models that scrape publicly available information, DeepSeek uses synthetic training data; it generates additional training material using AI itself. This technique expands the amount of data available for training, which in theory may improve the model’s ability to generate high-quality responses.
However, this approach introduces a higher risk of hallucinations. Since AI-generated content isn’t always factually accurate, there’s an increased likelihood that DeepSeek’s responses could include misleading or false information, especially in critical areas like cybersecurity, law, and medicine.
Moreover, although DeepSeek has positioned itself as an open-source model, its training data remains undisclosed. This lack of transparency means that security teams cannot fully assess what data the model was trained on, whether it contains inherent biases, or if it has been manipulated in any way.
One of the most pressing concerns surrounding DeepSeek AI is how it handles user data and where that data is stored. Unlike AI providers that operate under strict privacy regulations such as GDPR in Europe or CCPA in California, DeepSeek AI is governed by Chinese data laws, which require companies to share user data with government authorities upon request.
For security leaders evaluating DeepSeek AI, the most important distinction to understand is the difference between using DeepSeek as a self-hosted model vs. using it as a cloud-based service.
A self-hosted model runs entirely on your organization’s own infrastructure. This significantly reduces the risks of integrating the AI solution: no data is sent to external servers, and the organization retains full control over the model’s inputs and outputs.
Many organizations that provide large language models or GenAI solutions (e.g., Perplexity AI) have self-hosted DeepSeek, effectively reducing their risk exposure and data privacy concerns. As a result, despite integrating DeepSeek into their platform, they can assure their customers that all user data is stored within their own U.S. and European data centers. This approach ensures that DeepSeek can be leveraged without exposing sensitive corporate information to foreign-controlled servers.
However, self-hosting DeepSeek, particularly the full DeepSeek R1 model, comes at a high cost, making it impractical for many organizations. For those seeking a cost-effective option, it’s far easier to use the hosted service via its website (chat.deepseek.ai) or API access.
In this case, all interactions are transmitted to servers located in China, where the data is subject to government oversight. This means that user metadata, IP addresses, keystrokes, chat logs, and potentially even sensitive corporate information may be logged and accessible by authorities under Chinese law.
This distinction is crucial. If your organization is using DeepSeek in a way that transmits data to Chinese-controlled infrastructure, the risks extend beyond cybersecurity and into regulatory compliance.
Many government bodies have responded to these data privacy concerns; Italy’s data protection agency, the Garante, has already moved to block DeepSeek over privacy concerns, and it’s likely that other governments will follow suit.
Greg Crowley, Chief Information Security Officer at eSentire, warns that security teams must be proactive in ensuring that employees are not unknowingly exposing corporate data by interacting with DeepSeek’s hosted services.
In fact, eSentire’s CISO team sent an internal communication banning employees from accessing or using DeepSeek on all corporate devices. Banning the use of DeepSeek is even more critical if your organization has a BYOD policy.
While DeepSeek AI’s privacy concerns are serious, its cybersecurity vulnerabilities are equally alarming. Unlike GPT-4 and Claude 3.5, which have undergone significant security improvements, DeepSeek remains susceptible to older jailbreaking techniques that were patched in other models years ago.
Security researchers have demonstrated that DeepSeek can be easily manipulated into generating harmful content, including step-by-step malware creation guides, instructions for keylogging and data exfiltration scripts, and explanations on how to purchase stolen credentials from underground marketplaces.
The latter is particularly concerning. Threat research presented in the 2024 Year in Review, 2025 Threat Outlook report from the eSentire Threat Response Unit (TRU) states that the use of stolen valid credentials dominated as an initial access vector into corporate environments in 2024.
Furthermore, attackers have also successfully bypassed DeepSeek’s security filters using prompt escalation techniques, encoding workarounds, and multi-turn attacks. The fact that DeepSeek’s safeguards are significantly weaker than those of its competitors makes it a more attractive target for cybercriminals looking to exploit AI for malicious purposes.
In addition to its vulnerabilities, DeepSeek has also demonstrated poor operational security practices. Security researchers at Wiz discovered that a publicly accessible ClickHouse database linked to DeepSeek was exposing over a million lines of AI interaction logs, including API keys, backend credentials, and chat history. This level of exposure raises major concerns about whether DeepSeek’s own security measures are adequate to protect enterprise data.
Lastly, censorship and bias remain a major concern for DeepSeek users. Because DeepSeek is trained under the oversight of the Chinese Communist Party (CCP), it’s programmed to censor over 1,000 politically sensitive topics.
While this may seem unrelated to cybersecurity, it raises serious questions about the model’s trustworthiness and whether it could be manipulated to filter or distort critical information.
The answer depends on how it is being used. If your organization can self-host DeepSeek entirely within its own infrastructure, the risks are significantly reduced, as long as there are adequate safeguards against unreliable outputs.
However, if DeepSeek is being accessed as a service, the security and privacy risks far outweigh any potential benefits.
If your organization needs AI-driven analytics and automation, you should consider whether alternatives such as GPT-4, Claude, or Meta’s LLaMA can provide similar efficiency without the data sovereignty concerns.
Many of these models are expected to adopt techniques like DeepSeek’s Mixture of Experts approach, meaning that cost savings will not remain a unique advantage for long.
1. Compare DeepSeek’s performance with alternatives before committing to adoption.
DeepSeek AI’s primary advantage is its cost-efficiency, but it is not the only AI model offering high performance, so your team should benchmark it against alternatives such as GPT-4, Claude, and LLaMA before committing.
2. Assess the feasibility of self-hosting to maintain full control over data.
If your organization must use DeepSeek, the only secure way to do so is through full self-hosting. This means deploying the model on-premises or within a private cloud environment where data is not transmitted externally. However, there are several challenges to consider, including the hardware cost and operational overhead of running a model of this size.
3. Ensure no API calls are made to DeepSeek’s external servers.
Even if DeepSeek is only integrated into internal workflows, accidental API calls to its hosted service could lead to unintended data exposure. Your IT Security team should audit code, configuration, and egress traffic for such calls.
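One low-effort control is scanning repositories and configuration for hard-coded hosted-service endpoints before they reach production. A minimal sketch (the endpoint list and function name are assumptions; extend them to match your organization's policy):

```python
import re
from pathlib import Path

# Hypothetical blocklist of DeepSeek hosted-service endpoints --
# extend to match your organization's policy.
BLOCKED_ENDPOINTS = ("api.deepseek.com", "chat.deepseek.ai")
PATTERN = re.compile("|".join(re.escape(d) for d in BLOCKED_ENDPOINTS))

def audit_tree(root: str) -> list:
    """Return (file, line_no, snippet) for each blocked endpoint found
    in source and config files under root."""
    hits = []
    exts = {".py", ".js", ".ts", ".env", ".json", ".yaml", ".yml", ".cfg", ".ini"}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if PATTERN.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Run as a pre-commit hook or CI step so that any configuration pointing an application at DeepSeek's hosted API is flagged before deployment.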
For CISOs and security practitioners, mitigating the risks associated with DeepSeek AI requires a proactive approach. Whether your organization is considering DeepSeek as an internal AI tool or simply wants to prevent employees from unknowingly exposing corporate data, here are some immediate actions to take to protect your organization from DeepSeek-related threats:
DeepSeek AI’s hosted service poses the most significant security risk because user interactions are processed on servers located in China. This means any queries, metadata, and behavioral patterns could be logged, analyzed, or even shared with third-party vendors under China’s strict data laws. Therefore, security teams should block access to DeepSeek’s hosted services at the network and DNS level.
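Whether such blocks are actually effective can be verified from inside the network. A minimal sketch (the domain list is an assumption; adapt it to your firewall or DNS-sinkhole setup):

```python
import socket

# Hypothetical list of hosted-service domains that should be
# unreachable from inside the corporate network.
BLOCKED_DOMAINS = ["api.deepseek.com", "chat.deepseek.ai"]

def is_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """True if host resolves and accepts a TCP connection on port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused, or timed-out connection
        return False

def check_blocks(domains):
    # From inside the network, every domain here should come back False.
    return {d: is_reachable(d) for d in domains}
```

Scheduling this check (e.g., as a periodic compliance job) catches cases where a firewall change or new egress path silently re-opens access.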
Even if access to DeepSeek AI’s website is restricted, some employees may attempt to use VPNs, proxies, or external devices to bypass security controls. Additionally, some third-party AI tools and browser extensions may be integrating DeepSeek’s API in the background, unknowingly transmitting data to foreign servers. To mitigate this risk, security teams should monitor for bypass attempts and audit third-party tools and extensions for embedded DeepSeek integrations.
The reality is that even well-intentioned employees may not fully understand the security implications of using AI tools like DeepSeek. Given many organizations’ push to do more with less, employees are increasingly using AI for research, automation, and brainstorming, often without considering where the data is going. To address this, organizations should pair technical controls with security awareness training and a clear policy on approved AI tools.
While it’s tempting to rush Generative AI adoption, your organization’s security must remain a top priority. DeepSeek AI may be technically impressive, but its data privacy risks, security vulnerabilities, and ties to Chinese data laws make it a high-risk choice for enterprises.
Before investing in DeepSeek, make sure you weigh whether the security trade-offs justify the cost, especially when alternative models with stronger security protections are available.
If DeepSeek AI must be used, it should be self-hosted under strict security controls, with no external API interactions. However, for most organizations, the safest option is to block its hosted services entirely and explore alternative AI models with stronger security assurances.
To learn how your organization can take a proactive approach to AI governance and secure its Generative AI usage with eSentire MDR for GenAI, connect with an eSentire Security Specialist now.
As the Content Marketing Director, Mitangi Parekh leads content and social media strategy at eSentire, overseeing the development of security-focused content across multiple marketing channels. She has nearly a decade of experience in marketing, with 8 years specializing in cybersecurity marketing. Throughout her time at eSentire, Mitangi has created multiple thought leadership content programs that drive customer acquisition, expand share of voice to drive market presence, and demonstrate eSentire's security expertise. Mitangi holds dual degrees in Biology (BScH) and English (BAH) from Queen's University in Kingston, Ontario.