Enterprise organizations and cyber insurance providers can't ignore generative AI technologies like ChatGPT. Having attracted 100 million monthly active users within two months of its launch, ChatGPT has gained widespread adoption across many industries; it uses a probabilistic language model to produce human-sounding answers.
The transformative potential of this technology is exciting – ChatGPT and other Large Language Models (LLMs) can augment your existing in-house resources, accelerate processes, and provide access to easy-to-understand knowledge. Specifically for the insurance industry and for many enterprise organizations, this new technology comes with its own set of opportunities and innovative applications that can drive efficiencies.
However, generative AI tools have become a new threat vector for the insurance industry and security leaders across the board. Threat actors are already exploiting ChatGPT and similar LLMs to write dynamic malware code that can bypass security tools and create content for phishing attacks.
From a policyholder’s perspective, it’s also important for organizations to know that ChatGPT can compromise your cyber insurance coverage if your use of the public version of the tool leads to a data breach.
Since employees are already using generative AI tools, it's essential that you accurately understand the cyber risks associated with this technology while maximizing the value of the latest large language models.
This blog will discuss how cyber insurers and policyholders need to think about mitigating risks related to ChatGPT.
ChatGPT is evolving at lightning-fast speed, and its cyber risks are growing with it. With new features like the mobile app and the ability to browse the Internet added to the product, new and unknown risks arise for its users. ChatGPT's clones pose additional risks: the techniques behind large language models are widely documented, and as companies race to create their own versions of ChatGPT, some alternatives may have fewer restrictions that threat actors can exploit.
Here are the most common ways threat actors exploit generative AI:
The predictive algorithm ChatGPT uses to construct sentences presents new possibilities for advanced social engineering attacks. Threat actors can easily draft sophisticated phishing emails that sound like humans wrote them. This functionality removes one of the telltale signs of phishing emails — spelling errors.
For sophisticated threat actors, ChatGPT opens new possibilities for conducting large-scale targeted phishing attacks. Threat actors with access to compromised email accounts can train models on their victims' writing styles and patterns to closely imitate them. With scripting and automation, ChatGPT can create customized communications and optimize them in real time.
ChatGPT has proven to be a potent tool for developers. The model's code-generation capabilities allow programmers of all levels to automate tasks and focus on more critical aspects of their work. But the line between software and malware is thin. Although OpenAI introduced restrictions to prevent ChatGPT from being used for malicious purposes, threat actors are finding ways to get around them.
Threat actors who are able to bypass the safeguards can exploit ChatGPT to write or improve malware code. By facilitating malware creation, ChatGPT enables threat actors without technical skills to conduct cyberattacks. In fact, ChatGPT has already proven capable of writing dynamic malware that can change to bypass security tools such as endpoint detection and response (EDR) methods. Although OpenAI continuously cracks down on the illicit use of the tool, you need to be aware of the heightened risks of malware attacks.
Data privacy is one of the biggest concerns with ChatGPT. It is not clear how OpenAI manages, uses, stores, or shares the data of users of the free version of ChatGPT. Since ChatGPT is trained on large data sets, there are no guarantees that some of its answers won't contain your Personally Identifiable Information (PII).
Another data privacy risk involves the data provided to ChatGPT through user prompts. The public-facing version of the tool is subject to a click-through agreement. By clicking "I agree" and using the platform, users provide legal consent to share their data with OpenAI. These legally enforceable agreements are notorious for being one-sided in favor of the company collecting the data. So if a user inputs sensitive client or corporate data they didn't intend to share, the remedies available are few and far between.
“In cyber [insurance] what we’re starting to see are the limitations around wrongful information collection. We’re seeing a lot of wrongful collections around BIPA [Biometric Information Privacy Act] and pixel tracking software, especially in the healthcare space. I would say that with the introduction of ChatGPT and the heightened awareness in the plaintiffs over wrongful collection, you’re probably going to see it collide in a lot of ways.”
- Peter Hedberg, VP of Cyber Underwriting at Corvus Insurance
For insurance providers, LLMs can transform your underwriting practices by automating the research process, summarizing client information, and providing more accurate risk assessments. ChatGPT can also be used to create outlines of security policies and communications. While its work still requires human expertise and revision, it can help streamline some of the daily tasks for underwriters.
For enterprise organizations, the applications of ChatGPT across departments include handling customer service inquiries, streamlining external communications, writing emails and copy, creating presentations and software coding.
Generative AI has shown significant promise for security teams too. It can help defend insurers and policyholders against cyberattacks by addressing one of the most pressing issues in cybersecurity – a lack of resources and expertise.
According to research from Cybersecurity Ventures, there will be 3.5 million unfilled cybersecurity jobs in 2023. As companies fight to recruit new talent, existing security analysts are tackling increasing workloads that are difficult to manage. In recent years, some security practitioners have reported experiencing a 3x increase in the number of alerts per day.
"Cybersecurity has become the number one business risk. Every company needs to make sure that they're doing the due diligence and due care, protecting their company's assets and customers' data. To get cyber insurance, you have to show that you have all these things in place. But how do you do that when the demand for cybersecurity professionals is so strong? You have to look at ways to automate, streamline, and be more efficient. And I think this is where ChatGPT is really going to help out."
- Greg Crowley, CISO at eSentire
Here are examples of how ChatGPT can support enterprise cyber defense efforts and create efficiencies, helping you do more with less:
ChatGPT can be an important tool for gathering threat intelligence and predicting where the next threat is coming from. Generative AI can enhance your threat-hunting capabilities by finding known vulnerabilities and exploits you may be susceptible to. By scanning the dark web for new threat vectors, ChatGPT can help you anticipate potential cyber threats and get ahead of them.
AI algorithms can be trained to detect malicious activity and enable faster response. By helping you analyze large volumes of data and identify potential threats, ChatGPT can streamline time-consuming tasks such as log analysis. AI-powered tools like Microsoft Copilot can also help boost the productivity of developers and security professionals. These tools use advanced natural language processing capabilities to identify potential vulnerabilities and provide code suggestions to prevent them from being exploited.
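As a rough illustration of the workflow above, the sketch below shows how a script might pre-filter a large authentication log down to likely-suspicious lines before handing a compact excerpt to an LLM for summarization. The log format, keyword list, and function names are assumptions for this sketch, not a description of any specific vendor's product.

```python
import re

# Keywords that often mark suspicious auth-log events (illustrative, not exhaustive).
SUSPICIOUS = re.compile(
    r"failed password|invalid user|authentication failure", re.IGNORECASE
)

def prefilter_log(lines, max_lines=50):
    """Keep only likely-suspicious lines so the LLM prompt stays small and focused."""
    hits = [ln.strip() for ln in lines if SUSPICIOUS.search(ln)]
    return hits[:max_lines]

def build_summary_prompt(suspicious_lines):
    """Wrap the filtered excerpt in an analysis prompt for a generative AI model."""
    excerpt = "\n".join(suspicious_lines)
    return (
        "You are assisting a SOC analyst. Summarize the attack patterns, "
        "source indicators, and recommended next steps in these log lines:\n"
        + excerpt
    )

log = [
    "Jan 10 03:12:01 sshd[811]: Failed password for invalid user admin from 203.0.113.7",
    "Jan 10 03:12:09 sshd[811]: Failed password for root from 203.0.113.7",
    "Jan 10 03:15:44 CRON[902]: session opened for user backup",
]
suspicious = prefilter_log(log)
print(len(suspicious))  # 2 (the benign CRON line is filtered out)
```

The design point is that the model never sees the raw log: only a small, pre-screened excerpt is sent, which reduces both token cost and the amount of data leaving your environment.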
As generative AI helps threat actors write malware, it can also help security professionals address their skills gaps. Thanks to its natural language processing capabilities, ChatGPT enables you to search for threats in your environment even without the knowledge of correct syntax. Security analysts can synthesize data from multiple sources into clear, actionable insights using generative AI.
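One way to picture "searching without knowing the correct syntax" is a small helper that turns a plain-English hunting question into a prompt asking the model to produce a detection query. The function name and prompt wording below are hypothetical; the query language is just a parameter.

```python
def threat_hunt_prompt(question, query_language="KQL"):
    """Turn a plain-English hunting question into a prompt that asks an LLM
    for a detection query, so analysts don't need to know the query syntax."""
    return (
        f"Act as a senior threat hunter. Write a {query_language} query that answers:\n"
        f"{question}\n"
        "Return only the query, followed by a one-sentence explanation."
    )

prompt = threat_hunt_prompt(
    "Which hosts made DNS requests to newly registered domains this week?"
)
```

An analyst would send this prompt to the model and review the returned query before running it; the LLM drafts the syntax, but a human still validates it.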
ChatGPT may revolutionize work and empower both policyholders and cyber insurance companies to be more efficient. However, you're also likely to see more cyber insurance claims associated with cyber risks stemming from malicious ChatGPT use.
AI considerations aren’t new for cyber insurance. But as the technology continues to evolve, insurance policies will need to adapt to evolving AI regulation and the liability issues associated with LLMs. Given the expanding cyber risks of generative AI, it’s important to set comprehensive controls that allow for the safe use of LLMs without compromising your cyber insurance requirements or the privacy of your company's and clients' sensitive data.
First, paid enterprise versions of ChatGPT are an essential stepping stone to addressing data privacy concerns. These business licenses allow you more control over how the data is used, stored, and deleted. Users with Pro or Business licenses also get access to OpenAI's API, creating greater transparency.
To further ensure the safety of your sensitive data, consider using secure gateways to access these paid versions of ChatGPT. A gateway would allow users to access the LLM by using verifiable tokens. This additional step can help mitigate the data privacy risks by allowing you to control what data leaves your company's servers and ensure that any sensitive information is encrypted.
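A minimal sketch of the redaction half of such a gateway is shown below: sensitive tokens are replaced with placeholders before the prompt ever leaves your network. The regex patterns, function names, and the `send` callable (standing in for a token-authenticated API client) are all assumptions for illustration; a production gateway would use a vetted DLP library rather than a handful of regexes.

```python
import re

# Illustrative redaction patterns; real deployments need far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt):
    """Replace sensitive tokens with placeholders before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

def forward_to_llm(prompt, send):
    """Gateway entry point: redact first, then hand off to the LLM client.

    `send` stands in for an authenticated API call; this sketch never touches the network.
    """
    return send(redact(prompt))

cleaned = redact("User jane.doe@example.com logged in from 198.51.100.23")
print(cleaned)
# User [EMAIL REDACTED] logged in from [IPV4 REDACTED]
```

Because redaction happens inside the gateway, you retain an audit point where you can log what was scrubbed and enforce that only placeholder-bearing prompts reach the external LLM.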
Leveraging your cybersecurity partners can help you safely integrate and operationalize the technology. Security professionals will also be an invaluable resource for guiding your team to incorporate the security capabilities of LLMs into your workflows. While ChatGPT may be effective at augmenting your in-house resources, the knowledge of seasoned cybersecurity professionals is required to rapidly identify the attackers, contain threats, and prevent operational disruption.
"There is a lot of fear out there around ChatGPT being able to create malware or certain attack types that are evasive to traditional antiviruses or endpoint detection methods. But one thing I have yet to see it do is to be able to come up with new attack tactics, techniques, and processes. It's not going to be able to come up with a brand new way that professionals aren't able to detect."
- Greg Crowley, CISO at eSentire
It's important to note that ChatGPT's malware-writing capabilities are still limited to replicating existing code and known attack techniques. With a strong cybersecurity posture focused on cyber resilience, you will be able to anticipate and withstand cyberattacks created using ChatGPT. Insurers should also emphasize to their policyholders the importance of staying up to date with the latest developments in AI. As the technology progresses, insurers and policyholders alike will need to regularly assess their security measures and policies to ensure they remain effective.
If your in-house team is not able to provide 24/7 threat detection, investigation, and response capabilities, consider outsourcing your security operations to a trusted vendor. A multi-signal Managed Detection and Response (MDR) provider will act as an expansion of your team to conduct 24/7 threat detection and containment and provide a complete response.
To learn more about how eSentire MDR can help you build a more resilient security operation, get in touch with an eSentire cybersecurity specialist.
eSentire, Inc., the Authority in Managed Detection and Response (MDR), protects the critical data and applications of 2000+ organizations in 80+ countries, across 35 industries from known and unknown cyber threats by providing Exposure Management, Managed Detection and Response and Incident Response services designed to build an organization’s cyber resilience & prevent business disruption. Founded in 2001, eSentire protects the world’s most targeted organizations with 65% of its global base recognized as critical infrastructure, vital to economic health and stability. By combining open XDR platform technology, 24/7 threat hunting, and proven security operations leadership, eSentire's award-winning MDR services and team of experts help organizations anticipate, withstand and recover from cyberattacks. For more information, visit: www.esentire.com and follow @eSentire.