Combine cutting-edge XDR technology, multi-signal threat intelligence and 24/7 Elite Threat Hunters to help you build a world-class security operation.
Our team delivers the fastest response time in the industry. Threat suppression within just 4 hours of being engaged.
Cyber risk and advisory programs that identify security gaps and build security strategies to address them.
24/7 SOC-as-a-Service with unlimited threat hunting and incident handling.
XDR with machine learning that eliminates noise, enables real-time detection and response, and automatically blocks threats.
Seamless integration and threat investigation across your existing tech stack.
Proactive threat intelligence, original threat research and a world-class team of seasoned industry veterans.
Extend your team capabilities and prevent business disruption with expertise from eSentire.
We balance automated blocks with rapid human-led investigations to manage threats.
Guard endpoints by isolating and remediating threats to prevent lateral spread.
Defend brute force attacks, active intrusions and unauthorized scans.
Investigation and threat detection across multi-cloud or hybrid environments.
Remediate misconfigurations, vulnerabilities and policy violations.
Investigate and respond to compromised identities and insider threats.
Stop ransomware before it spreads.
Meet regulatory compliance mandates.
Detect and respond to zero-day exploits.
End misconfigurations and policy violations.
Defend third-party and supply chain risk.
Prevent disruption by outsourcing MDR.
Adopt a risk-based security approach.
Meet insurability requirements with MDR.
Protect your most sensitive data.
Build a proven security program.
Operationalize timely, accurate, and actionable cyber threat intelligence.
THE THREAT On November 18th, 2024, Palo Alto disclosed a critical actively exploited authentication bypass zero-day vulnerability impacting Palo Alto Networks PAN-OS. The…
Nov 13, 2024THE THREAT On November 12th, Citrix disclosed two separate vulnerabilities identified in Citrix Session Recording, which impacted multiple versions of Citrix Virtual Apps and…
eSentire is The Authority in Managed Detection and Response Services, protecting the critical data and applications of 2000+ organizations in 80+ countries from known and unknown cyber threats. Founded in 2001, the company’s mission is to hunt, investigate and stop cyber threats before they become business disrupting events.
We provide sophisticated cybersecurity solutions for Managed Security Service Providers (MSSPs), Managed Service Providers (MSPs), and Value-Added Resellers (VARs). Find out why you should partner with eSentire, the Authority in Managed Detection and Response, today.
Multi-Signal MDR with 300+ technology integrations to support your existing investments.
24/7 SOC-as-a-Service with unlimited threat hunting and incident handling.
Three MDR package tiers are available based on per-user pricing and level of risk tolerance.
The latest security advisories, blogs, reports, industry publications and webinars published by TRU.
Compare eSentire to other Managed Detection and Response vendors to see how we stack up against the competition.
See why 2000+ organizations globally have chosen eSentire for their MDR Solution.
On the Underground Hacker Markets, a criminal can purchase any number of illegal goods. They can get everything from online banking credentials, credit card numbers, guns, Fullz (full identity packets used for identity theft), drugs, PayPal credentials, passports, and now they can even purchase your Generative AI (GenAI) account credentials. As seen in (Figure 1), these include credentials for ChatGPT, Quillbot, Notion, Huggingface, Replit and the list goes on, as discovered by eSentire’s cybersecurity research team, the Threat Response Unit (TRU).
Security research organization, Sysdig, describes LLM Jacking– whereby threat actors take control of a large pool of LLMs. Sysdig demonstrated evidence of a campaign leveraging a reverse proxy – a useful component for covertly reselling LLM access. They lacked any evidence of how LLMs are used by cybercriminals in the wild.
In the present report, eSentire’s Threat Response Unit leverages insights from the cybercriminal underground to showcase how threat actors acquire, resell, and abuse access to LLMs.
It should come as no surprise that cybercriminals are selling GenAI credentials because threat actors always discover ways to monetize a piece of data. TRU found that threat actors are selling the usernames and passwords for approximately 400 individual GenAI accounts a day (Figure 1).
Cybercriminals are advertising the credentials on popular Russian Underground Markets, which specialize in everything from malware to infostealers to crypters. Many of the GenAI credentials are stolen from corporate end-users' computers when they get infected with an infostealer.
An infostealer is a piece of malware which retrieves everything entered by an end-user into their computer’s Internet Browser. This can include a user’s log-in credentials for their company’s IT network, their online bank account, their Amazon account, PayPal account, and healthcare provider portal.
If the end-user is a subscriber to a GenAI platform, then their credentials will be captured as well. The culmination of all the information retrieved, via an infostealer, is called a Stealer Log. Currently on the underground markets, it costs $10 for each stealer log.
One of the underground services, which was selling stolen GenAI account credentials earlier this year, was called LLM Paradise. The threat actor running this market had a knack for marketing jargon, naming their store LLM Paradise and touting stolen GPT-4 and Claude API keys with ads reading: “The Only Place to get GPT-4 APIKEYS for unbeatable prices.”
The threat actor advertised GPT-4/Claude API keys starting at only $15 each, while typical prices for various OpenAI models run between $5 and $30 per million tokens utilized.
For whatever reason, it appears the proprietor of LLM Paradise couldn’t keep the doors open and the Underground Market was shuttered, although the threat actor even went as far as advertising the stolen GPT-4 API keys on TikTok, and the ad is still live (Figure 2).
eSentire discovered that cybercriminals are finding many ways to monetize stolen GenAI account credentials. Threat actors are using popular AI platforms to create convincing phishing campaigns, develop sophisticated malware, and produce chatbots for their underground forums.
Additionally, by accessing an organization’s corporate GenAI accounts, cybercriminals have the potential to get their hands on valuable and sensitive corporate data, such as customer information, financial data, valuable proprietary intellectual property and employee Personal Identifiable Information (PII).
Companies using GenAI platforms are not the only organizations at risk. Hackers are also attacking GenAI platform providers, which can pave the way for access to valuable data belonging to their corporate subscribers.
As of April 2024, there are three basic ways the Cybersecurity and Infrastructure Security Agency (CISA) defines GenAI threats:
TRU conducted an integrated analysis of Open-Source Intelligence (OSINT) and cyber threat intelligence, which demonstrates the presence of all three GenAI threats in the past year. In addition, TRU researchers observed news cycles and cybercriminal discussions exploring these threats. The primary GenAI threats observed by TRU are:
One stark example of a GenAI company suffering a breach is OpenAI. In July 2024, the New York Times broke a story confirming that OpenAI, the developers of ChatGPT, did suffer a breach in 2023. The Times reported that the attacker “did not access the systems housing and building the AI but did steal discussions from an employee forum.”
Additionally, the Times revealed that OpenAI “did not publicly disclose the incident nor inform the FBI because, it claims, no information about customers nor partners was stolen, and the breach was not considered a threat to national security. The firm decided that the attack was down to a single person with no known association to any foreign government.” TRU has also observed threat actors discussing new research showing how one can gain access to different GenAI platforms (Figure 6).
TRU has also observed threat actors discussing new research showing how one can gain access to different GenAI platforms (Figure 6).
*Traditionally, supply chain attacks often pertain to the software development layer – for example, poisoned updates as in the SolarWinds attack. Data supply chains consist of data going through several steps of storage, processing, analysis and dissemination – often this workload is distributed across several entities.
For example, healthcare documents are passed between several providers, government entities and access systems for the patients. If any point of this chain becomes compromised, the threat actors gain access to all data flowing through that point, giving them an opportunity to either exfiltrate – or poison – that data.
The most likely scenario is exfiltration of data for monetary incentives; poisoning is a more likely outcome of attackers motivated by political influence. One of the more well-known examples of a data supply chain attack is the MoveIT breach.
Of all the GenAI credentials being offered on the Underground, OpenAI usernames and passwords are the most prevalent. OpenAI has become one of the leading players in AI. ChatGPT has gained more than 100 million users since its public launch in November 2022.
TRU observes an average of over 200 OpenAI credentials posted for sale per day (Figure 7). These usernames and passwords are typically stolen from computer users’ internet browsers by cybercriminals using infostealers.
In the case of OpenAI, access to a subscriber’s account grants access to:
If a company is using GenAI solutions such as OpenAI in projects involving sensitive, valuable corporate data then the company should be implementing information security measures, starting with a general awareness of usage to help shape the development of policies around GenAI use. They must have a clear and thorough understanding of the potential risks and the protections implemented by the organization to defend against those risks.
The uncertain attack surface of LLMs lies mostly in the shadows of on-premises usage, and in cases where companies use LLMs as part of a data supply chain. As quickly as the corporate world has adopted GenAI, so have cybercriminals. Cybercriminals take interest in researching how to exploit artificial intelligence, and they share the articles on underground forums.
As LLMs become integrated with data supply chains, they create unexplored and fractal attack surfaces. And cybercriminals that have experience with hybrid attacks on LLMs will combine traditional attacks with prompt injection to explore these attack surfaces for exploitation.
If you are not currently engaged with a Managed Detection and Response (MDR) provider, we highly recommend you partner with us for security services to disrupt threats before they impact your business. Want to learn more? Connect with an eSentire Security Specialist.
Keegan Keplinger is a Senior Security Researcher at eSentire with exprience in detection engineering, incident reconstruction, and educational outreach. Keegan uses data-driven methods to help decision-makers understand security challenges in the wild.