Blog

Hackers Are Stealing GenAI Credentials, So What Sensitive Company Data Are They Getting their Hands On?

BY Keegan Keplinger

July 30, 2024 | 9 MINS READ

Cyber Risk

Sensitive Data Protection

Threat Intelligence

Threat Response Unit

AI/ML

Want to learn more on how to achieve Cyber Resilience?

TALK TO AN EXPERT

On the Underground Hacker Markets, a criminal can purchase any number of illegal goods. They can get everything from online banking credentials, credit card numbers, guns, Fullz (full identity packets used for identity theft), drugs, PayPal credentials, passports, and now they can even purchase your Generative AI (GenAI) account credentials. As seen in (Figure 1), these include credentials for ChatGPT, Quillbot, Notion, Huggingface, Replit and the list goes on, as discovered by eSentire’s cybersecurity research team, the Threat Response Unit (TRU).

Figure 1: An ad from a Russian Underground Market advertising credentials for various GenAI platforms over a three-day period

Security research organization, Sysdig, describes LLM Jacking– whereby threat actors take control of a large pool of LLMs. Sysdig demonstrated evidence of a campaign leveraging a reverse proxy – a useful component for covertly reselling LLM access. They lacked any evidence of how LLMs are used by cybercriminals in the wild.

In the present report, eSentire’s Threat Response Unit leverages insights from the cybercriminal underground to showcase how threat actors acquire, resell, and abuse access to LLMs.

GenAI Credentials on the Hacker Underground—Plenty to Go Around

It should come as no surprise that cybercriminals are selling GenAI credentials because threat actors always discover ways to monetize a piece of data. TRU found that threat actors are selling the usernames and passwords for approximately 400 individual GenAI accounts a day (Figure 1).

Cybercriminals are advertising the credentials on popular Russian Underground Markets, which specialize in everything from malware to infostealers to crypters. Many of the GenAI credentials are stolen from corporate end-users' computers when they get infected with an infostealer.

An infostealer is a piece of malware which retrieves everything entered by an end-user into their computer’s Internet Browser. This can include a user’s log-in credentials for their company’s IT network, their online bank account, their Amazon account, PayPal account, and healthcare provider portal.

If the end-user is a subscriber to a GenAI platform, then their credentials will be captured as well. The culmination of all the information retrieved, via an infostealer, is called a Stealer Log. Currently on the underground markets, it costs $10 for each stealer log.

Welcome to LLM Paradise—an Underground Market for Stolen GPT-4 and Claude API Keys

One of the underground services, which was selling stolen GenAI account credentials earlier this year, was called LLM Paradise. The threat actor running this market had a knack for marketing jargon, naming their store LLM Paradise and touting stolen GPT-4 and Claude API keys with ads reading: “The Only Place to get GPT-4 APIKEYS for unbeatable prices.”

The threat actor advertised GPT-4/Claude API keys starting at only $15 each, while typical prices for various OpenAI models run between $5 and $30 per million tokens utilized.

For whatever reason, it appears the proprietor of LLM Paradise couldn’t keep the doors open and the Underground Market was shuttered, although the threat actor even went as far as advertising the stolen GPT-4 API keys on TikTok, and the ad is still live (Figure 2).

Figure 2: API keys for a GPT-4 account are advertised for only $15 a piece on an underground market called LLM Paradise. The threat actor, running the illicit market, promotes the stolen API keys on both Hack Forums and TikTok. 

The Untold Value of Access to Stolen GenAI Credentials and GenAI Data

eSentire discovered that cybercriminals are finding many ways to monetize stolen GenAI account credentials. Threat actors are using popular AI platforms to create convincing phishing campaigns, develop sophisticated malware, and produce chatbots for their underground forums.

Additionally, by accessing an organization’s corporate GenAI accounts, cybercriminals have the potential to get their hands on valuable and sensitive corporate data, such as customer information, financial data, valuable proprietary intellectual property and employee Personal Identifiable Information (PII).

Companies using GenAI platforms are not the only organizations at risk. Hackers are also attacking GenAI platform providers, which can pave the way for access to valuable data belonging to their corporate subscribers.

As of April 2024, there are three basic ways the Cybersecurity and Infrastructure Security Agency (CISA) defines GenAI threats:

TRU conducted an integrated analysis of Open-Source Intelligence (OSINT) and cyber threat intelligence, which demonstrates the presence of all three GenAI threats in the past year. In addition, TRU researchers observed news cycles and cybercriminal discussions exploring these threats. The primary GenAI threats observed by TRU are:

One stark example of a GenAI company suffering a breach is OpenAI. In July 2024, the New York Times broke a story confirming that OpenAI, the developers of ChatGPT, did suffer a breach in 2023. The Times reported that the attacker “did not access the systems housing and building the AI but did steal discussions from an employee forum.”

Additionally, the Times revealed that OpenAI “did not publicly disclose the incident nor inform the FBI because, it claims, no information about customers nor partners was stolen, and the breach was not considered a threat to national security. The firm decided that the attack was down to a single person with no known association to any foreign government.” TRU has also observed threat actors discussing new research showing how one can gain access to different GenAI platforms (Figure 6).

TRU has also observed threat actors discussing new research showing how one can gain access to different GenAI platforms (Figure 6).

*Traditionally, supply chain attacks often pertain to the software development layer – for example, poisoned updates as in the SolarWinds attack. Data supply chains consist of data going through several steps of storage, processing, analysis and dissemination – often this workload is distributed across several entities.

For example, healthcare documents are passed between several providers, government entities and access systems for the patients. If any point of this chain becomes compromised, the threat actors gain access to all data flowing through that point, giving them an opportunity to either exfiltrate – or poison – that data.

The most likely scenario is exfiltration of data for monetary incentives; poisoning is a more likely outcome of attackers motivated by political influence. One of the more well-known examples of a data supply chain attack is the MoveIT breach.

Figure 3: An example of cybercriminals chatting on the Underground about using GPT to overcome development and coding gaps
Figure 4: Hackers on the underground discussing using LLMs for generating exploits
Figure 5: Abuse Prompts – EvilGPT offerings 
Figure 6: Threat actors sharing research on attacks targeting Generative AI

OpenAI—the Most Stolen GenAI Credentials

Of all the GenAI credentials being offered on the Underground, OpenAI usernames and passwords are the most prevalent. OpenAI has become one of the leading players in AI. ChatGPT has gained more than 100 million users since its public launch in November 2022.

TRU observes an average of over 200 OpenAI credentials posted for sale per day (Figure 7). These usernames and passwords are typically stolen from computer users’ internet browsers by cybercriminals using infostealers.

In the case of OpenAI, access to a subscriber’s account grants access to:

If a company is using GenAI solutions such as OpenAI in projects involving sensitive, valuable corporate data then the company should be implementing information security measures, starting with a general awareness of usage to help shape the development of policies around GenAI use. They must have a clear and thorough understanding of the potential risks and the protections implemented by the organization to defend against those risks.

Figure 7: Infostealer logs containing credentials for Generative AI

The Future of the LLM Threat Landscape

The uncertain attack surface of LLMs lies mostly in the shadows of on-premises usage, and in cases where companies use LLMs as part of a data supply chain. As quickly as the corporate world has adopted GenAI, so have cybercriminals. Cybercriminals take interest in researching how to exploit artificial intelligence, and they share the articles on underground forums.

As LLMs become integrated with data supply chains, they create unexplored and fractal attack surfaces. And cybercriminals that have experience with hybrid attacks on LLMs will combine traditional attacks with prompt injection to explore these attack surfaces for exploitation.

Recommendations on How to Protect Against GenAI Attacks

If you are not currently engaged with a Managed Detection and Response (MDR) provider, we highly recommend you partner with us for security services to disrupt threats before they impact your business. Want to learn more? Connect with an eSentire Security Specialist.

Keegan Keplinger
Keegan Keplinger Senior Security Researcher

Keegan Keplinger is a Senior Security Researcher at eSentire with exprience in detection engineering, incident reconstruction, and educational outreach. Keegan uses data-driven methods to help decision-makers understand security challenges in the wild.

Read the Latest from eSentire