As Large Language Models (LLMs) advance in their capabilities, cybercriminals are finding new ways to exploit them for illicit gain. One emerging trend, termed “LLMjacking” by Sysdig’s Threat Research Team (TRT), involves attackers using stolen credentials to gain unauthorised access to cloud-hosted LLMs, enabling them to evade high usage costs while burdening victims with the financial fallout.
Since its discovery, LLMjacking has increased rapidly in both frequency and complexity. The Sysdig team’s latest findings show how attackers are expanding their methods and exploiting LLMs not only for financial gain but for other purposes, including evading international sanctions.
What is LLMjacking?
LLMjacking occurs when attackers use stolen cloud credentials to access a victim’s LLM. This unauthorised use often results in significant financial costs, as LLMs require considerable computational resources to run. Consumption of a model like Claude 3 Opus can cost a victim more than $100,000 per day, and because the attackers pay nothing, victims are left to foot the bill.
In the past, cybercriminals focused on LLMs that were already available through compromised accounts. Recent trends, however, show attackers taking a more active approach: using stolen credentials to enable and invoke models in cloud services such as Amazon Bedrock. This shift has driven a surge in LLMjacking incidents, adding to the burden on businesses already struggling to manage cloud security risks.
The Growing Scale of LLMjacking Attacks
Sysdig TRT has tracked a dramatic rise in LLMjacking activity over the past few months. In July 2024, it recorded more than 85,000 LLM-related API requests, the bulk of them within just a few hours. The number of unique IP addresses involved is also growing, having doubled in the first half of 2024 alone.
Much of the abuse centres on the creation of prompts through APIs such as Amazon Bedrock’s InvokeModel and Converse. While many of these prompts are innocuous, a significant portion involves role-playing scenarios and even adult content generation, which can quickly drive up costs for victims.
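To make the mechanics concrete, the following is a minimal Python sketch of the kind of InvokeModel call being abused. It assumes valid AWS credentials are already configured in the environment (in an attack, the stolen ones) and that the boto3 SDK is installed; the model ID and prompt are illustrative, not taken from any observed attack.

```python
import json

import boto3

# Bedrock's runtime client handles model invocation; every call is
# billed to the account that owns the credentials, not to the caller.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative request body for Claude 3 Opus on Bedrock.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Write a short story."}],
})

response = client.invoke_model(
    modelId="anthropic.claude-3-opus-20240229-v1:0",
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```

A single call like this is cheap; the costs Sysdig describes come from attackers issuing tens of thousands of them against the victim’s account.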
New Attack Techniques and Concealment
As cybercriminals grow more adept at manipulating LLMs, their tactics have become more sophisticated. Attackers now use Amazon Bedrock’s Converse API to hold multi-turn conversations with LLMs, integrating external tools and expanding their exploitation methods. They’ve also started disabling logging using the DeleteModelInvocationLoggingConfiguration API, which stops their prompts and responses from appearing in invocation logs and helps them stay undetected for longer.
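The deletion call itself is a management event, however, and still lands in AWS CloudTrail, which gives defenders a detection opportunity. The Python sketch below is one assumed way to hunt for it (it is not something Sysdig published); it requires CloudTrail to be enabled in the region and permission to call cloudtrail:LookupEvents.

```python
import boto3

# Scan recent CloudTrail management events for attempts to disable
# Bedrock model-invocation logging.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

pages = cloudtrail.get_paginator("lookup_events").paginate(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "DeleteModelInvocationLoggingConfiguration",
    }]
)

for page in pages:
    for event in page["Events"]:
        # Any hit deserves investigation: legitimate administrators
        # rarely turn invocation logging off.
        print(event["EventTime"], event.get("Username", "unknown"),
              event["EventId"])
```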
Additionally, Sysdig TRT has found that some attackers are using LLMs to write scripts to automate their own malicious activities. For instance, one script continuously interacts with Claude 3 Opus, generating responses and monitoring output while managing multiple tasks concurrently.
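Sysdig has not published that script, but the pattern it describes can be sketched in a few lines. The hypothetical Python reconstruction below drives several prompts through the Converse API concurrently while collecting the output; the model ID and prompts are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-opus-20240229-v1:0"  # illustrative

def run_prompt(prompt: str) -> str:
    # Converse is Bedrock's multi-turn chat API; a single-message
    # call is the simplest case.
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

# Run several generation tasks concurrently and monitor the output,
# mirroring the behaviour Sysdig describes.
prompts = ["Task one...", "Task two...", "Task three..."]
with ThreadPoolExecutor(max_workers=3) as pool:
    for result in pool.map(run_prompt, prompts):
        print(result[:80])
```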
LLMjacking in the Geopolitical Landscape
LLMjacking is not always financially motivated. In some cases, cybercriminals in sanctioned countries, such as Russia, have used stolen credentials to bypass access restrictions imposed by companies like Amazon and Microsoft. Sysdig TRT documented instances of Russian nationals accessing advanced AI models for university projects, highlighting the geopolitical dimension of LLMjacking.
Protecting Against LLMjacking
LLMjacking presents a growing threat to organisations that rely on cloud-hosted LLMs. As attackers become more skilled, the costs and risks of these attacks will continue to rise. To mitigate the risk, organisations must adopt stronger security measures, such as:
- Protecting credentials and minimising access through strict policies.
- Monitoring cloud environments for unusual activity and indicators of AI abuse (a minimal example check follows this list).
- Staying informed on evolving attacker tactics to bolster defences.
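As a concrete starting point for the monitoring item above, the short Python sketch below checks that Bedrock model-invocation logging is still configured, since removing it is a known LLMjacking move. The check is illustrative and assumes permission to read the logging configuration.

```python
import boto3

# Verify that Bedrock model-invocation logging has not been disabled.
bedrock = boto3.client("bedrock", region_name="us-east-1")

config = bedrock.get_model_invocation_logging_configuration()
logging_config = config.get("loggingConfig")

if not logging_config:
    print("ALERT: Bedrock invocation logging is not configured")
else:
    print("Invocation logging enabled:", logging_config)
```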
With the rapid growth of AI and cloud adoption, businesses must remain vigilant and take proactive steps to safeguard against the rising threat of LLMjacking.