The darknet, also known as the dark web, is a concealed section of the internet that's inaccessible via standard search engines. You can only access it using special software, settings, or authorization. This area comprises websites and content that are purposely kept hidden from public view.

Accessing the darknet typically requires Tor Browser, a specialized web browser that routes your internet traffic through a global network of volunteer-run relays. This makes it very difficult to trace which websites you visit, and those sites cannot easily determine your location.

To stay safe when visiting the dark web, use a secure browser such as Tor, never reveal personal information, and avoid opening suspicious files or links.

The darknet is often used for secure communication, discreet information or file sharing, anonymous research, and occasionally for illicit activity. It is also known for hosting underground black markets (darknet markets), whistleblowing platforms, and discussion boards that champion freedom of speech.

While merely accessing darknet markets is typically not against the law in most jurisdictions, buying or selling illicit goods on them is generally a crime. Some people visit darknet markets for lawful purposes such as research, journalism, or simply exploring online communities. It is essential to know your local laws regarding online activity and to be cautious when using these platforms.

News Updated on Aug. 10, 2024

Most Popular Product on the Darknet: Compromised GenAI Accounts

Malicious actors are selling stolen GenAI login credentials for platforms such as ChatGPT, Quillbot, Notion, Hugging Face, and Replit.

Cybercriminals aiming to exploit generative AI for phishing schemes and advanced malware can easily acquire access through darknet markets, where a significant number of threat actors are selling stolen GenAI credentials on a daily basis.

A study by eSentire found that hackers are trading usernames and passwords for about 400 GenAI accounts each day.

“Cybercriminals are promoting these credentials on well-known Russian darknet markets that deal in various illicit goods, including malware, infostealers, and crypters,” the researchers said. “Many of these GenAI credentials are extracted from corporate end users' devices infected by infostealers.”

Additionally, a Stealer Log, which encompasses all the information an infostealer collects from victim machines, including GenAI credentials, is currently being offered for $10 on darknet markets.
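Defenders buying back or receiving such stealer logs often need to triage them for corporate exposure. The sketch below is purely illustrative: stealer logs have no standard format, so it assumes a simple hypothetical `url username password` line layout, and it filters entries against the GenAI service domains named above.

```python
# Hypothetical triage sketch: stealer logs have no standard format, so a
# simple "url username password" line layout is assumed for illustration.
from urllib.parse import urlparse

# Domains of the GenAI services mentioned in the article.
GENAI_DOMAINS = {
    "chat.openai.com", "quillbot.com", "notion.so",
    "huggingface.co", "replit.com",
}

def flag_genai_credentials(lines):
    """Return (host, username) pairs whose URL matches a known GenAI service."""
    flagged = []
    for line in lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines
        url, user, _password = parts
        host = urlparse(url).hostname or ""
        # Match the domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
            flagged.append((host, user))
    return flagged

sample = [
    "https://chat.openai.com/auth alice@corp.example hunter2",
    "https://example.com/login bob@corp.example pass123",
]
print(flag_genai_credentials(sample))  # → [('chat.openai.com', 'alice@corp.example')]
```

In practice, security teams would also reset the flagged accounts and revoke any associated API tokens.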

LLM Paradise was one of the most popular choices.

Researchers identified one of the most notable underground markets facilitating the trade of generative AI credentials as LLM Paradise.

"The individual behind this market had a talent for marketing. They cleverly named it LLM Paradise. It promoted stolen GPT-4 and Claude API keys with ads saying: 'The Only Place to Get GPT-4 API KEYS for Unbeatable Prices,'" the researchers explained.

The threat actor offered GPT-4 and Claude API keys starting at just $15 each, while typical prices for various OpenAI models range from $5 to $30 per million tokens used, the researchers noted.
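The economics here are simple to work through from the article's own figures: the stolen key is a flat $15, while legitimate usage is billed per token, and the victim foots the token bill. A quick back-of-the-envelope calculation shows how little usage it takes for a stolen key to undercut paying the legitimate rates:

```python
# Figures from the article: a stolen key sells for a flat $15, while
# legitimate OpenAI usage runs $5-$30 per million tokens. This estimates
# the usage at which the stolen key "pays off" for an attacker.
KEY_PRICE_USD = 15.0
PRICE_PER_MILLION_TOKENS = {"low": 5.0, "high": 30.0}

for tier, rate in PRICE_PER_MILLION_TOKENS.items():
    breakeven_tokens = KEY_PRICE_USD / rate * 1_000_000
    print(f"{tier}: key cost equals {breakeven_tokens:,.0f} tokens of usage")
# → low: key cost equals 3,000,000 tokens of usage
# → high: key cost equals 500,000 tokens of usage
```

At the high-end rate, a $15 key breaks even after only half a million tokens of usage charged to the victim's account.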

LLM Paradise recently ceased operations for unknown reasons. However, advertisements for stolen GPT-4 API keys that the threat actor published on TikTok before the marketplace's closure remain online.

In addition to GPT-4 and Claude API keys, LLM Paradise and similar darknet markets listed credentials for Quillbot, Notion, Hugging Face, and Replit for sale.

Credentials can be exploited for phishing, malware, and data breaches.

Researchers from eSentire highlighted that stolen credentials hold significant value for cybercriminals due to their potential for substantial returns. "Threat actors are using popular AI platforms to create phishing campaigns, advanced malware, and chatbots for underground forums," they noted.

Moreover, these credentials can grant access to an organization's corporate generative AI accounts, which further enables the retrieval of customers' personal and financial information, proprietary intellectual property, and personally identifiable information.

The compromised credentials can also provide access to data intended exclusively for corporate clients, thus impacting generative AI platform providers as well. OpenAI has been particularly affected, with over 200 OpenAI credentials reportedly listed for sale daily.


Corporate users can take several steps to protect against attacks involving compromised generative AI accounts:

  • Monitor employee use of generative AI.
  • Urge AI providers to support multifactor authentication and WebAuthn-based credentials such as passkeys, rather than relying on passwords alone.
  • Use dark web monitoring to find stolen credentials.
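The monitoring step above can be sketched in code: scan text (log dumps, pastes, repositories) for strings that look like GenAI API keys. The prefixes below are based on publicly documented formats (OpenAI keys begin with `sk-`, Hugging Face tokens with `hf_`), but providers change key formats, so treat the exact patterns as assumptions rather than a definitive detector.

```python
# Minimal leak-monitoring sketch: flag strings resembling GenAI API keys.
# Key prefixes are assumptions based on publicly documented formats and
# may drift as providers rotate their key schemes.
import re

KEY_PATTERNS = {
    "openai": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
}

def find_key_like_strings(text):
    """Return (provider, match) pairs for anything resembling an API key."""
    hits = []
    for provider, pattern in KEY_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((provider, match))
    return hits

leak = "OPENAI_KEY=sk-abcdefghijklmnopqrstuvwx token=hf_ABCDEFGHIJKLMNOPQRSTUVWX"
print(find_key_like_strings(leak))
# → [('openai', 'sk-abcdefghijklmnopqrstuvwx'),
#    ('huggingface', 'hf_ABCDEFGHIJKLMNOPQRSTUVWX')]
```

Any hit should trigger immediate key revocation and rotation; pattern matching alone will produce false positives, so it is a triage signal, not proof of compromise.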