AI Defense Lab

Your AI Security Learning Platform

OWASP LLM Top 10 Coverage

LLM01: Prompt Injection

An attacker crafts input that overrides the model's system instructions, causing unintended behavior.

Coverage: Covered (2 detectors)
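
A minimal sketch of how a rule-based detector for this category might work. The patterns and function name below are illustrative assumptions, not this platform's actual detectors; real systems layer heuristics, classifiers, and context checks on top of simple pattern matching.

```python
import re

# Illustrative phrasings that often signal an instruction-override attempt.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) instructions",
    r"you are now\b",
    r"disregard (the )?(system|developer) prompt",
]

def detect_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Pattern lists like this catch only known phrasings; they are a cheap first filter, not a complete defense.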

LLM02: Insecure Output Handling

LLM output is used in downstream systems without validation, enabling code execution or data leaks.

Coverage: Covered (1 detector)
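
The core mitigation is to treat model output as untrusted input. As a minimal sketch (assuming the output is destined for an HTML page), escaping before rendering keeps markup in a response from executing in the browser:

```python
import html

def render_model_output(output: str) -> str:
    """Escape LLM output before embedding it in an HTML page, so a
    response containing markup or script tags is displayed as text
    rather than executed."""
    return html.escape(output)
```

The same principle applies to any downstream sink: parameterize SQL, shell-quote commands, and validate structured output against a schema.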

LLM03: Training Data Poisoning

Malicious data is injected into training or retrieval sources, causing the model to produce harmful outputs.

Coverage: Covered (1 detector)

LLM04: Model Denial of Service

Attackers craft inputs that consume excessive resources, causing the model to become unresponsive.

Coverage: Not yet covered
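
A first line of defense is bounding input size before a request ever reaches the model. A minimal sketch with an illustrative character limit; production systems typically count tokens instead and also rate-limit per client:

```python
def enforce_input_budget(prompt: str, max_chars: int = 4000) -> str:
    """Reject oversized prompts before they reach the model.
    max_chars is an illustrative limit, not a recommended value."""
    if len(prompt) > max_chars:
        raise ValueError("prompt exceeds input budget")
    return prompt
```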

LLM05: Supply Chain Vulnerabilities

Compromised components in the LLM supply chain (models, plugins, data) introduce security risks.

Coverage: Not yet covered

LLM06: Sensitive Information Disclosure

The LLM reveals confidential information from its training data, system prompts, or connected systems.

Coverage: Covered (2 detectors)
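
Output-side redaction is one common mitigation. A minimal sketch with two illustrative patterns; a real deployment would use a dedicated PII/secrets scanner with far broader coverage:

```python
import re

# Illustrative redaction rules, not an exhaustive list.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN shape
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email address
]

def redact(text: str) -> str:
    """Replace sensitive patterns in model output before returning it."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```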

LLM07: Insecure Plugin Design

LLM plugins accept unchecked input or have excessive permissions, creating attack surfaces.

Coverage: Covered (1 detector)

LLM08: Excessive Agency

The LLM has too much autonomy or access, enabling it to take harmful actions without oversight.

Coverage: Covered (2 detectors)
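
A common control is to gate tool calls through an allowlist and require human approval for state-changing actions. The tool names below are hypothetical, chosen only to show the shape of the gate:

```python
# Read-only tools run freely; state-changing tools need explicit approval.
READ_ONLY_TOOLS = {"search_docs", "get_weather"}
PRIVILEGED_TOOLS = {"send_email", "delete_file"}

def authorize_tool_call(tool: str, human_approved: bool = False) -> bool:
    """Return True only if the requested tool call is allowed to run."""
    if tool in READ_ONLY_TOOLS:
        return True
    if tool in PRIVILEGED_TOOLS:
        return human_approved
    return False  # unknown tools are denied by default
```

Deny-by-default for unknown tools is the key design choice: an agent that invents or discovers a tool name gets nothing until a human adds it to a list.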

LLM09: Overreliance

Users trust LLM outputs without verification, leading to misinformation or flawed decisions.

Coverage: Not yet covered

LLM10: Model Theft

An attacker gains unauthorized access to the model's weights, parameters, or fine-tuning data.

Coverage: Not yet covered