AI/LLM Pentest Methodologies
Cobalt offers two levels of Artificial Intelligence (AI) and Large Language Model (LLM) pentesting of Web and Web + API Assets.
LLM/AI Prompt Injection (+4 credits) Focus on testing the security of your AI systems against prompt injection attacks. These attacks manipulate the AI’s input to generate malicious output, which can compromise the system’s integrity and confidentiality. Prompt Injection AI/LLM pentests can be run as an Agile pentest with an automated report.
Full Coverage LLM (+16 credits) Test your LLMs against the Open Web Application Security Project (OWASP) Top 10 for LLM Applications. Our tests check whether your AI applications are protected against unauthorized access, data breaches, and disruptions. For full coverage of your LLM and its web and API connections, this is run as a Comprehensive pentest, which includes a thorough final pentest analysis and report.
Specific categories that are covered as part of the Full Coverage LLM pentest include:
- Prompt Injection
- Insecure Output Handling
- Training Data Poisoning
- Model Denial of Service
- Supply Chain Vulnerabilities
- Sensitive Information Disclosure
- Insecure Plugin Design
- Excessive Agency
- Overreliance
- Model Theft
Learn more about how to scope an AI/LLM pentest.