MCP servers, agent skills, plugins, and AI-generated code introduce threats that traditional scanners can’t see. Arcwall scans all of it — in your IDE, on every PR, and from the web.
Start scanning →MCP servers, agent skills, plugins, AI-generated code, and system-level threat modeling — all in one platform.
Scan every MCP configuration in your workspace for threats hidden in tool descriptions, server definitions, and endpoint URLs.
Inspect skill definitions, manifests, and referenced scripts for threats that give AI agents unauthorized capabilities.
Assess AI plugins and tool integrations for insecure design patterns — mapped directly to OWASP LLM07 (Insecure Plugin Design).
Detect security issues that AI coding assistants commonly introduce — patterns that traditional scanners don't flag because they look like intentional code.
Test system prompts, few-shot examples, and dynamic prompt templates for injection vulnerabilities, jailbreak susceptibility, and unsafe content generation patterns.
Design-stage STRIDE analysis for AI systems — reasoning about trust boundaries, agent autonomy risks, memory poisoning vectors, and LLM privilege escalation. Mapped against OWASP LLM Top 10 and MITRE ATLAS for compliance-grade output.
Every finding is mapped to the OWASP LLM Top 10 and MITRE ATLAS — the two leading AI security frameworks.
| Reference | Threat Category | What Arcwall Checks |
|---|---|---|
| LLM01 | Prompt Injection | MCP tool descriptions, skill manifests, system prompts for hidden instructions |
| LLM02 | Insecure Output Handling | How agent responses are consumed, acted on, and rendered downstream |
| LLM03 | Training Data Poisoning | Data sources the agent can access and modify — RAG pipelines, memory stores |
| LLM04 | Model Denial of Service | Unbounded token usage, recursive tool calls, resource exhaustion patterns |
| LLM05 | Supply Chain Vulnerabilities | MCP server origins, skill dependencies, plugin source verification |
| LLM06 | Sensitive Info Disclosure | Data flows to and from the LLM — what can leak through responses |
| LLM07 | Insecure Plugin Design | Plugin permission scope, input validation, authentication enforcement |
| LLM08 | Excessive Agency | Actions the agent can take autonomously without human approval |
| LLM09 | Overreliance | Missing human approval gates and validation checkpoints |
| LLM10 | Model Theft | Prompt extraction risks, system prompt exposure paths |
Install the VS Code extension or run scans from the web app. Every finding maps to OWASP, MITRE ATLAS, and CWE.