Prompting

Prompt Injection

Quick Answer

A security vulnerability where attacker-controlled text overrides intended instructions.

Prompt injection occurs when untrusted input overrides model instructions. Example: user input contains 'Ignore previous instructions...'. Prompt injection can cause models to behave unexpectedly. Defenses: parameterization, sandboxing, explicit instruction boundaries. Prompt injection is particularly dangerous with untrusted user input in system prompts. Careful prompt design and validation help. Prompt injection is a critical security concern.

Last verified: 2026-04-08

Compare models

See how different LLMs compare on benchmarks, pricing, and speed.

Browse all models →