This check verifies that Amazon Bedrock guardrails have the Prompt attack content filter set to HIGH strength to detect and block prompt injection and jailbreak patterns. Guardrails that lack this filter or use a lower strength are flagged.
Risk
Without HIGH prompt-attack filtering, models are exposed to prompt injection/jailbreaks:
- Confidentiality: coerced disclosure of sensitive data
- Integrity: policy evasion and manipulated outputs
- Operations: unintended tool execution and workflow tampering
Run this check with Prowler CLI
prowler aws --checks bedrock_guardrail_prompt_attack_filter_enabled
Recommendation
Set the Prompt attack filter to HIGH and apply defense in depth:
- Tag user/external inputs as untrusted for evaluation
- Combine with denied topics and sensitive-info filters
- Enforce least privilege and approvals for risky actions
- Monitor guardrail hits and tune to reduce false negatives
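The check's pass/fail rule can be sketched with boto3. `list_guardrails` and `get_guardrail` are the real Bedrock control-plane calls; the compliance rule below mirrors this check's description (a guardrail passes only if it has a PROMPT_ATTACK filter with HIGH input strength):

```python
def prompt_attack_filter_is_high(content_policy: dict) -> bool:
    """Return True if the guardrail's content policy includes a
    PROMPT_ATTACK filter configured at HIGH input strength."""
    for f in content_policy.get("filters", []):
        if f.get("type") == "PROMPT_ATTACK" and f.get("inputStrength") == "HIGH":
            return True
    return False


def audit_guardrails(region: str = "us-east-1") -> dict:
    """Map each guardrail ID in the region to True (compliant) or False."""
    # Imported lazily so the pure helper above works without boto3 installed.
    import boto3

    bedrock = boto3.client("bedrock", region_name=region)
    results = {}
    for g in bedrock.list_guardrails()["guardrails"]:
        detail = bedrock.get_guardrail(
            guardrailIdentifier=g["id"], guardrailVersion=g["version"]
        )
        results[g["id"]] = prompt_attack_filter_is_high(
            detail.get("contentPolicy", {})
        )
    return results
```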
Remediation
CLI
aws bedrock update-guardrail \
  --guardrail-identifier <guardrail_id> \
  --name <guardrail_name> \
  --blocked-input-messaging "<blocked_input_message>" \
  --blocked-outputs-messaging "<blocked_output_message>" \
  --content-policy-config 'filtersConfig=[{type=PROMPT_ATTACK,inputStrength=HIGH,outputStrength=NONE}]'
Note: update-guardrail replaces the guardrail's existing configuration, so retrieve the current settings first with aws bedrock get-guardrail and re-specify any other filters and policies. The prompt attack filter applies to inputs only, so its outputStrength must be NONE.
Terraform
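A minimal sketch using the AWS provider's `aws_bedrock_guardrail` resource; the resource name, guardrail name, and blocked messages are placeholders to adapt:

```hcl
resource "aws_bedrock_guardrail" "example" {
  name                      = "example-guardrail"
  blocked_input_messaging   = "Sorry, this request cannot be processed."
  blocked_outputs_messaging = "Sorry, this response cannot be returned."

  content_policy_config {
    filters_config {
      type            = "PROMPT_ATTACK"
      input_strength  = "HIGH"
      output_strength = "NONE" # prompt attack filtering applies to inputs only
    }
  }
}
```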
Other
- Open the AWS Console and go to Amazon Bedrock
- Select Guardrails, then choose your guardrail
- In Content filters, find Prompt attacks
- Set Strength to High
- Click Save
Resource Type
Other
References
- https://trendmicro.com/trendaivisiononecloudriskmanagement/knowledge-base/aws/Bedrock/prompt-attack-strength.html
- https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-injection.html
- https://support.icompaas.com/support/solutions/articles/62000233535-ensure-prompt-attack-filter-is-configured-at-highest-strength-for-amazon-bedrock-guardrails
- https://docs.aws.amazon.com/bedrock/latest/userguide/guardrails.html