The global fashion industry is estimated to be valued at $1.84 trillion in 2025, accounting for approximately 1.63% of the world’s GDP (Statista, 2025). With that much capital in play comes an equally large potential for toxic content and misuse.
In the fashion industry, teams innovate quickly and often use AI to do so. Sharing content, whether through videos, designs, or other media, brings content moderation challenges. There is always a risk, through intentional or unintentional actions, that inappropriate, offensive, or toxic content is produced and shared. This can lead to violations of company policy and irreparable damage to brand reputation. Implementing guardrails while using AI to innovate faster in this industry can provide long-lasting benefits.
In this post, we cover the use of the multimodal toxicity detection feature of Amazon Bedrock Guardrails to guard against toxic content. Whether you’re an enterprise giant in the fashion industry or an up-and-coming brand, you can use this solution to screen potentially harmful content before it impacts your brand’s reputation and ethical standards. For the purposes of this post, ethical standards refer to toxic, disrespectful, or harmful content and images that could be created by fashion designers.
Brand reputation is a priceless currency that transcends trends, with companies competing not just for sales but for consumer trust and loyalty. As technology evolves, effective reputation management strategies should include using AI in responsible ways. In this age of rapid innovation, as the fashion industry evolves and creatives move faster, brands that strategically manage their reputation while adapting to changing consumer preferences and global trends will distinguish themselves from the rest of the industry. Take the first step toward responsible AI within your creative practices with Amazon Bedrock Guardrails.
To incorporate multimodal toxicity detection guardrails in an image generation workflow with Amazon Bedrock, you can use the following AWS services:
The following diagram illustrates the solution architecture.

For this solution, you must have the following:
The following IAM policy grants specific permissions for a Lambda function to interact with Amazon CloudWatch Logs, access objects in an S3 bucket, and apply Amazon Bedrock guardrails, enabling the function to log its activities, read from Amazon S3, and use Amazon Bedrock content filtering capabilities. Before using this policy, update the placeholders with your resource-specific values:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudWatchLogsAccess",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:<REGION>:<ACCOUNT-ID>:*"
        },
        {
            "Sid": "CloudWatchLogsStreamAccess",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:<REGION>:<ACCOUNT-ID>:log-group:/aws/lambda/<FUNCTION-NAME>:*"
            ]
        },
        {
            "Sid": "S3ReadAccess",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<BUCKET-NAME>/*"
        },
        {
            "Sid": "BedrockGuardrailsAccess",
            "Effect": "Allow",
            "Action": "bedrock:ApplyGuardrail",
            "Resource": "arn:aws:bedrock:<REGION>:<ACCOUNT-ID>:guardrail/<GUARDRAIL-ID>"
        }
    ]
}
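If you manage permissions programmatically, you can attach this policy to the Lambda function’s execution role as an inline policy. The following is a minimal sketch using boto3; the file, role, and policy names are placeholders for illustration, not values from this post.

import json
import boto3

iam_client = boto3.client("iam")

# Load the policy document shown above (with the placeholders already replaced)
with open("guardrail-lambda-policy.json") as f:
    policy_document = json.load(f)

# Attach it as an inline policy to the function's execution role
# (role and policy names below are examples; use your own)
iam_client.put_role_policy(
    RoleName="image-moderation-lambda-role",
    PolicyName="bedrock-guardrail-image-moderation",
    PolicyDocument=json.dumps(policy_document),
)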
The following steps walk you through how to incorporate multimodal toxicity detection guardrails in an image generation workflow with Amazon Bedrock.
The foundation of our moderation system is a guardrail in Amazon Bedrock configured specifically for image content. To create a multimodal toxicity detection guardrail, complete the following steps:
Next, you configure the content filters. Complete the following steps:
By setting up these filters, you create a comprehensive safeguard that can detect potentially harmful content across multiple modalities, enhancing the safety and reliability of your AI applications.
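If you prefer to script this setup rather than use the console, the following is a minimal sketch using the boto3 control-plane client for Amazon Bedrock. The guardrail name, filter types, and strengths shown are illustrative assumptions; check the current API reference for the filter and modality options available in your Region and SDK version.

import boto3

bedrock_client = boto3.client("bedrock")  # control plane, not bedrock-runtime

# Create a guardrail whose content filters assess both text and images
# (filter types and strengths below are example choices; adjust to your policy)
response = bedrock_client.create_guardrail(
    name="fashion-image-moderation",  # example name
    description="Blocks toxic image and text content in design workflows",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "HATE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            },
        ]
    },
    blockedInputMessaging="Sorry, the model cannot answer this question.",
    blockedOutputsMessaging="Sorry, the model cannot answer this question.",
)
guardrail_id = response["guardrailId"]

# Publish a numbered version to reference from the Lambda function
version = bedrock_client.create_guardrail_version(
    guardrailIdentifier=guardrail_id,
    description="Initial version",
)["version"]
print(guardrail_id, version)

Note the guardrail ID and version that are returned; you will need both when you configure the Lambda function later in this post.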

You need a place for users (or other processes) to upload the images that require moderation. To create an S3 bucket, complete the following steps:
This bucket is where our workflow begins—new images landing here will trigger the next step.
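If you prefer the AWS SDK over the console for this step, a minimal sketch for creating the bucket follows; the bucket name and Region are placeholders, and note that us-east-1 does not accept a LocationConstraint.

import boto3

s3_client = boto3.client("s3", region_name="us-west-2")  # example Region

# Bucket names must be globally unique; this one is a placeholder
s3_client.create_bucket(
    Bucket="my-image-moderation-uploads",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)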

We use an AWS Lambda function written in Python. Lambda is a serverless compute service, and this function is invoked when a new image arrives in the S3 bucket. The function sends the image to our guardrail in Amazon Bedrock for analysis. Complete the following steps to create your function:
Make sure the function’s execution role has read access to the S3 bucket (s3:GetObject) and permission to interact with Amazon Bedrock Guardrails using the bedrock:ApplyGuardrail action for your specific guardrail.
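If you script the function’s creation instead of using the console, the following boto3 sketch shows one way to do it. The function name, role ARN, runtime, and deployment package path are assumptions for illustration; the zip file is assumed to contain a lambda_function.py with the handler shown later in this post.

import boto3

lambda_client = boto3.client("lambda")

# deployment.zip is assumed to contain lambda_function.py with lambda_handler
with open("deployment.zip", "rb") as f:
    zip_bytes = f.read()

lambda_client.create_function(
    FunctionName="image-moderation-guardrail",  # example name
    Runtime="python3.12",                       # any supported Python runtime
    Role="arn:aws:iam::<ACCOUNT-ID>:role/image-moderation-lambda-role",
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": zip_bytes},
    Timeout=30,  # allow time for the S3 download and the ApplyGuardrail call
)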
Let’s explore the Python code that powers this function. We use the AWS SDK for Python (Boto3) to interact with Amazon S3 and Amazon Bedrock. The code first identifies the uploaded image from the S3 event trigger. It then checks if the image format is supported (JPEG or PNG) and verifies that the size doesn’t exceed the guardrail limit of 4 MB.
The key step involves preparing the image data for the ApplyGuardrail API call. We package the raw image bytes along with its format into a structure that Amazon Bedrock understands. We use the ApplyGuardrail API; this is efficient because we can check the image against our configured policies without needing to invoke a full foundation model.
Finally, the function calls ApplyGuardrail, passing the image content, the guardrail ID, and the version you noted earlier. It then interprets the response from Amazon Bedrock, logging whether the guardrail intervened (GUARDRAIL_INTERVENED) or the content passed (NONE), along with the specific harmful categories detected if the image was blocked.
The following is Python code you can use as a starting point (remember to replace the placeholders):
import boto3
import json
import os
import traceback

s3_client = boto3.client('s3')
# Use 'bedrock-runtime' for ApplyGuardrail and InvokeModel
bedrock_runtime_client = boto3.client('bedrock-runtime')

GUARDRAIL_ID = '<YOUR_GUARDRAIL_ID>'
GUARDRAIL_VERSION = '<SPECIFIC_VERSION>'  # e.g., '1'

# Image formats supported by the guardrail feature
SUPPORTED_FORMATS = {'jpg': 'jpeg', 'jpeg': 'jpeg', 'png': 'png'}


def lambda_handler(event, context):
    # Get the bucket name and object key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print(f"Processing s3://{bucket}/{key}")

    # Extract the file extension and check whether it is supported
    try:
        file_ext = os.path.splitext(key)[1].lower().lstrip('.')
        image_format = SUPPORTED_FORMATS.get(file_ext)
        if not image_format:
            print(f"Unsupported image format: {file_ext}. Skipping.")
            return {'statusCode': 400, 'body': 'Unsupported image format'}
    except Exception as e:
        print(f"Error determining file format for {key}: {e}")
        return {'statusCode': 500, 'body': 'Error determining file format'}

    try:
        # Get the image bytes from S3
        response = s3_client.get_object(Bucket=bucket, Key=key)
        image_bytes = response['Body'].read()

        # Basic size check (the guardrail limit is 4 MB)
        if len(image_bytes) > 4 * 1024 * 1024:
            print(f"Image size exceeds 4MB limit for {key}. Skipping.")
            return {'statusCode': 400, 'body': 'Image size exceeds 4MB limit'}

        # Prepare the content list for the ApplyGuardrail API
        content_to_assess = [
            {
                "image": {
                    "format": image_format,  # 'jpeg' or 'png'
                    "source": {
                        "bytes": image_bytes  # Pass raw bytes
                    }
                }
            }
        ]

        # Call the ApplyGuardrail API
        print(f"Calling ApplyGuardrail for {key} (Format: {image_format})")
        guardrail_response = bedrock_runtime_client.apply_guardrail(
            guardrailIdentifier=GUARDRAIL_ID,
            guardrailVersion=GUARDRAIL_VERSION,
            source='INPUT',  # Assess as user input
            content=content_to_assess
        )

        # Process the response
        print("Guardrail Assessment Response:", json.dumps(guardrail_response))
        action = guardrail_response.get('action')
        assessments = guardrail_response.get('assessments', [])
        outputs = guardrail_response.get('outputs', [])  # Relevant if the guardrail returns replacement output
        print(f"Guardrail Action for {key}: {action}")

        if action == 'GUARDRAIL_INTERVENED':
            print(f"Content BLOCKED. Assessments: {json.dumps(assessments)}")
            # Add specific handling for blocked content
        elif action == 'NONE':
            print("Content PASSED.")
            # Add handling for passed content
        else:
            # Handle any other action values
            print(f"Guardrail took action: {action}. Outputs: {json.dumps(outputs)}")

        return {
            'statusCode': 200,
            'body': json.dumps(f'Successfully processed {key}. Guardrail action: {action}')
        }

    except bedrock_runtime_client.exceptions.ValidationException as ve:
        print(f"Validation Error calling ApplyGuardrail for {key}: {ve}")
        # Raised for exceeding size/dimension limits, among other issues
        return {'statusCode': 400, 'body': f'Validation Error: {ve}'}
    except Exception as e:
        print(f"Error processing image {key}: {e}")
        # Log the full stack trace for debugging
        traceback.print_exc()
        return {'statusCode': 500, 'body': f'Internal server error processing {key}'}
Check the function’s execution timeout (under Configuration, General configuration) and verify it allows enough time to download the image and wait for the Amazon Bedrock API response; 30 seconds is a reasonable starting point.
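If you manage the function programmatically, the same timeout can be set with a single call; the function name below is a placeholder.

import boto3

lambda_client = boto3.client("lambda")

# Allow up to 30 seconds for the S3 read and the guardrail call
lambda_client.update_function_configuration(
    FunctionName="image-moderation-guardrail",
    Timeout=30,
)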
With the S3 bucket ready and the function coded, you must now connect them. This is done by setting up an Amazon S3 trigger on the Lambda function:


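If you would rather wire up the trigger with the SDK than through the console, a minimal sketch follows; the bucket name, function name, function ARN, and statement ID are placeholders consistent with the earlier examples.

import boto3

lambda_client = boto3.client("lambda")
s3_client = boto3.client("s3")

BUCKET_NAME = "my-image-moderation-uploads"  # example bucket
FUNCTION_ARN = "arn:aws:lambda:<REGION>:<ACCOUNT-ID>:function:image-moderation-guardrail"

# Allow Amazon S3 to invoke the function
lambda_client.add_permission(
    FunctionName="image-moderation-guardrail",
    StatementId="s3-invoke-image-moderation",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET_NAME}",
)

# Invoke the function whenever a new object is created in the bucket
s3_client.put_bucket_notification_configuration(
    Bucket=BUCKET_NAME,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)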
It’s time to see your automated workflow in action! Upload a few test images (JPEG or PNG, under 4 MB) to your designated S3 bucket. Include images that are clearly safe and others that might trigger the harmful content filters you configured in your guardrail. On the CloudWatch console, find the log group associated with your Lambda function. Examining the latest log streams will show you the function’s execution details. You should see messages confirming which file was processed, the call to ApplyGuardrail, and the final guardrail action (NONE or GUARDRAIL_INTERVENED). If an image was blocked, the logs should also show the specific assessment details, indicating which harmful category was detected.
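A quick way to exercise the pipeline from code is to upload a test image with the SDK and then read the latest log events; the file, bucket, and function names below are placeholders matching the earlier examples.

import boto3

s3_client = boto3.client("s3")
logs_client = boto3.client("logs")

# Upload a test image to kick off the workflow
s3_client.upload_file("safe-design.png", "my-image-moderation-uploads", "safe-design.png")

# Later, fetch the most recent log events from the function's log group
log_group = "/aws/lambda/image-moderation-guardrail"
streams = logs_client.describe_log_streams(
    logGroupName=log_group, orderBy="LastEventTime", descending=True, limit=1
)["logStreams"]
if streams:
    events = logs_client.get_log_events(
        logGroupName=log_group, logStreamName=streams[0]["logStreamName"]
    )["events"]
    for event in events:
        print(event["message"], end="")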

By following these steps, you have established a robust, serverless pipeline for automatically moderating image content using the power of Amazon Bedrock Guardrails. This proactive approach helps maintain safer online environments and aligns with responsible AI practices.
The following is an example response from the ApplyGuardrail API for an image that triggered the HATE content filter:
{
    "ResponseMetadata": {
        "RequestId": "fa025ab0-905f-457d-ae19-416537e2c69f",
        "HTTPStatusCode": 200,
        "HTTPHeaders": {
            "content-type": "application/json",
            "content-length": "1008",
            "connection": "keep-alive"
        },
        "RetryAttempts": 0
    },
    "usage": {
        "topicPolicyUnits": 0,
        "contentPolicyUnits": 0,
        "wordPolicyUnits": 0,
        "sensitiveInformationPolicyUnits": 0,
        "sensitiveInformationPolicyFreeUnits": 0,
        "contextualGroundingPolicyUnits": 0
    },
    "action": "GUARDRAIL_INTERVENED",
    "outputs": [
        {
            "text": "Sorry, the model cannot answer this question."
        }
    ],
    "assessments": [
        {
            "contentPolicy": {
                "filters": [
                    {
                        "type": "HATE",
                        "confidence": "MEDIUM",
                        "filterStrength": "HIGH",
                        "action": "BLOCKED"
                    }
                ]
            },
            "invocationMetrics": {
                "guardrailProcessingLatency": 918,
                "usage": {
                    "topicPolicyUnits": 0,
                    "contentPolicyUnits": 0,
                    "wordPolicyUnits": 0,
                    "sensitiveInformationPolicyUnits": 0,
                    "sensitiveInformationPolicyFreeUnits": 0,
                    "contextualGroundingPolicyUnits": 0
                },
                "guardrailCoverage": {
                    "images": {
                        "guarded": 1,
                        "total": 1
                    }
                }
            }
        }
    ],
    "guardrailCoverage": {
        "images": {
            "guarded": 1,
            "total": 1
        }
    }
}
When you’re ready to remove the moderation pipeline you built, you must clean up the resources you created to avoid unnecessary charges. Complete the following steps:
With these cleanup steps complete, you have successfully removed the components of your image moderation pipeline. You can recreate this solution in the future by following the steps outlined in this post—this highlights the ease of cloud-based, serverless architectures.
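If you scripted the setup, teardown can be scripted as well. The following is a minimal sketch, assuming the example resource names used throughout this post; replace the guardrail ID with your own.

import boto3

s3 = boto3.resource("s3")
lambda_client = boto3.client("lambda")
bedrock_client = boto3.client("bedrock")

# Empty and delete the upload bucket
bucket = s3.Bucket("my-image-moderation-uploads")
bucket.objects.all().delete()
bucket.delete()

# Delete the Lambda function
lambda_client.delete_function(FunctionName="image-moderation-guardrail")

# Delete the guardrail
bedrock_client.delete_guardrail(guardrailIdentifier="<YOUR_GUARDRAIL_ID>")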
In the fashion industry, protecting your brand’s reputation while maintaining creative innovation is paramount. By implementing Amazon Bedrock Guardrails multimodal toxicity detection, fashion brands can automatically screen content for potentially harmful material before it impacts their reputation or violates their ethical standards. As the fashion industry continues to evolve digitally, implementing robust content moderation systems isn’t just about risk management—it’s about building trust with your customers and maintaining brand integrity. Whether you’re an established fashion house or an emerging brand, this solution offers an efficient way to uphold your content standards. The solution we outlined in this post provides a scalable, serverless architecture that accomplishes the following:
If you’re interested in further insights on Amazon Bedrock Guardrails and its practical use, refer to the video Amazon Bedrock Guardrails: Make Your AI Safe and Ethical, and the post Amazon Bedrock Guardrails image content filters provide industry-leading safeguards, helping customer block up to 88% of harmful multimodal content: Generally available today.
Jordan Jones is a Solutions Architect at AWS within the Cloud Sales Center organization. He uses cloud technologies to solve complex problems, bringing defense industry experience and expertise in various operating systems, cybersecurity, and cloud architecture. He enjoys mentoring aspiring professionals and speaking on various career panels. Outside of work, he volunteers within the community and can be found watching Golden State Warriors games, solving Sudoku puzzles, or exploring new cultures through world travel.
Jean Jacques Mikem is a Solutions Architect at AWS with a passion for designing secure and scalable technology solutions. He uses his expertise in cybersecurity and technological hardware to architect robust systems that meet complex business needs. With a strong foundation in security principles and computing infrastructure, he excels at creating solutions that bridge business requirements with technical implementation.