Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. With Amazon Bedrock, you can experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that run tasks using your enterprise systems and data sources. Because Amazon Bedrock is serverless, you don’t have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.
In this post, we demonstrate how to use Amazon Bedrock with the AWS SDK for Python (Boto3) to programmatically incorporate FMs.
The solution uses an AWS SDK for Python (Boto3) script that invokes Anthropic's Claude 3 Sonnet on Amazon Bedrock, passing a prompt as input and returning the model's generated output. The following diagram illustrates the solution architecture.

Before you invoke the Amazon Bedrock API, make sure you have the following:
After you complete the prerequisites, you can start using Amazon Bedrock. Begin by scripting with the following steps:
import boto3
import json

# Set up the Amazon Bedrock client
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

# Define the model ID
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

# Prepare the input prompt
prompt = "Hello, how are you?"
Prompt engineering techniques can improve FM performance and enhance results.
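As a minimal sketch of this idea, the helper below assembles a structured prompt from optional context, few-shot examples, and the task itself. The helper name (build_prompt) and the template wording are our own illustrations, not part of the Bedrock API:

```python
def build_prompt(task, context="", examples=None):
    """Assemble a structured prompt: optional context, few-shot
    examples, then the task instruction itself."""
    parts = []
    if context:
        parts.append(f"Context:\n{context}")
    # Few-shot examples give the model a pattern to imitate
    for sample_input, sample_output in examples or []:
        parts.append(f"Example input: {sample_input}\nExample output: {sample_output}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarize the quarterly report in two sentences.",
    context="You are a concise financial analyst.",
    examples=[("Revenue grew 10%.", "Revenue rose by a tenth.")],
)
```

A structured prompt like this can then be passed to the payload in place of the plain greeting above.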
Before invoking the model, we need to define a payload: a set of instructions and parameters that guides the model's generation process. The payload structure varies depending on the chosen model; in this example, we use Anthropic's Claude 3 Sonnet on Amazon Bedrock. Think of the payload as the blueprint for the request, providing the model with the context and parameters it needs to generate the desired text from your prompt. Let's break down the key elements within this payload:
payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "temperature": 0.9,
    "top_k": 250,
    "top_p": 1,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                }
            ]
        }
    ]
}
# Invoke the Amazon Bedrock model
response = bedrock_client.invoke_model(
    modelId=model_id,
    body=json.dumps(payload)
)

# Process the response
result = json.loads(response["body"].read())
generated_text = "".join([output["text"] for output in result["content"]])
print(f"Response: {generated_text}")
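In production code, you will also want to handle API errors such as throttling or access denial. The following is a minimal sketch that wraps the call above in a reusable helper with basic error handling; the helper name (invoke_claude) is our own invention, and the ClientError fallback stub exists only so the sketch runs where botocore is not installed:

```python
import json

try:
    from botocore.exceptions import ClientError
except ImportError:  # fallback stub so the sketch runs without botocore
    class ClientError(Exception):
        pass

def invoke_claude(client, model_id, prompt, max_tokens=2048):
    """Invoke a Claude model on Amazon Bedrock and return the generated text."""
    payload = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }
    try:
        response = client.invoke_model(modelId=model_id, body=json.dumps(payload))
    except ClientError as err:
        # Surface throttling, access-denied, or validation errors with context
        raise RuntimeError(f"Bedrock invocation failed: {err}") from err
    result = json.loads(response["body"].read())
    return "".join(block["text"] for block in result["content"])
```

You could then call `invoke_claude(bedrock_client, model_id, prompt)` in place of the inline invocation.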
Let’s look at our complete script:

import boto3
import json

# Set up the Amazon Bedrock client
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

# Define the model ID
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"

# Prepare the input prompt
prompt = "Hello, how are you?"

# Create the request payload
payload = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 2048,
    "temperature": 0.9,
    "top_k": 250,
    "top_p": 1,
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                }
            ]
        }
    ]
}

# Invoke the Amazon Bedrock model
response = bedrock_client.invoke_model(
    modelId=model_id,
    body=json.dumps(payload)
)

# Process the response
result = json.loads(response["body"].read())
generated_text = "".join([output["text"] for output in result["content"]])
print(f"Response: {generated_text}")
Invoking the model with the prompt “Hello, how are you?” returns the model’s generated reply, printed by the script.
When you’re done using Amazon Bedrock, clean up temporary resources like IAM users and Amazon CloudWatch logs to avoid unnecessary charges. Cost considerations depend on usage frequency, chosen model pricing, and resource utilization while the script runs. See Amazon Bedrock Pricing for pricing details and cost-optimization strategies like selecting appropriate models, optimizing prompts, and monitoring usage.
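To monitor usage, you can read the token counts that Anthropic's message response format includes in its "usage" field and convert them into a rough per-call estimate. The sketch below assumes that field is present; the prices are placeholders, not real rates, so check Amazon Bedrock Pricing before relying on the numbers:

```python
# Placeholder prices in USD per 1,000 tokens -- see Amazon Bedrock Pricing
# for the actual rates of your chosen model and Region.
PRICE_PER_1K_INPUT = 0.003
PRICE_PER_1K_OUTPUT = 0.015

def estimate_cost(result):
    """Estimate the cost of one invocation from the response's token counts."""
    usage = result.get("usage", {})
    input_tokens = usage.get("input_tokens", 0)
    output_tokens = usage.get("output_tokens", 0)
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT \
        + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
```

Calling `estimate_cost(result)` after parsing the response gives a quick signal for tracking spend across many invocations.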
In this post, we demonstrated how to programmatically interact with Amazon Bedrock FMs using Boto3. We explored invoking a specific FM and processing the generated text, showcasing how developers can use these models in their applications across a variety of use cases.
Stay curious and explore how generative AI can revolutionize various industries. Explore the different models and APIs and run comparisons of how each model provides different outputs. Find the model that will fit your use case and use this script as a base to create agents and integrations in your solution.
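One way to run such comparisons is to send the same prompt to several Claude-family models, which all share the Anthropic messages payload format on Bedrock (other providers require provider-specific payloads). This is a sketch under that assumption; the helper name (compare_models) is our own:

```python
import json

def compare_models(client, model_ids, prompt):
    """Return {model_id: generated_text} for the same prompt sent to each model."""
    payload = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    })
    outputs = {}
    for model_id in model_ids:
        response = client.invoke_model(modelId=model_id, body=payload)
        result = json.loads(response["body"].read())
        outputs[model_id] = "".join(block["text"] for block in result["content"])
    return outputs
```

Comparing the returned texts side by side helps you pick the model whose style, quality, and cost best fit your use case.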

Merlin Naidoo is a Senior Technical Account Manager at AWS with over 15 years of experience in digital transformation and innovative technical solutions. His passion is connecting with people from all backgrounds and leveraging technology to create meaningful opportunities that empower everyone. When he’s not immersed in the world of tech, you can find him taking part in active sports.