Amazon Bedrock has emerged as the preferred choice for tens of thousands of customers seeking to build their generative AI strategy. It offers a straightforward, fast, and secure way to develop advanced generative AI applications and experiences to drive innovation.
With the comprehensive capabilities of Amazon Bedrock, you have access to a diverse range of high-performing foundation models (FMs), empowering you to select the most suitable option for your specific needs, customize the model privately with your own data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and create managed agents that run complex business tasks.
Fine-tuning pre-trained language models allows organizations to customize and optimize the models for their specific use cases, providing better performance and more accurate outputs tailored to their unique data and requirements. By using fine-tuning capabilities, businesses can unlock the full potential of generative AI while maintaining control over the model’s behavior and aligning it with their goals and values.
In this post, we delve into the essential security best practices that organizations should consider when fine-tuning generative AI models.
Cloud security at AWS is the highest priority. Amazon Bedrock prioritizes security through a comprehensive approach to protect customer data and AI workloads.
Amazon Bedrock is built with security at its core, offering several features to protect your data and models. The main aspects of its security framework include:
Model customization is the process of providing training data to a model to improve its performance for specific use cases. Amazon Bedrock currently offers the following customization methods:
Model customization in Amazon Bedrock involves the following actions:
In this post, we explain these steps in relation to fine-tuning. However, you can apply the same concepts for continued pre-training as well.
The following architecture diagram illustrates the workflow of Amazon Bedrock model fine-tuning.

The workflow steps are as follows:
This workflow provides secure data handling across multiple AWS accounts while maintaining customer control over sensitive information using customer managed encryption keys.
You remain in control of your data: model providers can't access your customization training datasets or your inference data, so your data is never available to model providers for improving their base models. Your data is also unavailable to the Amazon Bedrock service team.
In the following sections, we go through the steps of fine-tuning and deploying the Meta Llama 3.1 8B Instruct model in Amazon Bedrock using the Amazon Bedrock console.
Before you get started, make sure you have the following prerequisites:
For this post, we use the us-west-2 AWS Region. For instructions on assigning permissions to the IAM role, refer to Identity-based policy examples for Amazon Bedrock and How Amazon Bedrock works with IAM.
To fine-tune a text-to-text model like Meta Llama 3.1 8B Instruct, prepare a training and optional validation dataset by creating a JSONL file with multiple JSON lines.
Each JSON line is a sample containing a prompt and completion field. The format is as follows:
{"prompt": "<prompt1>", "completion": "<expected generated text>"}
{"prompt": "<prompt2>", "completion": "<expected generated text>"}
The following is an example record from a sample dataset used for fine-tuning Meta Llama 3.1 8B Instruct in Amazon Bedrock. In JSONL format, each record occupies a single line of text.
{"prompt": "consumer complaints and resolutions for financial products", "completion": "{'Date received': '01/01/24', 'Product': 'Credit card', 'Sub-product': 'Store credit card', 'Issue': 'Other features, terms, or problems', 'Sub-issue': 'Other problem', 'Consumer complaint narrative': None, 'Company public response': None, 'Company': 'Bread Financial Holdings, Inc.', 'State': 'MD', 'ZIP code': '21060', 'Tags': 'Servicemember', 'Consumer consent provided?': 'Consent not provided', 'Submitted via': 'Web', 'Date sent to company': '01/01/24', 'Company response to consumer': 'Closed with non-monetary relief', 'Timely response?': 'Yes', 'Consumer disputed?': None, 'Complaint ID': 8087806}"}
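A malformed line in the dataset will cause the customization job's data validation to fail, so it's worth checking the file locally before uploading it to Amazon S3. The following is a minimal sketch; the `validate_jsonl` helper is illustrative, not part of any AWS SDK:

```python
import json

REQUIRED_KEYS = {"prompt", "completion"}

def validate_jsonl(lines):
    """Check that every non-blank line is valid JSON containing exactly
    a prompt and a completion field. Returns the number of valid records;
    raises ValueError on the first bad line."""
    count = 0
    for i, line in enumerate(lines, start=1):
        line = line.strip()
        if not line:
            continue  # skip blank lines
        try:
            record = json.loads(line)
        except json.JSONDecodeError as e:
            raise ValueError(f"line {i}: not valid JSON ({e})")
        if set(record) != REQUIRED_KEYS:
            raise ValueError(f"line {i}: expected keys {REQUIRED_KEYS}, got {set(record)}")
        count += 1
    return count

# Example: two well-formed training records
sample = [
    '{"prompt": "consumer complaints and resolutions for financial products", "completion": "..."}',
    '{"prompt": "another prompt", "completion": "another expected completion"}',
]
print(validate_jsonl(sample))  # → 2
```

In practice, you would read the lines with `open("train.jsonl")` and run the same check over the whole file.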
When uploading your training data to Amazon S3, you can use server-side encryption with AWS KMS. You can create KMS keys on the AWS Management Console, with the AWS Command Line Interface (AWS CLI) or SDKs, or with an AWS CloudFormation template. Complete the following steps to create a KMS key in the console:

Complete the following steps to create an S3 bucket and configure encryption:


Complete the following steps to upload the training data:

To create a VPC using Amazon Virtual Private Cloud (Amazon VPC), complete the following steps:

You can further secure your VPC by setting up an Amazon S3 VPC endpoint and using resource-based IAM policies to restrict access to the S3 bucket containing the model customization data.
Let’s create an Amazon S3 gateway endpoint, attach it to the VPC, and apply a custom IAM resource-based policy to more tightly control access to your Amazon S3 files.

The following code is a sample resource policy. Use the name of the bucket you created earlier.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictAccessToTrainingBucket",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::$your-bucket",
                "arn:aws:s3:::$your-bucket/*"
            ]
        }
    ]
}
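Rather than editing the `$your-bucket` placeholders by hand, you can render the endpoint policy programmatically and confirm it is valid JSON before attaching it. A minimal sketch (the bucket name is an example):

```python
import json

def s3_endpoint_policy(bucket: str) -> str:
    """Render the gateway endpoint policy shown above for a specific bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "RestrictAccessToTrainingBucket",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",       # bucket-level actions (ListBucket)
                    f"arn:aws:s3:::{bucket}/*",     # object-level actions (Get/PutObject)
                ],
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(s3_endpoint_policy("my-fine-tuning-data-bucket"))
```

Note that both the bucket ARN and the `/*` object ARN are needed: `s3:ListBucket` applies to the bucket itself, while `s3:GetObject` and `s3:PutObject` apply to objects within it.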
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. This VPC endpoint security group only allows traffic originating from the security group attached to your VPC private subnets, adding a layer of protection. Complete the following steps to create the security group:
Give the security group a descriptive name (for example, bedrock-kms-interface-sg).

Now you can create a security group to establish rules for controlling Amazon Bedrock custom fine-tuning job access to the VPC resources. You use this security group later during model customization job creation. Complete the following steps:
Give the security group a descriptive name (for example, bedrock-fine-tuning-custom-job-sg).

Now you can create an interface VPC endpoint (PrivateLink) to establish a private connection between the VPC and AWS KMS.

For the security group, use the one you created in the previous step.

Attach a VPC endpoint policy that controls access to resources through the VPC endpoint. The following code is a sample resource policy. Use the Amazon Resource Name (ARN) of the KMS key you created earlier.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDecryptAndView",
            "Principal": {
                "AWS": "*"
            },
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:ListAliases",
                "kms:ListKeys"
            ],
            "Resource": "$Your-KMS-KEY-ARN"
        }
    ]
}
Now you have successfully created the endpoints needed for private communication.

Let’s create a service role for model customization with the following permissions:
Let’s first create the required IAM policies:
You can use the following IAM permissions policy as a template for VPC permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeVpcs",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:${{region}}:${{account-id}}:network-interface/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/BedrockManaged": ["true"]
                },
                "ArnEquals": {
                    "aws:RequestTag/BedrockModelCustomizationJobArn": ["arn:aws:bedrock:${{region}}:${{account-id}}:model-customization-job/*"]
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface"
            ],
            "Resource": [
                "arn:aws:ec2:${{region}}:${{account-id}}:subnet/${{subnet-id}}",
                "arn:aws:ec2:${{region}}:${{account-id}}:subnet/${{subnet-id2}}",
                "arn:aws:ec2:${{region}}:${{account-id}}:security-group/${{security-group-id}}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterfacePermission",
                "ec2:DeleteNetworkInterface",
                "ec2:DeleteNetworkInterfacePermission"
            ],
            "Resource": "*",
            "Condition": {
                "ArnEquals": {
                    "ec2:Subnet": [
                        "arn:aws:ec2:${{region}}:${{account-id}}:subnet/${{subnet-id}}",
                        "arn:aws:ec2:${{region}}:${{account-id}}:subnet/${{subnet-id2}}"
                    ],
                    "ec2:ResourceTag/BedrockModelCustomizationJobArn": ["arn:aws:bedrock:${{region}}:${{account-id}}:model-customization-job/*"]
                },
                "StringEquals": {
                    "ec2:ResourceTag/BedrockManaged": "true"
                }
            }
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateTags"
            ],
            "Resource": "arn:aws:ec2:${{region}}:${{account-id}}:network-interface/*",
            "Condition": {
                "StringEquals": {
                    "ec2:CreateAction": [
                        "CreateNetworkInterface"
                    ]
                },
                "ForAllValues:StringEquals": {
                    "aws:TagKeys": [
                        "BedrockManaged",
                        "BedrockModelCustomizationJobArn"
                    ]
                }
            }
        }
    ]
}
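Hand-edited policies are easy to break; for example, a stray trailing comma makes the document invalid JSON and the console will reject it. One way to fill in the `${{...}}` placeholders and confirm the result still parses is simple string replacement; the following sketch uses example values and an excerpt of the policy above:

```python
import json

def fill_placeholders(template: str, values: dict) -> dict:
    """Substitute ${{key}} placeholders, then parse to confirm the policy
    is still valid JSON (json.loads raises on any syntax error)."""
    for key, value in values.items():
        template = template.replace("${{" + key + "}}", value)
    return json.loads(template)

# Excerpt of the VPC permissions policy above, with example values
excerpt = '''
{
    "Effect": "Allow",
    "Action": ["ec2:CreateNetworkInterface"],
    "Resource": ["arn:aws:ec2:${{region}}:${{account-id}}:network-interface/*"]
}
'''
statement = fill_placeholders(excerpt, {"region": "us-west-2", "account-id": "111122223333"})
print(statement["Resource"][0])
# → arn:aws:ec2:us-west-2:111122223333:network-interface/*
```

The same helper works for the full policy: substitute `region`, `account-id`, `subnet-id`, `subnet-id2`, and `security-group-id`, and any malformed JSON surfaces immediately as a parse error rather than as a deployment failure.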
You can use the following IAM permissions policy as a template for Amazon S3 permissions:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::training-bucket",
                "arn:aws:s3:::training-bucket/*",
                "arn:aws:s3:::validation-bucket",
                "arn:aws:s3:::validation-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::output-bucket",
                "arn:aws:s3:::output-bucket/*"
            ]
        }
    ]
}
Now let’s create the IAM role. Use the following trust policy, which allows Amazon Bedrock to assume the role only on behalf of model customization jobs in your account (replace account-id with your AWS account ID):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "bedrock.amazonaws.com"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "account-id"
                },
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:bedrock:us-west-2:account-id:model-customization-job/*"
                }
            }
        }
    ]
}
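The aws:SourceAccount and aws:SourceArn conditions in this trust policy guard against the confused deputy problem: Amazon Bedrock can assume the role only when acting for model customization jobs in your own account and Region. The following sketch builds the same policy for given values (the account ID and Region shown are examples):

```python
import json

def bedrock_trust_policy(account_id: str, region: str) -> dict:
    """Build the trust policy above, scoped to model customization
    jobs in a single account and Region."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "bedrock.amazonaws.com"},
                "Action": "sts:AssumeRole",
                "Condition": {
                    # Only jobs in this account may use the role
                    "StringEquals": {"aws:SourceAccount": account_id},
                    # Only model customization job ARNs in this Region qualify
                    "ArnEquals": {
                        "aws:SourceArn": f"arn:aws:bedrock:{region}:{account_id}:model-customization-job/*"
                    },
                },
            }
        ],
    }

policy = bedrock_trust_policy("111122223333", "us-west-2")
print(json.dumps(policy, indent=4))
```

Without these conditions, any Amazon Bedrock customization job, including one in another customer's account, could in principle present itself to assume a role that trusts the service principal alone.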


In the KMS key you created in the previous steps, you need to update the key policy to include the ARN of the IAM role. The following code is a sample key policy:
{
    "Version": "2012-10-17",
    "Id": "key-consolepolicy-3",
    "Statement": [
        {
            "Sid": "BedrockFineTuneJobPermissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "$IAM Role ARN"
            },
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey",
                "kms:Encrypt",
                "kms:DescribeKey",
                "kms:CreateGrant",
                "kms:RevokeGrant"
            ],
            "Resource": "$ARN of the KMS key"
        }
    ]
}
For more details, refer to Encryption of model customization jobs and artifacts.
Complete the following steps to set up your fine-tuning job:




When you specify the VPC subnets and security groups for a job, Amazon Bedrock creates elastic network interfaces (ENIs) that are associated with your security groups in one of the subnets. ENIs allow the Amazon Bedrock job to connect to resources in your VPC.
We recommend that you provide at least one subnet in each Availability Zone.

Refer to Custom model hyperparameters for additional details.



On the Amazon Bedrock console, choose Custom models in the navigation pane and locate your job.

You can monitor the job on the job details page.

After fine-tuning is complete (as shown in the following screenshot), you can use the custom model for inference. However, before you can use a customized model, you need to purchase provisioned throughput for it.

Complete the following steps:


After the provisioned throughput is in service, you can use the model for inference.

Now you’re ready to use your model for inference.

Now you can ask sample questions, as shown in the following screenshot.

Implementing these procedures allows you to follow security best practices when you deploy and use your fine-tuned model within Amazon Bedrock for inference tasks.
When developing a generative AI application that requires access to this fine-tuned model, you have the option to configure it within a VPC. By employing a VPC interface endpoint, you can make sure communication between your VPC and the Amazon Bedrock API endpoint occurs through a PrivateLink connection, rather than through the public internet.
This approach further enhances security and privacy. For more information on this setup, refer to Use interface VPC endpoints (AWS PrivateLink) to create a private connection between your VPC and Amazon Bedrock.
Delete the following AWS resources created for this demonstration to avoid incurring future charges:
In this post, we implemented secure fine-tuning jobs in Amazon Bedrock, which is crucial for protecting sensitive data and maintaining the integrity of your AI models.
By following the best practices outlined in this post, including proper IAM role configuration, encryption at rest and in transit, and network isolation, you can significantly enhance the security posture of your fine-tuning processes.
By prioritizing security in your Amazon Bedrock workflows, you not only safeguard your data and models, but also build trust with your stakeholders and end-users, enabling responsible and secure AI development.
As a next step, try the solution out in your account and share your feedback.
Vishal Naik is a Sr. Solutions Architect at Amazon Web Services (AWS). He is a builder who enjoys helping customers accomplish their business needs and solve complex challenges with AWS solutions and best practices. His core areas of focus include generative AI and machine learning. In his spare time, Vishal loves making short films on time travel and alternate universe themes.
Sumeet Tripathi is an Enterprise Support Lead (TAM) at AWS in North Carolina. He has over 17 years of experience in technology across various roles. He is passionate about helping customers reduce operational challenges and friction. His focus areas are AI/ML and the Energy & Utilities segment. Outside work, he enjoys traveling with family, watching cricket, and movies.