Cloud technologies are progressing at a rapid pace. Businesses are adopting new innovations and technologies to create cutting-edge solutions for their customers. However, security remains a major risk when adopting the latest technologies. Enterprises often rely on reactive security monitoring and notification techniques, but those techniques might not be sufficient to safeguard them from vulnerable assets and third-party attacks. You need to establish proper security guardrails in your cloud environment and create a proactive monitoring practice to strengthen your cloud security posture and maintain required compliance standards.
To address this challenge, this post demonstrates a proactive approach for security vulnerability assessment of your accounts and workloads, using Amazon GuardDuty, Amazon Bedrock, and other AWS serverless technologies. This approach aims to identify potential vulnerabilities proactively and provide your users with timely alerts and recommendations, helping you avoid reactive escalations and further damage. By implementing a proactive security monitoring and alerting system, users can receive personalized notifications in preferred channels like email, SMS, or push notifications. These alerts concisely summarize the identified security issues and provide succinct troubleshooting steps to fix the problem promptly, without the need for escalation.
GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior across your AWS environment. GuardDuty combines machine learning (ML), anomaly detection, and malicious file discovery, using both AWS and industry-leading third-party sources, to help protect AWS accounts, workloads, and data. GuardDuty integrates with Amazon EventBridge by emitting an event to EventBridge for each newly generated finding. This solution uses a GuardDuty findings notification through EventBridge to invoke AWS Step Functions, a serverless orchestration engine, which runs a state machine. The Step Functions state machine invokes AWS Lambda functions to get a findings summary and remediation steps through Amazon Bedrock.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
By using generative AI FMs on Amazon Bedrock, users can quickly analyze vast amounts of security data to identify patterns and anomalies that may indicate potential threats or breaches. Furthermore, by recognizing patterns in network traffic, user behavior, or system logs, such FMs can help identify suspicious activities or security vulnerabilities. Generative AI can make predictions about future security threats or attacks by analyzing historical security data and trends. This can help organizations proactively implement security measures to prevent breaches before they occur. This form of automation can help improve efficiency and reduce the response time to security threats.
The solution uses the built-in integration between GuardDuty and EventBridge to raise an event notification for any new vulnerability findings in your AWS accounts or workloads. You can configure the EventBridge rule to filter the findings based on severity so that high-severity findings are prioritized. The EventBridge rule invokes a Step Functions workflow. The workflow invokes a Lambda function and passes the GuardDuty findings details. The Lambda function calls Anthropic’s Claude 3 Sonnet model through Amazon Bedrock APIs with the input request. The API returns the finding summarization and mitigation steps. The Step Functions workflow sends findings and remediation notifications to the subscribers or users using Amazon Simple Notification Service (Amazon SNS). In this post, we use email notification, but you can extend the solution to send mobile text or push notifications.
The solution uses the following key services:
The following diagram illustrates the solution architecture.

The workflow includes the following steps:
The solution provides the following benefits for end-users:
Complete the following prerequisite steps:
Complete the following steps to deploy the solution:
The example rule in the following screenshot filters high-severity findings at severity level 8 and above. For a complete list of GuardDuty findings, refer to the GetFindings API.
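The severity filter described above can be expressed as an EventBridge event pattern using numeric matching. The following sketch shows one way to define that pattern and create the rule with boto3; the rule name is a placeholder, and GuardDuty severity is numeric (including fractional values such as 8.3), which is why a numeric range is used rather than a list of literal values.

```python
import json

# Event pattern matching GuardDuty findings with severity 8 and above.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "severity": [{"numeric": [">=", 8]}]
    },
}

def create_rule(events_client, rule_name="guardduty-high-severity"):
    """Create or update the EventBridge rule; rule_name is a placeholder."""
    return events_client.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(event_pattern),
        State="ENABLED",
    )

print(json.dumps(event_pattern, indent=2))
```

Pass a `boto3.client("events")` instance to `create_rule` to apply the pattern in your account.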

You need to provide proper IAM permissions to your Lambda function to call Amazon Bedrock APIs. You can configure parameters in the environment variables in the Lambda function. The following function uses three configuration parameters:
modelId is set as claude-3-sonnet-20240229-v1:0
findingDetailType is set as GuardDuty finding to filter the payload
source is set as guardduty to only evaluate GuardDuty findings

import json
import boto3
import urllib.parse
import os
region = os.environ['AWS_REGION']
model_Id = os.environ['modelId']
finding_detail_type = os.environ['findingDetailType']
finding_source = os.environ['source']
# Bedrock client used to interact with APIs around models
bedrock = boto3.client(service_name='bedrock', region_name=region)
# Bedrock Runtime client used to invoke and question the models
bedrock_runtime = boto3.client(service_name='bedrock-runtime', region_name=region)
evaluator_response = []
max_tokens = 512
top_p = 1
temp = 0.5
system = ""
def lambda_handler(event, context):
    message = ""
    try:
        file_body = json.loads(json.dumps(event))
        print(finding_detail_type)
        print(finding_source)
        # Only process events that match the configured detail type and source
        if file_body['detail-type'] == finding_detail_type and file_body['source'] == finding_source and file_body['detail']:
            print(f"File contents: {file_body['detail']}")
            description = file_body["detail"]["description"]
            finding_arn = file_body["detail"]["arn"]
            try:
                body = createBedrockRequest(description)
                message = invokeModel(body)
                print(message)
                evaluator_response.append(message)
                evaluator_response.append(finding_arn)
            except Exception as e:
                print(e)
                print('Error calling model')
        else:
            message = "Invalid finding source"
    except Exception as e:
        print(e)
        print('Error getting finding id from the GuardDuty record')
        raise e
    return message
def createBedrockRequest(description):
    prompt = "You are an expert in troubleshooting AWS logs and sharing details with the user via an email draft as stated in <description>. Do NOT provide any preamble. Draft a professional email summary of details as stated in description. Write the recipient as - User in the email and sender in the email should be listed as - Your Friendly Troubleshooter. Skip the preamble and directly start with subject. Also, provide detailed troubleshooting steps in the email draft." + "<description>" + description + "</description>"
    messages = [{"role": 'user', "content": [{'type': 'text', 'text': prompt}]}]
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": messages,
            "temperature": temp,
            "top_p": top_p,
            "system": system
        }
    )
    return body
def invokeModel(body):
    response = bedrock_runtime.invoke_model(body=body, modelId=model_Id)
    response_body = json.loads(response.get('body').read())
    message = response_body.get('content')[0].get("text")
    return message
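Before wiring the function into Step Functions, you can sanity-check the handler logic locally against a trimmed-down event. The following sketch shows the minimal payload shape the function reads and reproduces the guard condition from lambda_handler; the ARN and description values are illustrative placeholders, and the detail-type and source strings must match the environment variables configured on the function.

```python
# Minimal shape of the EventBridge payload the Lambda handler inspects;
# the ARN and description below are placeholders, not real resources.
sample_event = {
    "detail-type": "GuardDuty Finding",
    "source": "guardduty",
    "detail": {
        "arn": "arn:aws:guardduty:us-east-1:123456789012:detector/abcd/finding/efgh",
        "description": "An EC2 instance is querying a domain associated with Bitcoin activity.",
        "severity": 8,
    },
}

# The same checks the handler performs before calling Amazon Bedrock,
# assuming findingDetailType and source are configured to these values.
is_guardduty_finding = (
    sample_event["detail-type"] == "GuardDuty Finding"
    and sample_event["source"] == "guardduty"
    and bool(sample_event["detail"])
)
print(is_guardduty_finding)
```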
It’s crucial to perform prompt engineering and follow prompting best practices in order to avoid hallucinations or non-coherent responses from the LLM. In our solution, we created the following prompt to generate responses from Anthropic’s Claude 3 Sonnet:
Prompt = ```You are an expert in troubleshooting AWS logs and sharing details with the user via an email draft as stated in <description>. Do NOT provide any preamble. Draft a professional email summary of details as stated in description. Write the recipient as - User in the email and sender in the email should be listed as - Your Friendly Troubleshooter. Skip the preamble and directly start with subject. Also, provide detailed troubleshooting steps in the email draft. <description>{description}</description>```
The prompt makes sure the description of the issue under consideration is wrapped appropriately in XML tags. It also emphasizes that the model should jump directly into generating the answer, skipping any additional preamble the model might otherwise produce.
The following screenshot shows the topic details with some test subscribers.

Now you can create the Step Functions state machine and integrate the Lambda and Amazon SNS calls in the workflow.
You need to provide appropriate IAM permissions to the Step Functions role so it can call Lambda and Amazon SNS.
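A minimal identity-based policy for the Step Functions execution role could look like the following sketch, assuming the workflow only needs to invoke one Lambda function and publish to one SNS topic; the function and topic ARNs are placeholders you would replace with your own.

```python
import json

# Minimal execution-role policy for the state machine; both ARNs are
# placeholders and should be scoped to your actual function and topic.
state_machine_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:guardduty-summarizer",
        },
        {
            "Effect": "Allow",
            "Action": ["sns:Publish"],
            "Resource": "arn:aws:sns:us-east-1:123456789012:guardduty-notifications",
        },
    ],
}
print(json.dumps(state_machine_policy, indent=2))
```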
The following diagram illustrates the Step Functions state machine.

The following sample code shows how to use the Step Functions optimized integration with Lambda and Amazon SNS.
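As a rough sketch of what that definition might contain, the following builds an Amazon States Language document with the optimized `lambda:invoke` and `sns:publish` integrations; the state names and ARNs are placeholders, and your deployed state machine may differ.

```python
import json

# Sketch of a state machine definition (Amazon States Language) using the
# optimized Lambda and SNS integrations; state names and ARNs are placeholders.
definition = {
    "StartAt": "SummarizeFinding",
    "States": {
        "SummarizeFinding": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "arn:aws:lambda:us-east-1:123456789012:function:guardduty-summarizer",
                "Payload.$": "$",
            },
            # Keep only the Lambda response payload for the next state
            "ResultSelector": {"message.$": "$.Payload"},
            "Next": "NotifySubscribers",
        },
        "NotifySubscribers": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:123456789012:guardduty-notifications",
                "Message.$": "$.message",
            },
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
```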

As seen in the following screenshot, the rule needs to have proper IAM permission to invoke the Step Functions state machine.

You can test the setup by generating some sample findings on the GuardDuty console. One test email is triggered for each sample finding that matches the EventBridge rule.
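You can also generate sample findings programmatically with the GuardDuty CreateSampleFindings API. The following sketch uses two example finding types that resemble the notifications shown in this post; you can substitute any types from the GuardDuty finding-type catalog, and the Region is a placeholder.

```python
# Example finding types to generate; substitute any types from the
# GuardDuty finding-type catalog.
SAMPLE_FINDING_TYPES = [
    "Backdoor:EC2/C&CActivity.B!DNS",
    "CryptoCurrency:EC2/BitcoinTool.B!DNS",
]

def generate_sample_findings(region="us-east-1"):
    """Create sample findings in the first detector of the account/Region."""
    import boto3  # imported here so the sketch stays runnable without AWS access

    guardduty = boto3.client("guardduty", region_name=region)
    detector_id = guardduty.list_detectors()["DetectorIds"][0]
    guardduty.create_sample_findings(
        DetectorId=detector_id,
        FindingTypes=SAMPLE_FINDING_TYPES,
    )
    return detector_id
```

Calling `generate_sample_findings()` in an account with GuardDuty enabled creates the sample findings, which then flow through the EventBridge rule and Step Functions workflow like real findings.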

Based on a sample generation, the following screenshot shows an email from Amazon SNS about a potential security risk in an Amazon Elastic Container Service (Amazon ECS) cluster. The email contains the vulnerability summary and a few mitigation steps to remediate the issue.

The following screenshot is a sample email notification about a potential Bitcoin IP address communication.

This proactive approach enables users to take immediate action and remediate vulnerabilities before they escalate, reducing the risk of data breaches or security incidents. It empowers users to maintain a secure environment within their AWS accounts, fostering a culture of proactive security awareness and responsibility. Furthermore, a proactive security vulnerability assessment and remediation system can streamline the resolution process, minimizing the time and effort required to address security concerns.
To avoid incurring unnecessary costs, complete the following steps:
By cleaning up the resources created for this solution, you can prevent any ongoing charges to your AWS account.
By providing users with clear and actionable recommendations, they can swiftly implement the necessary fixes, reducing the likelihood of untracked or lost tickets and enabling swift resolution. Adopting this proactive approach not only enhances the overall security posture of AWS accounts, but also promotes a collaborative and efficient security practice within the organization, fostering a sense of ownership and accountability among users.
You can deploy this solution and integrate it with other services to have a holistic omnichannel solution. To learn more about Amazon Bedrock and AWS generative AI services, refer to the following workshops:
Shikhar Kwatra is a Sr. Partner Solutions Architect at Amazon Web Services, working with leading Global System Integrators. He has earned the title of one of the Youngest Indian Master Inventors with over 500 patents in the AI/ML and IoT domains. Shikhar aids in architecting, building, and maintaining cost-efficient, scalable cloud environments for the organization, and supports GSI partners in building strategic industry solutions on AWS.
Rajdeep Banerjee is a Senior Partner Solutions Architect at AWS, helping strategic partners and clients in their AWS cloud migration and digital transformation journeys. Rajdeep focuses on working with partners to provide technical guidance on AWS, collaborating with them to understand their technical requirements, and designing solutions to meet their specific needs. He is a member of the Serverless technical field community. Rajdeep is based out of Richmond, Virginia.