Organizations can optimize their migration and modernization projects by streamlining the containerization process for legacy applications. With the right tools and approaches, teams can transform traditional applications into containerized solutions efficiently, reducing the time spent on manual coding, testing, and debugging while enhancing developer productivity and accelerating time-to-market. During containerization initiatives, organizations can address compatibility, dependencies, and configurations efficiently using automated tools and best practices, helping to keep projects on schedule and within budget parameters. Development teams can focus more on innovation by automating routine tasks such as application architecture analysis, deployment script creation, and environment configuration, leading to smoother transitions across different stages of the modernization journey.
In this post, you’ll learn how you can use Amazon Q Developer command line interface (CLI) with Model Context Protocol (MCP) servers integration to modernize a legacy Java Spring Boot application running on premises and then migrate it to Amazon Web Services (AWS) by deploying it on Amazon Elastic Kubernetes Service (Amazon EKS). The Amazon Q Developer CLI helps automate common tasks in the modernization process. You’ll introduce chaos into the system after successful modernization. Then you’ll troubleshoot it using Amazon Q Developer CLI. You’ll perform all these activities using natural language prompts without writing code.
Amazon Q Developer goes beyond coding to help developers and IT professionals with many of their tasks—from coding, testing, and deploying to troubleshooting, performing security scanning and fixes, modernizing applications, optimizing AWS resources, and creating data engineering pipelines. Amazon Q for the command line integrates contextual information, providing Amazon Q with an enhanced understanding of your use case, enabling it to provide relevant and context-aware responses. The MCP is an open standard that enables AI assistants to interact with external tools and services. It defines a structured way for AI models to discover available tools, request tool execution with specific parameters, and receive and process tool results. You’ll use MCP to extend the capabilities of Amazon Q Developer CLI by connecting it to custom tools and services.
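Under the hood, MCP messages are JSON-RPC 2.0. As a rough illustration (not an exchange captured from this demo), a tool-call request from the assistant to an MCP server looks something like the following; the tool name matches one exposed by the Amazon EKS MCP server configured later in this post, while the argument names are illustrative assumptions:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "list_k8s_resources",
    "arguments": { "cluster_name": "my-cluster", "kind": "Pod" }
  }
}
```

The server runs the tool and returns a result message with the same `id`, which the model then incorporates into its response.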
Although we’re showcasing the capability of Amazon Q Developer CLI in this end-to-end migration and modernization journey, if you’re using one of the supported integrated development environments (IDEs) and Java versions, you can use the /transform command to perform step 2. For more information, visit Upgrading Java versions from Amazon Q Developer.
MCP servers act like a universal connector for AI models, enabling them to interact with external systems, fetch live data, and integrate with various tools seamlessly. This allows Amazon Q to provide more contextually relevant assistance by accessing the information it needs in real-time. The following architecture diagram shows how Amazon Q Developer CLI connects to external data sources through MCP servers.

The following is a summary of the functionality of the architecture:
The solution follows these high-level steps, as shown in the following graphic:

You need to have the following configured before you start setting up the demo:
MCP configuration in Amazon Q Developer CLI is managed through JSON files. You’ll configure the Amazon EKS MCP server. At the time of this writing, only the stdio transport is supported in Amazon Q Developer CLI.
Amazon Q Developer CLI supports two levels of MCP configuration:
- ~/.aws/amazonq/mcp.json – Applies to all workspaces
- .amazonq/mcp.json – Specific to the current workspace

In this demonstration, we’re using the workspace configuration, but you can use either of them. Follow these steps:
Create a .amazonq/mcp.json file with the following content:

{
  "mcpServers": {
    "awslabs.eks-mcp-server": {
      "command": "uvx",
      "args": [
        "awslabs.eks-mcp-server",
        "--allow-write",
        "--allow-sensitive-data-access"
      ],
      "env": {
        "AWS_PROFILE": "your-profile-name",
        "AWS_REGION": "your-region",
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [
        "manage_eks_stacks",
        "manage_k8s_resource",
        "list_k8s_resources",
        "get_pod_logs",
        "get_k8s_events",
        "get_cloudwatch_logs",
        "get_cloudwatch_metrics",
        "get_policies_for_role",
        "search_eks_troubleshoot_guide",
        "list_api_versions"
      ],
      "disabled": false
    }
  }
}
Run q login to sign in to Amazon Q Developer CLI.
Run q and then /tools to validate that the Amazon EKS MCP server is configured, as shown in the following screenshot. By default, it won’t be trusted.
To migrate and modernize a Java Spring Boot application, complete the steps in the following sections.
To create a legacy Java Spring Boot application, you first build a legacy Java 8, Spring Boot 2.3.x bookstore application, which you’ll modernize and migrate to AWS. Go back to Amazon Q Developer CLI and use a natural language prompt to create this application. Follow these steps:
You will bootstrap the current directory with a java 8 spring boot RESTful microservice application which provides an API. The microservice provides operations for storing, updating, deleting and finding book information. The supported attributes are isbn, book_title, author, price, and currency. The price attribute is numeric and everything else is String. The isbn attribute format needs to be validated as per standard ISBN code format. You can use regex for that. Store the book information in local cache. It should also provide an Actuator endpoint. The project should be built using Maven. Share example payloads to test the microservice now. Once I review and approve the payload proceed with bootstrapping.
Approved. Go ahead.
Amazon Q Developer CLI generates the working code with Java 8 using Spring Boot 2.3.x framework.
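For example, the ISBN format validation requested in the prompt might be implemented along these lines. This is a hypothetical sketch; the class name, method name, and the simplified regular expression are assumptions, not the actual generated output:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the requested ISBN validation (names are
// assumptions, not taken from the actual Amazon Q output)
public class IsbnValidator {

    // Simplified pattern: 10 digits (last may be X) or 13 digits,
    // each digit optionally followed by a space or hyphen
    private static final Pattern ISBN = Pattern.compile(
            "^(?:\\d[\\s-]?){9}[\\dXx]$|^(?:\\d[\\s-]?){13}$");

    public static boolean isValid(String isbn) {
        return isbn != null && ISBN.matcher(isbn).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("978-3-16-148410-0")); // ISBN-13
        System.out.println(isValid("0-306-40615-2"));     // ISBN-10
        System.out.println(isValid("12345"));             // invalid
    }
}
```

A production-grade validator would also verify the ISBN check digit, which a regex alone cannot do.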
Can you run the microservice?
Refer to the following video for a quick demo.
Upgrade the legacy Java Spring Boot application that you created in the previous step using Amazon Q Developer CLI.
As mentioned previously, if you’re using one of the supported integrated development environments (IDEs) and Java versions, you can use the /transform command within your IDE to perform this step.
Can you update the microservice from Java 8 to Java 21? Also update the spring-boot version to version 3.5.0?
<properties>
<java.version>21</java.version>
</properties>
…
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>3.5.0</version>
<relativePath/>
</parent>
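Beyond the pom.xml changes above, an upgrade from Java 8 and Spring Boot 2.3.x to Java 21 and Spring Boot 3.x typically touches code as well, for example moving javax.* imports to their jakarta.* equivalents and adopting newer language features. The following is a hypothetical sketch of the kinds of Java 21 idioms involved; the names are assumptions, not the actual upgraded code:

```java
// Hypothetical before/after sketch of Java 21 idioms (names are
// assumptions, not taken from the actual upgraded application)
public class Java21Idioms {

    // A Java 8-style DTO (fields, constructor, getters, equals/hashCode,
    // toString) collapses into a single record declaration (Java 16+)
    public record Book(String isbn, String bookTitle, String author,
                       double price, String currency) {}

    public static String describe(Book b) {
        // Text blocks (Java 15+) and String.formatted() replace
        // concatenation-heavy Java 8 code
        return """
                %s by %s costs %s %s""".formatted(
                b.bookTitle(), b.author(), b.price(), b.currency());
    }

    public static void main(String[] args) {
        Book b = new Book("978-3-16-148410-0", "Modern Java", "J. Doe", 29.99, "USD");
        System.out.println(describe(b));
    }
}
```

Spring Boot 3.x also requires the jakarta.validation and jakarta.servlet namespaces in place of javax.*, which is part of the heavy lifting Amazon Q Developer handles here.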
Can you run the microservice?
Refer to the following video to know more.
Containerize the application so that it can run on both x86_64 and ARM64 hardware:
Can you containerize this application? Create a Dockerfile, build the container image and tag with "eks-bookstore-java-microservice"? Run and test the endpoints. Make sure the container is built as a multi-architecture image supporting both x86_64 and ARM64.
Can you run the docker image on my laptop?
Stop the container.
Your application is now containerized and tested on the local system. The heavy lifting of making the code and configuration changes, updating the dependencies, and writing the Dockerfile is done by Amazon Q Developer.
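The generated Dockerfile typically resembles a multi-stage build along the following lines. This is a hedged sketch, not the actual generated file; the base image tags and paths are assumptions:

```dockerfile
# Hypothetical multi-stage Dockerfile, similar in spirit to what
# Amazon Q Developer might generate (image tags and paths assumed)
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn -q dependency:go-offline
COPY src ./src
RUN mvn -q package -DskipTests

FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because both base images are published for x86_64 and ARM64, a multi-architecture image can be produced from this one Dockerfile with docker buildx and the --platform linux/amd64,linux/arm64 option.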
Refer to the following video for a demo.
Deploy the containerized application on Amazon EKS. To do so, create a new EKS cluster and use a Helm chart to deploy the application. Amazon Q Developer CLI uses the Amazon EKS MCP server to perform some of these actions. Amazon Q Developer CLI uses the default profile unless instructed to use another.
Build the Dockerfile and push this image to an Amazon ECR repository called "eks-bookstore-java-microservice" in the AWS account. Provide the image URL once this is complete.
Create a new Amazon EKS cluster. Deploy the microservice to the EKS cluster using Helm chart. I want to test the microservice over the public Internet. Share the microservice URL to test once done.
!kubectl get pods
Your containerized application is now running successfully on the EKS cluster. This completes the migration and modernization of the legacy Java Spring Boot application.
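Exposing the microservice over the public internet is typically done with a LoadBalancer-type Service in the Helm chart. The following values.yaml fragment is an illustrative assumption of what the generated chart might contain; the keys, ports, and the ECR URL placeholders are not taken from the actual output:

```yaml
# Hypothetical values.yaml fragment (keys and values assumed)
image:
  repository: <account-id>.dkr.ecr.<region>.amazonaws.com/eks-bookstore-java-microservice
  tag: latest
service:
  type: LoadBalancer   # provisions an internet-facing load balancer for public testing
  port: 80
  targetPort: 8080     # Spring Boot's default HTTP port
```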
Refer to the following video for a demo.
In real-world complex applications, while Amazon Q Developer performs the heavy lifting of upgrading, containerizing, and deploying the application on AWS, you might encounter application-specific environmental issues. In this step, you’ll simulate an out-of-memory (OOM) issue by introducing chaos into the system. You can introduce the chaos using AWS Fault Injection Service EKS Pod actions or by following these steps. The following patch constrains the bookstore container to a 128Mi memory limit while setting the JVM heap to 200 MB (-Xmx200m), so the container exceeds its limit and is OOM-killed:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bookstore-bookstore
spec:
  template:
    spec:
      containers:
        - name: bookstore
          env:
            - name: JAVA_OPTS
              value: "-Xms200m -Xmx200m -XX:+CrashOnOutOfMemoryError"
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
            requests:
              memory: "64Mi"
              cpu: "100m"
Next, deploy the following stress-test Job to generate memory pressure:

apiVersion: batch/v1
kind: Job
metadata:
  name: stress-test
spec:
  template:
    spec:
      containers:
        - name: stress-test
          image: polinux/stress
          command: ["stress"]
          args: ["--vm", "1", "--vm-bytes", "350M", "--vm-hang", "0"]
          resources:
            limits:
              memory: "400Mi"
              cpu: "500m"
            requests:
              memory: "200Mi"
              cpu: "200m"
          env:
            - name: TARGET_POD
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
      restartPolicy: Never
Wait until the pod crashes with an OOM error.
Refer to the following video for a demo.
Troubleshoot the issue using Amazon Q Developer CLI with EKS MCP server:
I have used a Helm chart to deploy the application on the EKS cluster, but the application is not running. Can you identify the root cause of the issue and share a potential fix?
Please go ahead and fix.
Modifications related to security in Amazon EKS are out of scope for this post. Follow security best practices before moving into production. Using Amazon Q Developer, you can accelerate the migration and modernization journey, but as the owner of the code, you need to perform due diligence on the changes.
Due to the inherent nondeterministic nature of foundation models (FMs), the responses from Amazon Q Developer CLI might not be exactly the same as those shown in the demo. Adjust your prompts accordingly.
Refer to the following video for a demo.
Properly decommissioning provisioned AWS resources is an important best practice to optimize costs and enhance your security posture after concluding proofs of concept and demonstrations. Follow these steps to delete the resources created in your AWS account:
I have tested the application. It is time for clean-up. Can you list down the AWS resources that you created for this microservices? Do not delete anything, give me the list. You will delete only after I confirm.
Ok, please go ahead and delete.
Delete the .amazonq/mcp.json file from your workspace folder to remove the MCP configuration for Amazon Q Developer CLI.

In this post, you learned how Amazon Q Developer CLI with Amazon EKS MCP server integration interprets natural language queries, automatically converts them into appropriate commands, and identifies the necessary tools for execution. You upgraded a legacy Java Spring Boot application, then containerized it to support deployment on multi-architecture compute. You deployed the application on Amazon EKS, introduced chaos, and resolved the issues using natural language queries. Using Amazon Q Developer CLI, you can improve your developers’ productivity many times over. We encourage you to explore additional use cases and share your feedback with us!
For more information on Amazon Q Developer CLI and AWS MCP servers:

Biswanath Mukherjee is a Senior Solutions Architect at Amazon Web Services. He works with large strategic customers of AWS by providing them technical guidance to migrate and modernize their applications on AWS Cloud. With his extensive experience in cloud architecture and migration, he partners with customers to develop innovative solutions that leverage the scalability, reliability, and agility of AWS to meet their business needs. His expertise spans diverse industries and use cases, enabling customers to unlock the full potential of the AWS Cloud.
Upendra V is a Senior Solutions Architect at Amazon Web Services, specializing in Generative AI and cloud solutions. He helps enterprise customers design and deploy production-ready Generative AI workloads, implement Large Language Models (LLMs) and Agentic AI systems, and optimize cloud deployments. With expertise in cloud adoption and machine learning, he enables organizations to build and scale AI-driven applications efficiently.