Businesses face a growing challenge: customers need answers fast, but support teams are overwhelmed. Support documentation such as product manuals and knowledge base articles typically requires users to search through hundreds of pages, and support agents often handle 20–30 customer queries per day that require locating specific information.
This post demonstrates how to solve this challenge by building an AI-powered website assistant using Amazon Bedrock and Amazon Bedrock Knowledge Bases. The solution is designed to serve both internal teams and external customers, and offers the following benefits:
The solution uses Retrieval-Augmented Generation (RAG) to retrieve relevant information from a knowledge base and return it to the user based on their access. It consists of the following key components:
The following diagram illustrates the architecture of this solution.

The workflow consists of the following steps:
In the following sections, we demonstrate how to crawl and configure the external website as a knowledge base, and also upload internal documentation.
You must have the following in place to deploy the solution in this post:
The first step is to build a knowledge base to ingest data from a website and operational documents from an S3 bucket. Complete the following steps to create your knowledge base:


For more information, see Getting started with Amazon S3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html.



You have now created a knowledge base with the data source configured as the website link you provided.
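After the data source is configured, you can trigger a sync programmatically rather than through the console. The following is a minimal sketch, not part of the original walkthrough; it assumes the bedrock-agent StartIngestionJob API, and the knowledge base and data source IDs are placeholders you replace with your own values.

```python
# Hedged sketch: sync the web-crawler data source after creating the
# knowledge base. All IDs below are placeholders.

def build_sync_request(kb_id: str, ds_id: str) -> dict:
    """Parameters for the bedrock-agent start_ingestion_job call."""
    return {"knowledgeBaseId": kb_id, "dataSourceId": ds_id}

def sync_external_source():
    """Start an ingestion job; call this with valid AWS credentials."""
    import boto3  # local import keeps the helper above dependency-free
    client = boto3.client("bedrock-agent")
    request = build_sync_request(
        kb_id="KB_ID_PLACEHOLDER",  # assumption: your knowledge base ID
        ds_id="DS_ID_PLACEHOLDER",  # assumption: web-crawler data source ID
    )
    job = client.start_ingestion_job(**request)
    print(job["ingestionJob"]["status"])

# Uncomment to run against your account:
# sync_external_source()
```

The live call is left commented out so the snippet is safe to import; syncing can also be done from the data source page on the Amazon Bedrock console.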

Complete the following steps to configure documents from your S3 bucket as an internal data source:


For this example, we upload a document to the new S3 data source bucket. The following screenshot shows an example of our document.

Complete the following steps to upload the document:


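The upload-and-sync flow can also be scripted. This is a hedged sketch rather than the post's exact procedure; the bucket name, file name, prefix, and IDs are placeholder assumptions you replace with your own values.

```python
# Hedged sketch: upload an internal document to the S3 data-source bucket,
# then start an ingestion job so the knowledge base picks it up.

def build_object_key(prefix: str, filename: str) -> str:
    """Compose the S3 object key under an optional data-source prefix."""
    return f"{prefix.rstrip('/')}/{filename}" if prefix else filename

def upload_and_sync():
    """Upload the document and sync; call with valid AWS credentials."""
    import boto3  # local import keeps the helper above dependency-free
    s3 = boto3.client("s3")
    s3.upload_file(
        Filename="internal-operations-guide.pdf",  # assumption: local file
        Bucket="my-internal-docs-bucket",          # assumption: your bucket
        Key=build_object_key("internal", "internal-operations-guide.pdf"),
    )
    bedrock = boto3.client("bedrock-agent")
    bedrock.start_ingestion_job(
        knowledgeBaseId="KB_ID_PLACEHOLDER",       # assumption
        dataSourceId="INTERNAL_DS_ID_PLACEHOLDER", # assumption
    )

# Uncomment to run against your account:
# upload_and_sync()
```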
Note the knowledge base ID and the data source IDs for the external and internal data sources. You use this information in the next step when deploying the solution infrastructure.
To deploy the solution infrastructure using the AWS CDK, complete the following steps:
cd ./customer-support-ai/iac
"external_source_id": "Set this to the external data source ID from your Amazon Bedrock knowledge base",
"internal_source_id": "Set this to the internal data source ID from your Amazon Bedrock knowledge base",
"knowledge_base_id": "Set this to your Amazon Bedrock knowledge base ID",
When the deployment is complete, you can find the Application Load Balancer (ALB) URL and demo user details in the script execution output.

You can also open the Amazon EC2 console and choose Load Balancers in the navigation pane to view the ALB.

On the ALB details page, copy the DNS name. You can use it to access the UI to try out the solution.

Let’s explore an example using Amazon S3 support content. The solution supports different classes of users: Amazon Bedrock Knowledge Bases manages the specific data sources (such as website content, documentation, and support tickets), and built-in filtering controls separate internal operational documents from publicly accessible information. For example, internal users can access both company-specific operational guides and public documentation, whereas external users are limited to publicly available content.
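One way to implement this separation is with metadata filtering at retrieval time. The following is a minimal sketch under stated assumptions, not the post's exact implementation: it assumes ingested documents carry a metadata attribute named "audience" (our own naming) with values such as "public" or "internal", and uses the Retrieve API's filter support under retrievalConfiguration.vectorSearchConfiguration.

```python
# Hedged sketch: audience-based filtering at retrieval time. The metadata
# key "audience" and all IDs are our own placeholder assumptions.

def build_retrieval_config(is_internal_user: bool, top_k: int = 5) -> dict:
    """External users are restricted to public documents; internal users are not filtered."""
    vector_cfg = {"numberOfResults": top_k}
    if not is_internal_user:
        vector_cfg["filter"] = {"equals": {"key": "audience", "value": "public"}}
    return {"vectorSearchConfiguration": vector_cfg}

def ask(question: str, is_internal_user: bool):
    """Query the knowledge base; call with valid AWS credentials."""
    import boto3  # local import keeps the helper above dependency-free
    runtime = boto3.client("bedrock-agent-runtime")
    resp = runtime.retrieve(
        knowledgeBaseId="KB_ID_PLACEHOLDER",  # assumption: your knowledge base ID
        retrievalQuery={"text": question},
        retrievalConfiguration=build_retrieval_config(is_internal_user),
    )
    for result in resp["retrievalResults"]:
        print(result["content"]["text"][:120])

# Uncomment to run against your account:
# ask("How do I create an S3 bucket?", is_internal_user=False)
```

Because the filter is applied server-side in the vector search, internal-only chunks never reach the model when an external user asks a question.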
Open the DNS name in your browser. Enter the external user credentials and choose Login.

After you’re successfully authenticated, you will be redirected to the home page.

Choose Support AI Assistant in the navigation pane to ask questions related to Amazon S3. The assistant can provide relevant responses based on the information available in the Getting started with Amazon S3 guide. However, if an external user asks a question about information available only to internal users, the AI assistant will not reveal the internal information and will respond only with information available to external users.

Log out and log in again as an internal user, and ask the same queries. The internal user can access the relevant information available in the internal documents.

If you decide to stop using this solution, complete the following steps to remove its associated resources:
cd iac
./cleanup.sh
cd iac
cdk destroy --all


This post demonstrated how to create an AI-powered website assistant to retrieve information quickly by constructing a knowledge base through web crawling and uploading documents. You can use the same approach to develop other generative AI prototypes and applications.
If you’re interested in the fundamentals of generative AI and how to work with FMs, including advanced prompting techniques, check out the hands-on course Generative AI with LLMs. This on-demand, 3-week course is for data scientists and engineers who want to learn how to build generative AI applications with LLMs. It’s a good foundation for building with Amazon Bedrock. Sign up to learn more about Amazon Bedrock.
Shashank Jain is a Cloud Application Architect at Amazon Web Services (AWS), specializing in generative AI solutions, cloud-native application architecture, and sustainability. He works with customers to design and implement secure, scalable AI-powered applications using serverless technologies, modern DevSecOps practices, Infrastructure as Code, and event-driven architectures that deliver measurable business value.
Jeff Li is a Senior Cloud Application Architect with the Professional Services team at AWS. He is passionate about diving deep with customers to create solutions and modernize applications that support business innovations. In his spare time, he enjoys playing tennis, listening to music, and reading.
Ranjith Kurumbaru Kandiyil is a Data and AI/ML Architect at Amazon Web Services (AWS) based in Toronto. He specializes in collaborating with customers to architect and implement cutting-edge AI/ML solutions. His current focus lies in leveraging state-of-the-art artificial intelligence technologies to solve complex business challenges.