Amazon Bedrock Knowledge Bases offers a fully managed Retrieval Augmented Generation (RAG) feature that connects large language models (LLMs) to internal data sources. This feature enhances foundation model (FM) outputs with contextual information from private data, making responses more relevant and accurate.
At AWS re:Invent 2024, we announced Amazon Bedrock Knowledge Bases support for natural language querying to retrieve structured data from Amazon Redshift and Amazon SageMaker Lakehouse. This feature provides a managed workflow for building generative AI applications that access and incorporate information from structured and unstructured data sources. Through natural language processing, Amazon Bedrock Knowledge Bases transforms natural language queries into SQL queries, so users can retrieve data directly from supported sources without understanding database structure or SQL syntax.
In this post, we discuss how to make your Amazon Aurora PostgreSQL-Compatible Edition data available for natural language querying through Amazon Bedrock Knowledge Bases while maintaining data freshness.
Structured data retrieval in Amazon Bedrock Knowledge Bases enables natural language interactions with your database by converting user queries into SQL statements. When you connect a supported data source like Amazon Redshift, Amazon Bedrock Knowledge Bases analyzes your database schema, table relationships, query engine, and historical queries to understand the context and structure of your information. This understanding allows the service to generate accurate SQL queries from natural language questions.
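For example, the question "How many customers do we have?" (one of the queries we test later in this post) maps to a simple aggregate query; the following is a sketch, assuming a customer table like the one we create in this walkthrough:
-- Illustrative translation of "How many customers do we have?"
-- The SQL that the service actually generates may differ.
SELECT COUNT(DISTINCT customer_id) AS customer_count
FROM customer;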
At the time of writing, Amazon Bedrock Knowledge Bases supports structured data retrieval directly from Amazon Redshift and SageMaker Lakehouse. Although direct support for Aurora PostgreSQL-Compatible isn’t currently available, you can use the zero-ETL integration between Aurora PostgreSQL-Compatible and Amazon Redshift to make your data accessible to Amazon Bedrock Knowledge Bases structured data retrieval. Zero-ETL integration automatically replicates your Aurora PostgreSQL tables to Amazon Redshift in near real time, alleviating the need for complex extract, transform, and load (ETL) pipelines or data movement processes.
This architectural pattern is particularly valuable for organizations seeking to enable natural language querying of their structured application data stored in Amazon Aurora database tables. By combining zero-ETL integration with Amazon Bedrock Knowledge Bases, you can create powerful applications like AI assistants that use LLMs to provide natural language responses grounded in your operational data.
The following diagram illustrates the architecture we will implement to connect Aurora PostgreSQL-Compatible to Amazon Bedrock Knowledge Bases using zero-ETL.

The workflow consists of the following steps:
1. Application data is stored and updated in Aurora PostgreSQL tables.
2. Zero-ETL integration replicates the tables to Amazon Redshift in near real time.
3. Amazon Bedrock Knowledge Bases connects to Amazon Redshift as a structured data source and converts natural language questions into SQL queries.
4. An FM generates a human-readable response from the query results.
Before you begin, make sure you’re logged in with a user or role that has access to create an Aurora database, run DDL (CREATE, ALTER, DROP, RENAME) and DML (SELECT, INSERT, UPDATE, DELETE) statements, create a Redshift database, set up zero-ETL integration, and create an Amazon Bedrock knowledge base.
In this section, we walk through creating and configuring an Aurora PostgreSQL database with a sample schema for our demonstration. We create three interconnected tables: products, customers, and orders.
Let’s begin by setting up our database environment. Create a new Aurora PostgreSQL database cluster and launch an Amazon Elastic Compute Cloud (Amazon EC2) instance that will serve as our access point for managing the database. The EC2 instance will make it straightforward to create tables and manage data throughout this post.
The following screenshot shows the details of our database cluster and EC2 instance.

For instructions to set up your database, refer to Creating and connecting to an Aurora PostgreSQL DB cluster.
After you use SSH to connect to your EC2 instance and then connect to your database (described in Creating and connecting to an Aurora PostgreSQL DB cluster), it’s time to create your data structure. We use the following DDL statements to create three tables:
-- Create Product table
CREATE TABLE product (
    product_id SERIAL PRIMARY KEY,
    product_name VARCHAR(100) NOT NULL,
    price DECIMAL(10, 2) NOT NULL
);

-- Create Customer table
CREATE TABLE customer (
    customer_id SERIAL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL,
    pincode VARCHAR(10) NOT NULL
);

-- Create Orders table
CREATE TABLE orders (
    order_id SERIAL PRIMARY KEY,
    product_id INTEGER NOT NULL,
    customer_id INTEGER NOT NULL,
    FOREIGN KEY (product_id) REFERENCES product(product_id),
    FOREIGN KEY (customer_id) REFERENCES customer(customer_id)
);
After you create the tables, you can populate them with sample data. When inserting data into the orders table, remember to maintain referential integrity by verifying the following:
- product_id exists in the product table
- customer_id exists in the customer table

We use the following example code to populate the tables:
INSERT INTO product (product_id, product_name, price) VALUES (1, 'Smartphone X', 699.99);
INSERT INTO product (product_id, product_name, price) VALUES (2, 'Laptop Pro', 1299.99);
INSERT INTO product (product_id, product_name, price) VALUES (3, 'Wireless Earbuds', 129.99);
INSERT INTO customer (customer_id, customer_name, pincode) VALUES (1, 'John Doe', '12345');
INSERT INTO customer (customer_id, customer_name, pincode) VALUES (2, 'Jane Smith', '23456');
INSERT INTO customer (customer_id, customer_name, pincode) VALUES (3, 'Robert Johnson', '34567');
INSERT INTO orders (order_id, product_id, customer_id) VALUES (1, 1, 1);
INSERT INTO orders (order_id, product_id, customer_id) VALUES (2, 1, 2);
INSERT INTO orders (order_id, product_id, customer_id) VALUES (3, 2, 3);
INSERT INTO orders (order_id, product_id, customer_id) VALUES (4, 2, 1);
INSERT INTO orders (order_id, product_id, customer_id) VALUES (5, 3, 2);
INSERT INTO orders (order_id, product_id, customer_id) VALUES (6, 3, 3);
Make sure to maintain referential integrity when populating the orders table to avoid foreign key constraint violations. You can adapt these examples to build and populate your own schema.
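As a quick sanity check before moving on, the following join should return one row per order, confirming that every order resolves to a valid product and customer:
-- Verify referential integrity: every order resolves to a product and a customer.
SELECT o.order_id, p.product_name, c.customer_name
FROM orders o
JOIN product p ON p.product_id = o.product_id
JOIN customer c ON c.customer_id = o.customer_id
ORDER BY o.order_id;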
Now that you have set up your Aurora PostgreSQL database, you can establish the zero-ETL integration with Amazon Redshift. The integration automatically replicates your data from Aurora PostgreSQL-Compatible to Amazon Redshift in near real time.
First, create an Amazon Redshift Serverless workgroup and namespace. For instructions, see Creating a data warehouse with Amazon Redshift Serverless.
The zero-ETL integration process involves two main steps:
1. Create the zero-ETL integration on the Amazon RDS console, specifying your Aurora PostgreSQL cluster as the source and your Redshift Serverless namespace as the target.
2. Create a destination database in Amazon Redshift from the integration so that the replicated tables become available for querying, as shown in the following example.
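The following is a minimal sketch of the second step, assuming a destination database named zeroetl_db (replace the integration ID and source database name with the values from your integration details page):
-- Run in Redshift Query Editor v2 after the integration is created.
-- The integration ID appears on the integration details page.
CREATE DATABASE zeroetl_db
FROM INTEGRATION '<integration_id>'
DATABASE '<source_database_name>';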
The following screenshot shows our zero-ETL integration details.

After you complete the integration, you can verify its success through several checks.
First, check the zero-ETL integration details on the Amazon Redshift console. You should see an Active status for your integration, along with source and destination information, as shown in the following screenshot.

Additionally, you can use the Redshift Query Editor v2 to verify that your data has been successfully populated. A simple query like SELECT * FROM customer; should return the synchronized data from your Aurora PostgreSQL database, as shown in the following screenshot.

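Beyond spot-checking a single table, a quick row-count comparison across the three replicated tables confirms the sync. The following is a minimal check, run in the database created from the integration:
-- Row counts for the replicated tables; these should match the
-- counts in the source Aurora PostgreSQL database.
SELECT 'product' AS table_name, COUNT(*) AS row_count FROM product
UNION ALL
SELECT 'customer', COUNT(*) FROM customer
UNION ALL
SELECT 'orders', COUNT(*) FROM orders;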
The final step is to create an Amazon Bedrock knowledge base that will enable natural language querying of our data.
Create a new Amazon Bedrock knowledge base with the structured data option. For instructions, see Build a knowledge base by connecting to a structured data store. Then you must synchronize the query engine to enable data access.
Before the sync process can succeed, you need to grant appropriate permissions to the Amazon Bedrock Knowledge Bases AWS Identity and Access Management (IAM) role. This involves executing GRANT SELECT commands for each table in your Redshift database.
Run the following command in Redshift Query Editor v2 for each table:

GRANT SELECT ON <table_name> TO "IAMR:<KB Role name>";

For example:

GRANT SELECT ON customer TO "IAMR:AmazonBedrockExecutionRoleForKnowledgeBase_ej0f0";
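If your schema contains many tables, you can grant access to all of them in one statement instead. The following is a sketch, assuming the replicated tables live in the public schema:
-- Grants SELECT on every table in the schema to the knowledge base role.
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "IAMR:<KB Role name>";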
For production setups, integrating the end-user identity into the data access flow requires identity federation. Refer to the AWS documentation on structured database access for the role-based access model. To federate identities from web clients, you might need Amazon Cognito or SAML federation with AWS Security Token Service (AWS STS), depending on your architecture.
After you complete the configuration and the query engine sync finishes, your knowledge base is ready. You can now start querying your data using natural language.
Now that you have set up your Amazon Bedrock knowledge base, you can begin testing its capabilities by running natural language queries against your structured data. Structured data retrieval in Amazon Bedrock Knowledge Bases translates plain English questions into SQL and uses FMs to generate human-readable responses.
You can test your Amazon Bedrock knowledge base in two ways:
- On the Amazon Bedrock console, using the built-in test interface
- Programmatically, through the Amazon Bedrock API (for example, the RetrieveAndGenerate API)
In this section, we illustrate the console experience. On the Amazon Bedrock console, you can interact with your Amazon Bedrock knowledge base in two modes:
- Generate SQL queries – Returns the SQL query generated from your natural language question without running it
- Retrieval and response generation – Runs the generated SQL query and returns a natural language response produced by an FM


The following table contains some examples of queries and their respective SQL and model response generation.
| Natural Language Query | Generate SQL API Result | Retrieval and Response Generation | Model Used for Response Generation |
| --- | --- | --- | --- |
| How many customers do we have? | (shown in screenshot) | We currently have 11 unique customers. | Amazon Nova Lite |
| Which all customers have purchased the most products? | (shown in screenshot) | Based on the data, the customers who have purchased … | Amazon Nova Lite |
| Who all have purchased more than one number of the most expensive product? | (shown in screenshot) | The customers who have purchased more than one number of the … | Amazon Nova Micro |
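The generated SQL for each query appears on the console. As an illustration only (the SQL that the service actually generates may differ), a query along the following lines answers the second question against our schema:
-- Illustrative: rank customers by the number of products they purchased.
SELECT c.customer_name, COUNT(o.order_id) AS products_purchased
FROM customer c
JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.customer_name
ORDER BY products_purchased DESC;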
When you’re done using this solution, clean up the resources you created to avoid ongoing charges: the Amazon Bedrock knowledge base, the zero-ETL integration, the Redshift Serverless workgroup and namespace, the EC2 instance, and the Aurora PostgreSQL cluster.
In this post, we demonstrated how to enable natural language querying of Aurora PostgreSQL data using Amazon Bedrock Knowledge Bases through zero-ETL integration with Amazon Redshift. We showed how to set up the database, configure zero-ETL integration, and establish the knowledge base connection for seamless data access. Although this solution provides an effective way to interact with your data using natural language, you should consider the additional storage costs in Amazon Redshift when implementing this architecture for your use case.
Please try out this solution for yourself and share your feedback in the comments.
Girish B is a Senior Solutions Architect at AWS India Pvt Ltd based in Bengaluru. Girish works with many ISV customers to design and architect innovative solutions on AWS.
Dani Mitchell is a Generative AI Specialist Solutions Architect at AWS. He is focused on helping accelerate enterprises across the world on their generative AI journeys with Amazon Bedrock.