This content originally appeared on DEV Community and was authored by Jenna Pederson
I read to my kid a lot. To be clear, I read one or two books a lot to my kid. To the point that I have the words to both books memorized. What if we could create our own story, together, to tell a new story each night with new characters? In this article, I will show how we can use an AI chat integration and customize it to act as an interactive bedtime storyteller. With this approach, you could build a personalized meal planner, a travel guide, or an assistant to help you ideate and refine your social media copy.
Let's first check out the high-level solution.
Our solution
I'm working within a React app built and deployed with AWS Amplify. If you'd like to start here, you can spin up the sample app using this quickstart guide and then incorporate the code I share.
We'll use Amazon Bedrock as a data source and make a custom query to generate a story using the new Converse API. The Converse API allows us to handle multi-turn conversations and maintain a conversation history. We'll go a step further, using the systemPrompt feature to provide context, instructions, and guidelines on how the model should respond.
The example code below is TypeScript, but you can also find sample code for other languages here.
Let's get started building!
Add a data source
Our first task is to add Amazon Bedrock as a data source to our app. This approach gives an AWS Lambda function (we'll define that in the next step) permission to call a specific foundation model in Amazon Bedrock. We're limiting the actions it can take to bedrock:InvokeModel, and only on the specific model resource we specify.
To do this, let's add the following to the amplify/backend.ts file:
// amplify/backend.ts
import { defineBackend } from '@aws-amplify/backend';
import { auth } from './auth/resource';
import { data, CHAT_MODEL_ID, generateChatResponseFunction } from './data/resource';
import { Effect, PolicyStatement } from 'aws-cdk-lib/aws-iam';
export const backend = defineBackend({
auth,
data,
generateChatResponseFunction,
});
backend.generateChatResponseFunction.resources.lambda.addToRolePolicy(
new PolicyStatement({
effect: Effect.ALLOW,
actions: ["bedrock:InvokeModel"],
resources: [
`arn:aws:bedrock:*::foundation-model/${CHAT_MODEL_ID}`,
],
})
);
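The resource ARN above uses a wildcard for the region. If you'd rather restrict the policy to a single region, you could scope the ARN like this (the region shown is an illustrative assumption, not a requirement):

```typescript
// Hypothetical region-scoped variant of the resource ARN used in the policy above.
const CHAT_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"; // imported in backend.ts
const region = "us-east-1"; // example region; use the one your app deploys to

const modelArn = `arn:aws:bedrock:${region}::foundation-model/${CHAT_MODEL_ID}`;

console.log(modelArn);
// arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0
```

Narrowing the region follows the usual least-privilege advice for IAM policies, at the cost of having to update the ARN if you deploy elsewhere.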
Next, we need to define the custom query and connect it to the handler function that will make the call to Amazon Bedrock.
Define a custom query
In this step, we first define the generateChatResponseFunction using defineFunction and configure it with the model id, timeout, and which Node runtime to use. The entry key specifies the file containing the handler with the core logic.
Now, we define a custom query, generateChatResponse, and add it to the schema. Here, we define the arguments allowed, the return type, and which function to use.
To do this, update the amplify/data/resource.ts file like this:
// amplify/data/resource.ts
import { type ClientSchema, a, defineData, defineFunction } from "@aws-amplify/backend";
export const CHAT_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0";
export const generateChatResponseFunction = defineFunction({
entry: "./generateChatResponse.ts",
environment: {
CHAT_MODEL_ID,
},
timeoutSeconds: 180,
runtime: 20
});
const schema = a.schema({
generateChatResponse: a
.query()
.arguments({ conversation: a.json().required(), systemPrompt: a.string().required() })
.returns(a.string())
.authorization((allow) => [allow.publicApiKey()])
.handler(a.handler.function(generateChatResponseFunction)),
});
export type Schema = ClientSchema<typeof schema>;
export const data = defineData({
schema,
authorizationModes: {
defaultAuthorizationMode: "apiKey",
apiKeyAuthorizationMode: {
expiresInDays: 30,
},
},
});
In the code above, the first argument is a JSON string, conversation. This will include the entire conversation from our interactions with our interactive storyteller. The second argument is a string, systemPrompt. We'll use this to customize our interactions with the model by providing context, instructions, and guidelines on how to respond.
Create the handler code
Next, we'll create the handler code with our core logic that makes the call to Amazon Bedrock. Here, we'll initialize the BedrockRuntimeClient, prepare the input and conversation as a ConverseCommandInput, and then make the call.
When this handler code is called, the conversation argument will be passed a JSON string representing the full conversation between the user and the interactive storyteller. This is then parsed and loaded as an object with this structure:
[
{
role: "user",
content: [{ text: firstUserMessage }]
},
{
role: "assistant",
content: [{ text: firstResponseMessage }]
},
{
role: "user",
content: [{ text: secondUserMessage }]
}
]
This allows us to maintain a conversation history and handle multi-turn conversations.
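The alternating structure above can be captured in a small TypeScript sketch. The type and helper names here are illustrative, not part of the app's code:

```typescript
// Hypothetical types describing the alternating conversation structure.
type Role = "user" | "assistant";

interface ConversationMessage {
  role: Role;
  content: { text: string }[];
}

// Append a new user turn, producing a new array (no mutation),
// the same pattern the React component uses later with setConversation.
function appendUserMessage(
  conversation: ConversationMessage[],
  text: string
): ConversationMessage[] {
  return [...conversation, { role: "user", content: [{ text }] }];
}

const conversation = appendUserMessage([], "Hi, I'm Sam!");
console.log(conversation[0].role); // user
```

Each model response is appended with the assistant role, so the array always alternates user, assistant, user, and so on.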
To implement this, create the amplify/data/generateChatResponse.ts file with the code below:
// amplify/data/generateChatResponse.ts
import type { Schema } from "./resource";
import {
BedrockRuntimeClient,
ConverseCommand,
ConverseCommandInput,
} from "@aws-sdk/client-bedrock-runtime";
const client = new BedrockRuntimeClient();
export const handler: Schema["generateChatResponse"]["functionHandler"] = async (
event
) => {
const conversation = event.arguments.conversation;
// System prompt for context
const systemPrompt = [{text: event.arguments.systemPrompt}];
const input = {
modelId: process.env.CHAT_MODEL_ID,
system: systemPrompt,
messages: conversation,
inferenceConfig: {
maxTokens: 1000,
temperature: 0.5,
}
} as ConverseCommandInput;
const command = new ConverseCommand(input);
const response = await client.send(command);
const jsonResponse = JSON.stringify(response.output?.message);
return jsonResponse;
};
The maxTokens inference parameter defines the maximum number of tokens the model can generate, and temperature controls how creative it can get (a number between 0 and 1, with values closer to 1 being more creative). Read more about these inference parameters here.
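For example, if you wanted longer, more whimsical story parts, you might loosen these settings. The values below are illustrative assumptions, not recommendations:

```typescript
// Hypothetical alternative settings for the inferenceConfig in ConverseCommandInput.
const inferenceConfig = {
  maxTokens: 2000,   // allow longer story parts
  temperature: 0.9,  // closer to 1: more creative, more varied responses
  topP: 0.9,         // optional nucleus-sampling cutoff also accepted by the Converse API
};

console.log(inferenceConfig.maxTokens); // 2000
```

Lowering temperature instead (say, to 0.2) would make the storyteller more predictable from run to run.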
Create the component to facilitate the conversation
Our last big step is creating the component that makes the function call via the custom query. This component will facilitate the conversation between the user and the interactive storyteller. Here's what our component will look like:
Below, I'll cover the major parts of ChatComponent.tsx and how each part works, and then I'll share the full code (jump ahead to the full code section if that's all you're looking for).
In the JSX portion of ChatComponent.tsx, we iterate over the conversation between the AI (labeled assistant here) and the user (labeled human). The conversation alternates between the assistant and human, so we style each a little differently.
We add a TextField component to capture the user's message, setting the inputValue via handleInputChange when text is entered, and calling setNewUserMessage whenever the Enter key or the Send button is pressed. There are also properties to show an error message if one exists.
<View width="60vw">
<Flex direction="column" wrap="wrap" justifyContent="space-between">
{conversation.map((item, i) => item.role === "assistant" ? (
<Message width="40vw" className="assistant-message" colorTheme="neutral" key={i}>{item.content[0].text}</Message>
) : (
<Message width="40vw" className="human-message" hasIcon={false} colorTheme="info" key={i}>{item.content[0].text}</Message>
))}
{isLoading ? (<Loader />) : (<div></div>)}
<TextField label="What would you like to chat about?"
name="prompt"
value={inputValue}
onChange={handleInputChange}
onKeyUp={(event) => {
if (event.key === 'Enter') {
setNewUserMessage();
}
}}
labelHidden={true}
hasError={error !== ""}
errorMessage={error}
width="60vw"
outerEndComponent={<Button onClick={setNewUserMessage}>Send</Button>} />
</Flex>
</View>
We'll use a simple handleInputChange function that clears the error (if there was one) whenever the user starts typing, and then sets the input value.
const handleInputChange = (e: ChangeEvent<HTMLInputElement>) => {
setError("");
setInputValue(e.target.value);
};
Next, we'll implement setNewUserMessage to add the new message from the human (using the role user) to the conversation. This follows the same structure we covered earlier, alternating between the user and assistant roles.
const setNewUserMessage = async () => {
const newUserMessage = { role: "user", content: [{ text: inputValue }] };
setConversation(prevConversation => [...prevConversation, newUserMessage]);
};
Now it's time to call the generateChatResponse query to send the message to the model via Amazon Bedrock. We use the useEffect hook because we need to wait for setConversation to complete before making the call. We implement fetchChatResponse as an async function to make the actual call, and only invoke it when there is a conversation whose last message is from the user. We do this check because only a new user message should trigger a call to the model. The assistant's responses are also pushed back onto the conversation (remember that alternating user-assistant conversation array from earlier?) so the model has our entire conversation history as context, but they should not trigger another call.
useEffect(() => {
const fetchChatResponse = async () => {
setInputValue('');
setIsLoading(true);
const { data, errors } = await client.queries.generateChatResponse({
conversation: JSON.stringify(conversation),
systemPrompt: systemPrompt
});
if (!errors && data) {
setConversation(prevConversation => [...prevConversation, JSON.parse(data)]);
} else {
setError(errors?.[0].message || "An unknown error occurred.")
console.error("errors", errors);
}
setIsLoading(false);
}
// only fetch the response if there is a conversation and it ends with a user role message
if (conversation.length > 0 && conversation[conversation.length - 1].role === "user") {
fetchChatResponse();
}
}, [conversation]);
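The guard at the bottom of the effect can be read as a small predicate. Here's that logic in isolation (the helper name is mine, not part of the app):

```typescript
// Hypothetical helper expressing the useEffect guard condition above.
type Msg = { role: string; content: { text: string }[] };

function shouldFetchResponse(conversation: Msg[]): boolean {
  // Only call the model when the conversation exists and the
  // most recent message came from the user, not the assistant.
  return (
    conversation.length > 0 &&
    conversation[conversation.length - 1].role === "user"
  );
}

console.log(shouldFetchResponse([{ role: "user", content: [{ text: "hi" }] }])); // true
console.log(shouldFetchResponse([])); // false
```

Without this guard, appending the assistant's reply would re-run the effect and trigger an endless loop of model calls.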
Full code for the component
Here is the full code for the component:
// ChatComponent.tsx
import { ChangeEvent, useState, useEffect } from "react";
import { Button, Flex, Loader, TextField, View, Message } from "@aws-amplify/ui-react";
import { generateClient } from "aws-amplify/api";
import { Schema } from "../../../amplify/data/resource";
import "./chat.css"
const client = generateClient<Schema>();
export default function ChatComponent({systemPrompt} : { systemPrompt: string}) {
const [conversation, setConversation] = useState<{ role: string, content: { text: string }[] }[]>([]);
const [inputValue, setInputValue] = useState("");
const [error, setError] = useState("");
const [isLoading, setIsLoading] = useState(false);
const handleInputChange = (e: ChangeEvent<HTMLInputElement>) => {
setError("");
setInputValue(e.target.value);
};
useEffect(() => {
const fetchChatResponse = async () => {
setInputValue('');
setIsLoading(true);
const { data, errors } = await client.queries.generateChatResponse({
conversation: JSON.stringify(conversation),
systemPrompt: systemPrompt
});
if (!errors && data) {
setConversation(prevConversation => [...prevConversation, JSON.parse(data)]);
} else {
setError(errors?.[0].message || "An unknown error occurred.")
console.error("errors", errors);
}
setIsLoading(false);
}
// only fetch the response if there is a conversation and it ends with a user role message
if (conversation.length > 0 && conversation[conversation.length - 1].role === "user") {
fetchChatResponse();
}
}, [conversation]);
const setNewUserMessage = async () => {
const newUserMessage = { role: "user", content: [{ text: inputValue }] };
setConversation(prevConversation => [...prevConversation, newUserMessage]);
};
return (
<View width="60vw">
<Flex direction="column" wrap="wrap" justifyContent="space-between">
{conversation.map((item, i) => item.role === "assistant" ? (
<Message width="40vw" className="assistant-message" colorTheme="neutral" key={i}>{item.content[0].text}</Message>
) : (
<Message width="40vw" className="human-message" hasIcon={false} colorTheme="info" key={i}>{item.content[0].text}</Message>
))}
{isLoading ? (<Loader />) : (<div></div>)}
<TextField label="What would you like to chat about?"
name="prompt"
value={inputValue}
onChange={handleInputChange}
onKeyUp={(event) => {
if (event.key === 'Enter') {
setNewUserMessage();
}
}}
labelHidden={true}
hasError={error !== ""}
errorMessage={error}
width="60vw"
outerEndComponent={<Button onClick={setNewUserMessage}>Send</Button>} />
</Flex>
</View>
)
}
And to style the human-assistant messages:
/* chat.css */
.assistant-message {
margin-right: auto;
}
.human-message {
margin-left: auto;
}
Putting it all together
Now we have our backend code - the custom query with a handler backed by a Lambda function that makes the call to Amazon Bedrock. And our chat component - a React component that displays the full chat conversation between human and AI assistant with a text field to collect the human's next message.
We'll add the following to our App.tsx file:
// App.tsx
<View as="section">
<Heading
width='60vw'
level={2}>
Let's tell a story together.
</Heading>
<Text
variation="primary"
as="p"
lineHeight="1.5em"
fontWeight={400}
fontSize="1em"
fontStyle="normal"
textDecoration="none"
width="60vw">
Start by introducing yourself and saying hi.
</Text>
<Chat
systemPrompt={ "Pretend you are an author of a choose your own adventure style story for children age 3-5. Start by asking the user a series of three questions to understand the theme of the adventure. Tell the first of four parts of the story and then ask the user to make a choice about the path they would like to take. Repeat this until all four parts of the story are complete. Each part is 2-4 paragraphs long." }
/>
</View>
To customize this to our use case -- an interactive bedtime storyteller -- we set the systemPrompt to:
Pretend you are an author of a choose your own adventure style story for children age 3-5. Start by asking the user a series of three questions to understand the theme of the adventure. Tell the first of four parts of the story and then ask the user to make a choice about the path they would like to take. Repeat this until all four parts of the story are complete. Each part is 2-4 paragraphs long.
You can customize this to your own use case so that your assistant has context, instructions, and guidelines on how to respond to your user.
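As a sketch of that kind of customization, here's what a meal-planner variant of the prompt might look like. The wording is illustrative only:

```typescript
// Hypothetical systemPrompt for the meal-planner use case mentioned in the intro.
const mealPlannerPrompt =
  "Pretend you are a meal planning assistant. Start by asking the user " +
  "three questions about dietary restrictions, cuisine preferences, and " +
  "how much time they have to cook. Then propose a plan for five dinners, " +
  "and ask the user which meals they would like to swap or refine.";

console.log(mealPlannerPrompt.length > 0); // true
```

You would pass this to the same Chat component in place of the storyteller prompt; no backend changes are needed.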
Additionally, you may want to update the CHAT_MODEL_ID in amplify/data/resource.ts to use a different model that works for your use case. You can find supported Amazon Bedrock models here.
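Swapping models is a one-line change in amplify/data/resource.ts (the model id below is an example; confirm it is available in your region and that your account has access):

```typescript
// Hypothetical swap to a lighter-weight model; in resource.ts this is an exported const.
const CHAT_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0";

console.log(CHAT_MODEL_ID);
```

Because the IAM policy in backend.ts builds its resource ARN from CHAT_MODEL_ID, the Lambda function's permissions update along with it.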
Wrapping up
And that's it! In this article, we used the interactive bedtime storyteller use case to show how to integrate the Amazon Bedrock Converse API into your Amplify (Gen 2) app, how to send the full conversation to maintain a history and handle multi-turn conversations, and how to use the systemPrompt and the inference parameters maxTokens and temperature to customize the assistant even more.
I hope this has been helpful. If you'd like more like this, smash that like button 👍, share this with your friends 👯, or drop a comment below 💬.
Jenna Pederson | Sciencx (2024-08-15T13:00:00+00:00) Telling bedtime stories with generative AI. Retrieved from https://www.scien.cx/2024/08/15/telling-bedtime-stories-with-generative-ai/