Posted by Wally Brill and Jessica Dene Earley-Cha
Since we launched Interactive Canvas, and especially in the last year, we have been helping developers create great storytelling and gaming experiences for Google Assistant on smart displays. Along the way we’ve learned a lot about what does and doesn’t work. Building these kinds of interactive voice experiences is still a relatively new endeavor, so we want to share what we've learned to help you build the next great gaming or storytelling experience for Assistant.
Here are three key things to keep in mind when you’re designing and developing interactive games and stories. They were selected from a longer list of lessons learned (stay tuned to the end for a link to the 10+ lessons) because they depend on Actions Builder/SDK functionality and differ slightly from traditional conversation design for voice-only experiences.
1. Keep the Text-To-Speech (TTS) brief
Text-to-speech, or computer-generated voice, has improved dramatically in the last few years, but it isn’t perfect. Through user testing, we’ve learned that users (especially kids) don’t like listening to long TTS messages. Of course, some content (like interactive stories) should not be reduced. For games, however, try to keep your script simple. Wherever possible, leverage the power of the visual medium and show, don’t tell. Consider providing a skip button on the screen so that users can read and move forward without waiting for the TTS to finish. The TTS and the text on screen don’t always need to mirror each other. For example, the TTS may say "Great job! Let's move to the next question. What’s the name of the big red dog?" while the text on screen may simply say "What is the name of the big red dog?"
Implementation
You can provide different audio and screen-based prompts by using a simple response, which allows different verbiage in the speech and text sections of the response. With Actions Builder, you can do this using the node client library or in the JSON response. The following code samples show you how to implement the example discussed above:
candidates:
  - first_simple:
      variants:
        - speech: Great job! Let's move to the next question. What's the name of the big red dog?
          text: What is the name of the big red dog?
Note: implementation in YAML for Actions Builder
const { conversation, Simple } = require('@assistant/conversation');
const app = conversation();

app.handle('yourHandlerName', conv => {
  // Speech and text can differ: the TTS adds encouragement, the screen stays short.
  conv.add(new Simple({
    speech: 'Great job! Let\'s move to the next question. What\'s the name of the big red dog?',
    text: 'What is the name of the big red dog?',
  }));
});
Note: implementation with node client library
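If you build the webhook response by hand instead of using the client library, the same speech/text split can be expressed directly in the JSON response. Below is a minimal sketch of such a response body; only the prompt fields relevant to this example are shown, and session and scene fields are omitted.
{
  "prompt": {
    "firstSimple": {
      "speech": "Great job! Let's move to the next question. What's the name of the big red dog?",
      "text": "What is the name of the big red dog?"
    }
  }
}
Note: a sketch of the equivalent webhook JSON response body.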
2. Consider both first-time and returning users
Frequent users don't need to hear the same instructions repeatedly. Optimize the experience for returning users. If it's a user's first-time experience, try to explain the full context. If they revisit your Action, acknowledge their return with a "Welcome back" message, and try to shorten (or taper) the instructions. If the user has returned more than 3 or 4 times, try to get to the point as quickly as possible.
An example of tapering:
- Instructions to first time users: “Just say words you can make from the letters provided. Are you ready to begin?”
- For a returning user: “Make up words from the jumbled letters. Ready?”
- For a frequent user: “Are you ready to play?”
Implementation
You can check the lastSeenTime property in the User object of the HTTP request. The lastSeenTime property is a timestamp of the last interaction with this particular user. If this is the first time a user is interacting with your Action, this field is omitted. Since it's a timestamp, you can provide different messages for a user whose last interaction was more than 3 months, 3 weeks, or 3 days ago. Below is an example that defaults to the shorter, tapered message. If the lastSeenTime property is omitted, meaning it's the first time the user is interacting with this Action, the message is replaced with the longer version containing more details.
const { conversation } = require('@assistant/conversation');
const app = conversation();

app.handle('greetingInstructions', conv => {
  // Shorter prompt for returning users by default.
  let message = 'Make up words from the jumbled letters. Ready?';
  if (!conv.user.lastSeenTime) {
    // lastSeenTime is omitted on a user's first interaction, so give full instructions.
    message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  }
  conv.add(message);
});
Note: implementation with node client library
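Because lastSeenTime is a timestamp, you can go further and taper based on how long it has been since the user's last visit. Below is a minimal sketch of that idea (it assumes the same app setup as the sample above); the 90-day threshold and the messages are placeholders, not values from the original sample.
// Assumes the same `app` created with conversation() as in the sample above.
app.handle('greetingInstructions', conv => {
  // First-time users: lastSeenTime is omitted, so give the full instructions.
  let message = 'Just say words you can make from the letters provided. Are you ready to begin?';
  if (conv.user.lastSeenTime) {
    const msPerDay = 1000 * 60 * 60 * 24;
    const daysSinceLastVisit = (Date.now() - new Date(conv.user.lastSeenTime).getTime()) / msPerDay;
    // Long break: a medium-length reminder. Recent user: get straight to the point.
    message = daysSinceLastVisit > 90
      ? 'Welcome back! Make up words from the jumbled letters. Ready?'
      : 'Are you ready to play?';
  }
  conv.add(message);
});
Note: a sketch of time-based tapering; adjust the threshold and wording to fit your Action.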
3. Support strongly recommended intents
There are some commonly used intents which really enhance the user experience by providing some basic commands to interact with your voice app. If your action doesn’t support these, users might get frustrated. These intents help create a basic structure to your voice user interface, and help users navigate your Action.
- Exit / Quit
Closes the Action.
- Repeat / Say that again
Makes it easy for users to hear the immediately preceding content at any point.
- Play Again
Gives users an opportunity to re-engage with their favorite experiences.
- Help
Provides more detailed instructions for users who may be lost. Depending on the type of Action, this may need to be context-specific. After the help message plays, return users to where they left off in gameplay.
- Pause, Resume
Provides a visual indication that the game has been paused, and provides both visual and voice options to resume.
- Skip
Moves to the next decision point.
- Home / Menu
Moves to the home or main menu of an Action. Having a visual affordance for this is a great idea. Without visual cues, it's hard for users to know that they can navigate through voice, even when it's supported.
- Go back
Moves to the previous page in an interactive story.
Implementation
Actions Builder and the Actions SDK support system intents that cover a few of these use cases and come with Google-supplied training phrases:
- Exit / Quit -> actions.intent.CANCEL
This intent is matched when the user wants to exit your Action during a conversation, such as a user saying, "I want to quit."
- Repeat / Say that again -> actions.intent.REPEAT
This intent is matched when a user asks the Action to repeat.
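If you point actions.intent.CANCEL at a webhook handler (you can also answer it with a static prompt), you can say a short goodbye before the Action closes. A minimal sketch, where exitHandler is just a placeholder name:
app.handle('exitHandler', conv => {
  // Final prompt delivered before the conversation ends.
  conv.add('Thanks for playing! Come back any time.');
});
Note: exitHandler is a hypothetical name; use whatever handler you attach to the CANCEL intent in your project.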
For the remaining intents, you can create user intents, with the option of making them global (so they can be triggered in any scene) or adding them to a particular scene. Below are examples from a variety of projects to get you started, followed by a sketch of a matching webhook handler:
- Play Again -> Snow Pal's play_again
- Skip -> Gnome Garden's skip
- Home / Menu -> Gnome Garden's setting_menu
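Once a custom intent like play_again is matched and routed to your webhook, the handler can reset state and restart the experience. The following is a minimal sketch with the node client library; the handler name, the session parameters, and the restart message are placeholders rather than code from those sample projects.
app.handle('playAgain', conv => {
  // Clear per-conversation game state kept in session params so the game starts fresh.
  conv.session.params.score = 0;
  conv.session.params.round = 1;
  conv.add('Great, let\'s play again from the top!');
});
Note: a sketch assuming game state lives in session params; wire the intent to this handler (and to a scene transition, if needed) in Actions Builder.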
So there you have it: three suggestions to keep in mind for making amazing interactive games and story experiences that people will want to use over and over again. To check out the full list of our recommendations, go to the Lessons Learned page.
Thanks for reading! To share your thoughts or questions, join us on Reddit at r/GoogleAssistantDev.
Follow @ActionsOnGoogle on Twitter for more of our team's updates, and tweet using #AoGDevs to share what you’re working on. Can’t wait to see what you build!