What are script groups/scripts and how do I use them?
Starting with script groups
Script groups allow you to group conversation flows according to their related topic. For instance, within a banking assistant, you may create separate groups for "Customer Support," "Payments and Transfers," "Bill Payments," etc.
You can easily move scripts between groups by dragging them, allowing you to reorganize your groups as your assistant expands.
If you're not sure what groups to create, you can start by creating a single group for all of your scripts and then separate them later on.
Scripts give you the ability to choose how you structure various conversation flows. Every script starts with a user input that doesn't necessarily have to be an intent. Within a script you can design a conversation flow that includes branches for various scenarios, as well as error handling.
You can create general, comprehensive scripts that cater to a variety of use cases, such as handling all customer support queries in a healthcare conversational assistant, or you can create more specific scripts that target particular cases, such as customer support for a specific medical device or medication.
Defining the scope of a script
Scripts can also be accessed from each other. You can define a script's scope by choosing whether it can be accessed from any other script in the project (global) or only scripts within the same group (local).
Selecting which additional scripts can be reached from a specific turn in a script
At a particular assistant response step within your script you can choose which global and local scripts can be accessed (if any) by the user to enable and control context-switching. By default, all global and local scripts are accessible.
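To make the scope rules concrete, here is a minimal sketch of how the reachable scripts at a given turn could be resolved. VoiceXD is a no-code tool, so the field names (`scope`, `group`, `name`) and the `allowed` filter are purely illustrative assumptions, not VoiceXD internals:

```python
# Hypothetical sketch: which scripts can a user switch to from a given turn?
# A script is reachable if it is global, or local to the same group as the
# current script. `allowed` optionally narrows the default "everything
# accessible" behavior for one specific turn.

def reachable_scripts(current_script, all_scripts, allowed=None):
    candidates = [
        s for s in all_scripts
        if s is not current_script
        and (s["scope"] == "global"
             or (s["scope"] == "local"
                 and s["group"] == current_script["group"]))
    ]
    if allowed is not None:  # the designer restricted this turn's defaults
        candidates = [s for s in candidates if s["name"] in allowed]
    return [s["name"] for s in candidates]

scripts = [
    {"name": "Payments", "group": "Banking", "scope": "global"},
    {"name": "Card FAQ", "group": "Banking", "scope": "local"},
    {"name": "Loans FAQ", "group": "Loans", "scope": "local"},
]
current = scripts[0]
print(reachable_scripts(current, scripts))  # ['Card FAQ']
```

Note that "Loans FAQ" is filtered out: it is local to a different group, so it can never be reached from a "Banking" script under these rules.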
Flow branches within a script
In the script browser, you can also see all of the conversation branches within that script. Branch names are taken from user inputs and condition labels, or auto-generated using AI when conversation branches are connected to each other, creating new potential conversation paths.
How does the designer work?
The VoiceXD designer uses predetermined logic to show you which conversation steps you can add to your design at any point. This removes the need for dragging, dropping, and connecting individual steps yourself so that you can focus more on the design.
The basic logic the VoiceXD designer follows is a user input followed by an assistant output with more advanced steps that can be added in between. You can find more information about them in our advanced design topics section. The basic building blocks of our designer are built so that they can work with any conversational channel or platform.
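The basic alternation rule can be pictured as a simple validity check. This is a deliberately simplified sketch of the rule as described above (ignoring the advanced in-between steps), written as an assumption rather than VoiceXD's actual validation logic:

```python
# Hypothetical sketch of the designer's basic rule: a flow starts with a
# User step, every User step is followed by an Assistant step (Assistant
# steps may repeat in a row), and the flow ends on an Assistant step.

def valid_basic_flow(steps):
    if not steps or steps[0] != "user":
        return False  # every script starts with a user input
    for prev, cur in zip(steps, steps[1:]):
        if prev == "user" and cur == "user":
            return False  # a User step must be answered before the next one
    return steps[-1] == "assistant"

print(valid_basic_flow(["user", "assistant", "assistant", "user", "assistant"]))  # True
print(valid_basic_flow(["user", "user", "assistant"]))  # False
```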
What are the script view and model view?
Our script view allows you to view a single branch of your script at a time. This is a more focused and linear view for editing.
The model view feature enables you to see and modify all branches of your script on a larger canvas. By clicking on the model shapes located at the bottom left corner of the model view, you can auto-arrange the model or change its orientation to either top-down or left-right at any point.
How do I design user inputs?
To begin designing your conversation, you can add in utterances or actions from your user in the User step. You can add a User step after any Assistant step.
Click on the User step
Provide an example of what the user could say or write. Additionally, you can include a button for the user to interact with. The User step can contain either a text/voice input sample, a button, or both.
Add in additional utterances in the right side panel under training phrases to help train your assistant with the different ways a user might invoke that step
Tip: Use AI-Companion
Use our AI-companion built with GPT-3 to automatically generate training phrases for your User step. Simply click auto-generate next to the training phrases dropdown.
How do I design assistant responses?
You can define assistant responses in the Assistant step. You can add an Assistant step after any User step. You can also add multiple Assistant steps in a row. This is useful if you would like to separate the assistant's response and prompt into two different steps.
Click the + button after any User step to add an assistant response.
Click on the Assistant step
Type in a default assistant response
Add in additional response variations in the right side panel
How do I design no-match and no-input responses?
You can add no-match and no-input responses in the right side panel when selecting any Assistant step. No-match and no-input responses are added in hierarchical order, so that the "first no-match" or "first no-input" will be the first response triggered by the assistant.
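The hierarchical order amounts to a simple escalation list: the nth consecutive no-match triggers the nth response. The sketch below illustrates this with made-up responses and a made-up counter; it is not VoiceXD's actual runtime:

```python
# Hypothetical sketch of hierarchical no-match handling. The responses and
# the consecutive-no-match counter are illustrative assumptions only.

no_match_responses = [
    "Sorry, I didn't catch that.",                 # first no-match
    "I still didn't understand. Try rephrasing.",  # second no-match
    "Let me connect you with a human agent.",      # final fallback
]

def no_match_reply(consecutive_no_matches):
    # Pick the response for the nth consecutive no-match (1-based),
    # repeating the last one once the list is exhausted.
    index = min(consecutive_no_matches, len(no_match_responses)) - 1
    return no_match_responses[index]

print(no_match_reply(1))  # Sorry, I didn't catch that.
print(no_match_reply(5))  # Let me connect you with a human agent.
```

No-input responses would follow the same pattern, with a separate ordered list triggered when the user says nothing at all.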
How do I add in images?
You can add images into any Assistant step through the right panel under the Display Media section. You can add one image per Assistant step.
Tip: Use AI-Companion
Use our AI-companion built with GPT-3 to automatically generate response variations, no-match responses, and no-input responses for your Assistant step. Simply click auto-generate next to the appropriate section.
How do I remove steps from my conversation?
To remove one or more steps from any part of the conversation, just click the trash icon. The selected step and any other relevant steps will be removed from the design, and the entire conversation will be reconnected. When removing certain steps, you will have the option to delete the selected step along with some of the following steps.