Design is inherently a creative pursuit. It's our way of taking our knowledge of the world around us, combining it with our own imagination, and outputting a reconstruction of those ideas in a way that can create a new impact in the world.
At VoiceXD, we think about conversation design in the same light. It's another method of combining our knowledge about the world around us and creativity to build conversational assistants that can provide value to people and businesses. When we design experiences with an innovative technology such as conversational AI, we need the freedom to explore what's possible to take full advantage of it. We need to think beyond the constraints that exist today.
It's the reason we built VoiceXD, a conversation design tool that allows you to design a conversational assistant without needing to create a single intent.
Intent-based assistants benefit from a much-needed structure when processing inputs from the user. Since every user utterance is classified into an intent based on its perceived meaning and similarity to other utterances, there is a high level of organization when it comes to designing your assistant. Intents also provide a more scalable approach than rule-based flows by acting as reusable components. This means that designers can use intents as building blocks for a larger "conversation design system" to be used across different conversational interfaces and channels.
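To make the classification idea concrete, here is a minimal sketch. The intent names, training phrases, threshold, and token-overlap metric are all hypothetical; a production NLU uses trained models rather than simple word overlap, but the organizing principle — every utterance gets matched against each intent's examples — is the same:

```python
def tokenize(text):
    return set(text.lower().split())

def similarity(a, b):
    # Jaccard overlap between token sets: a crude stand-in for the
    # semantic similarity a trained NLU model would compute
    ta, tb = tokenize(a), tokenize(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# hypothetical intents, each defined by a few example phrases
INTENTS = {
    "check_balance": ["what is my balance", "show my account balance"],
    "transfer_money": ["transfer funds to savings", "send money to my account"],
}

def classify(utterance, threshold=0.3):
    # pick the intent whose example phrase best matches the utterance,
    # or None if nothing clears the confidence threshold
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            score = similarity(utterance, phrase)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None
```

The constraint is visible right away: every utterance must either land in one of the predefined buckets or fall through to a fallback, which is both the organizing strength and, as we'll see, the limitation.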
As with most design systems however, there comes a time to "detach the instances" when the components themselves begin to restrict the quality of the output.
When you're designing a new chatbot or voice assistant, it's important to understand what the user needs and how your assistant might help them achieve a particular goal. Creating sample scripts and storyboards is a helpful way of beginning this process; however, many design and development platforms require that you connect everything to an "intent" right from the beginning. This can be limiting because it doesn't allow you to explore different possibilities and get creative with your dialogue. It also means creating these early drafts in software like Excel, Visio, etc., which may not be very adaptable once you start building your assistant in a design tool or working with a developer.
Creating engaging conversations that feel natural and intuitive to users requires a more flexible approach than just focusing on intents. Designers need tools that support messy creativity at the start but also make it easy to reuse that output in useful ways further down the design and development process.
At VoiceXD, we believe that an intent is what a user wants to achieve in a conversation, not everything a user says. In the example below, if your goal is to learn about "what an intent is", saying "yes" to a question within the dialogue shouldn't also be classified as an intent; however, most development platforms put everything the user says into intents. Defining intents for every user dialogue makes the conversation feel stiff and less natural. To create more natural conversations, it's important for design and development tools to distinguish the intent from the dialogue by focusing on the meaning of a user's words rather than simply categorizing them.
Since intents rely on predefined phrases or keywords, they may miss the nuances of a user's language or fail to understand the context of the conversation. This can lead to incorrect responses or a failure to understand the users. Additionally, users may have different ways of expressing the same intent, making it difficult for the system to accurately understand what they mean.
As your chatbot or voice assistant becomes more advanced and you add more ways for it to help users, you may find that some requests sound the same but have different meanings. For example, a user might say "transfer money to another account" or "transfer money to a friend," which could be interpreted as the same thing, even though they're actually different actions. To manage these conflicts, you can break down or combine intents, but as the list of intents grows, it becomes harder to keep track of everything and make sure the assistant understands what the user wants.
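One way teams catch these collisions early is to compare training phrases across intents. The sketch below is a hypothetical illustration (invented intent names and a simple token-overlap score, not any platform's real conflict checker) that flags phrase pairs from different intents that look suspiciously alike:

```python
def tokens(text):
    return set(text.lower().split())

def overlap(a, b):
    # Jaccard similarity between token sets
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_conflicts(intents, threshold=0.4):
    # flag phrases from *different* intents that score above the threshold
    phrases = [(name, p) for name, ps in intents.items() for p in ps]
    conflicts = []
    for i, (name_a, a) in enumerate(phrases):
        for name_b, b in phrases[i + 1:]:
            if name_a != name_b and overlap(a, b) >= threshold:
                conflicts.append((name_a, a, name_b, b))
    return conflicts

# the two "transfer money" requests from above, split into separate intents
intents = {
    "transfer_internal": ["transfer money to another account"],
    "transfer_p2p": ["transfer money to a friend"],
}
```

Running `find_conflicts(intents)` flags the pair, showing why the bookkeeping burden grows with every intent you add: each new training phrase has to be checked against all the others.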
Most chat and voice assistant platforms require that you use intents to build conversations. At VoiceXD, we wanted to create a conversation design platform that allows designers to be creative and think outside the box without being limited by these technical constraints.
We also want to prepare for the future, where conversational AI systems won't rely solely on intents, as we are already seeing with LLM-supported systems. To achieve this, we built a design platform around the fundamental structure of a conversation: a back-and-forth between two people. This way, designers can create conversations that work on any platform, with or without intents.
In VoiceXD, there are three main building blocks within each project: user inputs, assistant responses, and scripts.
User inputs allow you to capture any way a user might interact with the assistant (text, voice, or buttons), and assistant responses represent the ways in which the assistant may interact with the user (text, voice, or images). User inputs can exist as individual dialogues or can be connected to an intent.
Scripts are used to organize different conversation paths. They act like containers that can hold a complete conversation or just a part of a larger conversation. Each script starts with a user's input, but it doesn't have to be marked as a specific intent.
These building blocks are independent of any particular platform or channel, which means you can design for voice, chat, or both all within a single project.
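The relationships between the three building blocks can be pictured as a simple data model. This is only an illustrative sketch of the concepts described above, not VoiceXD's actual schema; all class and field names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class UserInput:
    text: str
    intent: Optional[str] = None  # optional: a dialogue need not be an intent

@dataclass
class AssistantResponse:
    text: str

@dataclass
class Script:
    # a container for a full conversation or a fragment of one;
    # it starts with a user input but needs no intent label
    name: str
    turns: List[Union[UserInput, AssistantResponse]] = field(default_factory=list)

greeting = Script(
    name="greeting",
    turns=[
        UserInput("hi there"),  # no intent attached
        AssistantResponse("Hello! How can I help?"),
    ],
)
```

The key design choice is that the intent label lives on a user input as optional metadata, rather than being the unit the conversation is built from.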
Whether you are creating an early sample script for a new assistant or modifying a feature in production, you can create and manage all of your designs in one platform. You can even test entire conversations just by creating user-assistant dialogues with no intents labelled.
Having a single source of truth for each phase of the design process makes it faster to diverge on lots of design ideas and to collaborate with other team members, ensuring that new designs meet the needs and expectations of users more effectively. It also becomes easier to evaluate feasibility and to ensure consistency across the entire user experience, from initial ideation to final implementation.
We understand that most providers still require that you label all of your user dialogues as intents, but we give you the flexibility to easily revert any intent back to a regular user dialogue with the click of a button.
This can help future-proof your designs and ensure that you have the freedom to create more natural, user-friendly conversations once your development platform supports it, without having to re-design your entire assistant. We also believe that this approach will allow designers and teams to begin exploring more effective ways of using intents. As platforms evolve to support more advanced features such as NLUs supported by large language models, this approach will help teams stay ahead of the game.
In VoiceXD, you can create scripts that can either represent an entire conversation flow or a smaller segment of it. You have the freedom to choose the level of detail for each script and they are completely disconnected from intents.
A powerful feature of scripts is that they can be accessed by one another. For instance, a script for resetting passwords can be accessed from another script that deals with user account management. Instead of using intents to jump between scripts, you can choose when a script is accessible by defining its scope. You can place it into a specific group (such as Home, Navigation, or Music below) or mark it as globally accessible 🌐 from any group.
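As a rough sketch of how scope-based reachability might work — the group names come from the example above, but the data shapes and function are hypothetical, not VoiceXD's implementation:

```python
# each script declares a scope: a specific group, or "global" (🌐)
scripts = [
    {"name": "reset_password", "scope": "global"},  # reachable from any group
    {"name": "play_song", "scope": "Music"},
    {"name": "go_home", "scope": "Navigation"},
]

def accessible_scripts(scripts, current_group):
    # a script is reachable if it is global or belongs to the active group
    return [s["name"] for s in scripts
            if s["scope"] in ("global", current_group)]
```

From the Music group, both `reset_password` and `play_song` are reachable; from Home, only the global `reset_password` is. Scope, rather than an intent, decides where the conversation can go next.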
Within an individual assistant response, you can also control which scripts are accessible at that point of the conversation. We believe that this adds an immense amount of design flexibility and allows you to simulate contextual conversations without needing to define intents. We will cover this topic more in another article.
We are just starting to scratch the surface, and as we grow and improve our design and testing capabilities, our goal is to create simple tools that anyone can use to make intent-augmented assistants for any platform or channel. We are currently developing new features to support teams that are building assistants that use a mix of pre-defined and real-time AI-generated responses, incorporate real-time conversation context into messages, and use LLMs to back up the assistant when the NLU itself can't resolve a user input. If you'd like to read about the ways that we are already using LLMs, check out Large language models: The next step towards a more conversational future.
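The LLM-fallback pattern mentioned above can be sketched in a few lines. Everything here is a stub with hypothetical names — the scripted response, the NLU, and the LLM call all stand in for real services:

```python
# hypothetical pre-defined responses, keyed by intent
SCRIPTED = {
    "check_balance": "Your balance is $120.",
}

def nlu_classify(utterance):
    # stand-in for a real NLU; returns an intent name or None
    return "check_balance" if "balance" in utterance.lower() else None

def llm_generate(utterance):
    # stand-in for a real LLM call
    return f"[LLM-generated reply to: {utterance!r}]"

def handle(utterance):
    intent = nlu_classify(utterance)
    if intent in SCRIPTED:
        return SCRIPTED[intent]     # pre-defined response
    return llm_generate(utterance)  # fall back to the LLM
```

The point of the pattern is that the NLU keeps handling what it resolves confidently, while inputs it can't resolve are answered generatively instead of hitting a dead-end fallback message.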
NLU management (intent marking) and script scope management features are currently only available in private beta, but we will be rolling them out to all users next week. We'd love to hear your questions and perspectives on what we are building, so feel free to reach out at email@example.com.