Large language models are generating enormous hype right now. It resembles the excitement when Siri first introduced a mass market to interacting with technology through natural language. That was one of the first times we could interact with technology in a way that was natural to us, not to computers. While the path for conversational AI since then hasn't been the smoothest, the recent boom in interest reflects what many people within the industry already knew and believed: natural language conversations with technology are the future. The question is, how do we use this technology in a way that provides people with real value?
Just because something carries the AI label or counts as deep tech doesn't necessarily mean it solves real human problems. While they are a great feat of technological innovation, LLMs out of the box have inherent flaws that can result in poor user experiences.
Large language models are trained on enormous amounts of human-written text, which is inevitably subject to human bias. The benefit is that applications like ChatGPT can provide information about nearly anything (or at least they think they can). However, given the scale of the data used to train models like GPT, there is a high risk of incorrect, abusive, or offensive content making it through the training process. Without any insight into an LLM's decision-making black box, putting one out to chat with customers is currently a high-risk effort.
Similarly, because LLMs are trained on broad, general-purpose data, they are not prepared to handle business-specific use cases that require knowledge of the business, contextual information about customers, and rules for how to interact with them. If left unrestricted, the conversation can also quickly drift away from the business objective, and your assistant can lose focus on solving its core use cases.
While there are many technology-related reasons that conversational assistants have seen limited success in widespread adoption, one of the biggest remaining issues is that these assistants need to solve real user problems in effective ways. This reveals a design problem at the core of many of the negative assistant experiences people have today. Impactful user experiences aren't generated by great technology alone; they are built by people, which is why large language models will never work as a singular, out-of-the-box solution for any domain or use case.
Even though LLMs have their fair share of flaws, there are practical ways to use them to improve your existing conversational experiences today. At VoiceXD, we believe that you need great designers and developers to create useful and usable conversational assistants, and we view LLMs as another tool in their toolkit. We see LLMs providing significant value in three specific areas:
1) Augmenting the workflows of designers and developers building purpose-driven assistants
2) Making intent-based assistants less rigid by allowing them to better incorporate real-time context
3) Handling edge cases that the assistant hasn't been trained for
LLMs have a major advantage over people when it comes to producing a large amount of content fast. With the right prompt engineering, an LLM can speed up the process of generating contextually relevant content for any use case or domain. For a conversation designer, this can expedite generating utterances, assistant responses, variable values, and even entire sample conversations. This content can also be customized for different scenarios, tones, user groups, and languages. In a controlled environment, AI content generation can significantly reduce the time and cost of getting an assistant up and running or of expanding its functionality.
LLMs can also extend creative possibilities. Using AI as a companion, conversational AI teams can quickly generate and explore many different possibilities for solving a particular use case and have a more fruitful ideation process.
It's hard to design for every scenario, context, and user your assistant might encounter, and with an intent-based structure, the possibilities for customization are even more limited. LLMs make understanding and handling new scenarios and contexts much easier than before. By allowing an LLM to work in real time alongside your predesigned NLU and dialogue flows, you can generate or augment your assistant's messages with real-time context from the user, producing a more personalized and relevant response.
LLMs can also serve as a great discovery tool for teams. By reviewing transcripts of conversations between users and your assistant (working in tandem with an LLM), designers and developers can explore a growing number of contexts and situations and use that data to improve the core assistant itself.
Another area where LLMs are valuable is providing an intelligent safeguard for your assistant when the user interacts with it in an unexpected way, avoiding the dreaded endless loop of understanding errors. By using the context of what your user says and the information at its disposal, the LLM can help redirect the user back to a relevant use case. This is a significant improvement over the assistants that exist today and can greatly decrease the likelihood of a user exiting the conversation with a negative experience or a negative impression of the business.
At VoiceXD, we are using LLMs as the foundation for AI-Companion, our design assistant that lets you generate content with AI, including entire conversation models.
Use AI-Companion to generate training phrases for your User steps based on the default utterance. If your utterance includes one or more variables, AI-Companion will attempt to generate training phrases that include all of them.
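To make "include all of the variables" concrete, here is a small sketch (our own illustration, not AI-Companion internals) of checking that a generated phrase preserves every `{variable}` placeholder from the default utterance.

```python
import re

def variables_in(utterance: str) -> set[str]:
    """Extract {variable} placeholders, e.g. 'fly to {city}' -> {'city'}."""
    return set(re.findall(r"\{(\w+)\}", utterance))

def covers_all_variables(default: str, phrase: str) -> bool:
    """True if the generated phrase keeps every variable from the default."""
    return variables_in(default) <= variables_in(phrase)
```

A check like this can act as a post-generation filter, discarding candidate phrases that dropped a variable before they ever reach the training data.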
Similarly, you can generate responses, no-matches, and no-inputs for your assistant at any System step based on a sample default response.
You can also generate values and synonyms for both dialogue and context variables using auto-generate. Simply add a new variable, assign it a type, and click auto-generate.
Last but not least, we've added the ability to generate an entire conversation flow by simply providing a description of what the conversation goal is. Not only will AI-Companion auto-generate a series of steps between the user and assistant, but it will also include your existing variables or create new ones where it makes sense to do so. You can learn more here.
What we've built so far is just the start. We are exploring many new ways of incorporating LLMs into the conversation design workflow, as well as ways to augment your own assistant with an LLM working alongside it. This includes setting your own prompts for an LLM to generate content in real time, using an LLM to enhance pre-written assistant responses with context from the user utterance, and much more!