An AI-enhanced customer service experience led to happier customers and shorter waits for answers.
The opportunity
A business asked us to craft a solution to reduce labor-intensive, time-consuming calls to its customer service agents.
Our solution
We created a Flask-based web application that curates knowledge from FAQs and other documentation. Through an easy-to-use LLM interface, customers can now search that documentation and receive both an answer and cited links to reference for further follow-up.
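At a high level, the interface boils down to one endpoint that retrieves relevant passages, asks the model for an answer, and returns both the answer and its sources. The sketch below is illustrative only: `retrieve_passages` and `generate_answer` are hypothetical stand-ins for the real retrieval and LLM calls, which the case study does not detail.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def retrieve_passages(question):
    # Placeholder: in the real system this would query the vector store
    # for the FAQ/documentation passages most similar to the question.
    return [{"text": "Password resets live under Account > Security.",
             "url": "https://example.com/faq/password-reset"}]

def generate_answer(question, passages):
    # Placeholder: in the real system an LLM would answer the question
    # using the retrieved passages as context.
    return "You can reset your password under Account > Security."

@app.route("/ask", methods=["POST"])
def ask():
    question = request.get_json()["question"]
    passages = retrieve_passages(question)
    answer = generate_answer(question, passages)
    # Return the answer together with cited source links for follow-up.
    return jsonify({"answer": answer,
                    "sources": [p["url"] for p in passages]})
```

Returning the source URLs alongside the generated answer is what lets customers verify the response and dig deeper on their own.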
Our tooling ran primarily in the AWS cloud: distributed application serving, AWS-managed databases, and container-based deployment. We worked mainly in Python, HTML, CSS, and JavaScript. To get documentation (FAQs, site information, etc.) into the model, we used an embedding model from a popular LLM provider, storing each document in our AWS database alongside its embedding.
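The ingestion-and-search flow can be sketched as follows. To keep the example self-contained, `embed` is a toy bag-of-words stand-in for the hosted embedding model, and an in-memory list stands in for the AWS database; the names `ingest` and `search` are our own for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy word-count "embedding"; the production system would call a
    # real embedding model from an LLM provider instead.
    return dict(Counter(text.lower().split()))

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in for the AWS database table of (document, embedding) rows.
store = []

def ingest(doc_id, text, url):
    # Embed each document once at ingestion time and store the vector
    # next to the document, as described above.
    store.append({"id": doc_id, "text": text, "url": url,
                  "embedding": embed(text)})

def search(question, top_k=1):
    # Embed the customer's question and rank stored documents by
    # similarity; the top hits become the LLM's context and citations.
    q = embed(question)
    ranked = sorted(store, key=lambda row: cosine(q, row["embedding"]),
                    reverse=True)
    return ranked[:top_k]
```

For example, after ingesting a password-reset FAQ and a shipping FAQ, `search("reset password")` ranks the password-reset document first, and its URL is what the interface cites back to the customer.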
The results
This retrieval step gave the application context and specialized knowledge that large language models do not possess on their own, coupled with generative AI's ability to reason over that context and answer questions.
Surveys showed that customers preferred getting an answer in real time, and overall customer NPS improved.
Call center wait time decreased by 24%, and customers answering “strongly agree” to “this resolved my question” increased by 35%.
Get in touch.
We’d love to speak with you about how we can design and implement AI solutions to help your business thrive.