Generative AI Research Application

THE CHALLENGE
Our client, a digital agency, wanted to reduce the time and cost of manually analyzing user interviews and call-center transcripts: affinity mapping, synthesis, insight generation, and persona creation.
OUR PROCESS
We realized from the start that Large Language Models (LLMs) were going to be an integral part of the solution, but we needed to limit token usage to keep costs down. Our approach was to first understand how leading human researchers performed this task manually. We interviewed leading researchers (luckily, we know a few!). Once we had the process mapped, we started exploring which steps could leverage available libraries and which would require LLM calls, with their associated token costs.
Another perennial challenge is the context window. The manual research approach is iterative, and we mimicked that workflow, but we needed to maintain traceability back to the various data sources. This requires breaking large data sets into bite-sized chunks without losing the context of the original data. That traceability matters because it lets our customers take an AI-generated report and locate the observations, and the context, from which each insight was derived: an essential part of keeping humans in the loop and building communication and trust.
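The chunking idea above can be sketched in a few lines. This is a minimal illustration, not the client's actual implementation: the names (`Chunk`, `chunk_transcript`) and parameters are hypothetical, and a production version would likely chunk by semantic boundaries rather than fixed line counts. The key point it shows is that each chunk carries metadata pointing back to its source, so an insight can always be traced to the original transcript passage.

```python
# Sketch: context-preserving chunking with source traceability.
# All names and parameters here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Chunk:
    source_id: str   # which transcript this chunk came from
    start_line: int  # position in the original, for tracing insights back
    end_line: int
    text: str

def chunk_transcript(source_id, lines, max_lines=10, overlap=2):
    """Split a transcript into overlapping chunks.

    The overlap carries surrounding context across chunk boundaries,
    and each chunk records exactly where it sits in the source."""
    chunks = []
    step = max_lines - overlap
    for start in range(0, len(lines), step):
        window = lines[start:start + max_lines]
        chunks.append(Chunk(source_id, start, start + len(window) - 1,
                            "\n".join(window)))
        if start + max_lines >= len(lines):
            break
    return chunks
```

Each chunk can then be sent to the LLM independently, and any insight it produces carries `source_id` and line positions back to the original interview.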
Once we had the functionality established, we moved on to create a simple, intuitive front end so that all the complexity of the app became invisible to the end user. We created attractive, easy-to-digest output formats with the flexibility to customize reports and dive deep into individual insights.
OUTCOMES
Our intuitive interface and efficient solution design reduced analysis and synthesis time in the MVP release from 24 hours to 2, a 12x efficiency gain.
Let's create something beautiful together