Large Language Models are poised to unlock new levels of service experiences that are flexible, personalized, and engaging like never before. But not all LLMs are created equal. With the release of Cognigy.AI v4.52, we are excited to introduce a dedicated LLMs Resource in Virtual Agents that empowers you to harness and orchestrate a variety of Generative AI models to your advantage.
Since the meteoric rise of ChatGPT in late 2022, the LLM race has been relentless. In just over six months, we have witnessed numerous players joining OpenAI in the Generative AI league, from tech giants like Google and Meta to emerging startups like Anthropic. Even ChatGPT itself has undergone multiple evolutions - from text-davinci-003 to GPT-3.5 Turbo and now to the latest GPT-4 model.
With the AI market moving at a breakneck pace, choosing an LLM is no easy task for enterprises looking to embrace this disruptive technology for CX transformation. Latency, output volume, data privacy, and price are among the leading decision factors. The broad spectrum of LLM applications for customer service means that the technology decision often necessitates a mix-and-match approach, employing different models optimized for different use cases.
To address these challenges, Cognigy.AI v4.52 features a new native LLMs Resource section that allows you to leverage and orchestrate the right combination of Generative AI models for your virtual agents. You have the flexibility to configure any of the existing and future supported models and precisely manage which model is used for each individual Generative AI feature - depending on each model's strengths and weaknesses.
Ultimately, the ability to take full advantage of each model, combined with Cognigy's Flow Editor, allows you to build powerful applications through prompt chaining: by leveraging multiple generative prompts together in flexible locations, bot designers can define (or programmatically decide) which model to call at any step of the process. In between API calls, prompts are customized at runtime, and data can be transformed as needed at every step of the way.
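To make the idea concrete, here is a minimal, hypothetical sketch of prompt chaining with per-step model selection. The model names (`fast-model`, `strong-model`) and the `call_llm` helper are purely illustrative stand-ins, not Cognigy or provider APIs; in a real flow, each call would go to a configured LLM Resource.

```python
# Hypothetical sketch: chain two prompts and pick a model at each step.
# call_llm() simulates a provider call so the example is self-contained.

def call_llm(model: str, prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a tagged echo for demo purposes."""
    return f"[{model}] response to: {prompt}"

def classify_intent(user_message: str) -> str:
    """Step 1: a fast, cheap model classifies the request."""
    return call_llm("fast-model", f"Classify the intent of: {user_message}")

def draft_answer(user_message: str, intent: str) -> str:
    """Step 2: route to a stronger model only when the intent warrants it."""
    model = "strong-model" if "complaint" in intent else "fast-model"
    prompt = f"Intent: {intent}\nWrite a helpful reply to: {user_message}"
    return call_llm(model, prompt)

def run_chain(user_message: str) -> str:
    intent = classify_intent(user_message)
    # Between calls, data can be transformed as needed (trimmed here).
    return draft_answer(user_message, intent.strip())
```

The routing condition is where a bot designer's logic lives: each step can inspect the previous step's output and choose the most suitable model before composing the next prompt.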
You can now find the new LLMs Resource under Build in the left menu panel. For a complete list of supported models and step-by-step instructions, refer to our documentation here.
Read more about Multi-Model Orchestration and Prompt Chaining.
With the last two releases, a new Call Events option has been added to the Voice Gateway Endpoint and the Lookup Node to enable advanced handling when a certain event is triggered. For example, when a customer call needs to be transferred from the bot to a human agent for handover, you can ensure proper handling of events like Call Failed or Answering Machine Detection (e.g., outside service hours) through support escalation or automated callback scheduling. Likewise, a wrap-up workflow can be automatically executed after a completed call to increase operational efficiency.
Supported call events include:
For further information, check out our complete Release Notes here.