Large Language Models are poised to unlock new levels of service experiences that are flexible, personalized, and engaging like never before. But not all LLMs are created equal. With the release of Cognigy.AI v4.52, we are excited to introduce a dedicated LLMs Resource in Virtual Agents that empowers you to harness and orchestrate a variety of Generative AI models to your advantage.
Flexible and Future-Proof LLM Deployment and Prompt Chaining
Since the meteoric rise of ChatGPT in late 2022, the LLM race has been relentless. In just over six months, we have witnessed numerous players joining OpenAI in the Generative AI league, from tech giants like Google and Meta to emerging startups like Anthropic. Even OpenAI's own models have undergone multiple evolutions - from text-davinci-003 to GPT-3.5 Turbo and now to the latest GPT-4.
With the AI market moving at a breakneck pace, the LLM choice is no easy task for enterprises looking to embrace this disruptive technology for CX transformation. Latency, output volume, data privacy, and price are among the leading decision factors, to name a few. The broad spectrum of LLM applications for customer service means that the technology decision often necessitates a mix-and-match approach, employing different models optimized for different use cases.
To address these challenges, Cognigy.AI v4.52 features a new native LLMs Resource section that allows you to leverage and orchestrate the right combination of Generative AI models for your virtual agents. You have the flexibility to configure any of the existing and future supported models and precisely manage which model should be adopted for each individual Generative AI feature - depending on each model's strengths and weaknesses.
Chaining LLM API Calls for Best Results
Ultimately, the ability to take full advantage of each model, combined with Cognigy's Flow Editor, allows bot designers to build powerful applications through prompt chaining: by leveraging multiple generative prompts at flexible points in a Flow, designers can define (or programmatically decide) which model to call at any step of the process. Between API calls, prompts are customized at runtime, and data can be transformed as needed every step of the way.
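To make the pattern concrete, here is a minimal sketch of prompt chaining in Python. The `call_llm` function is a stub standing in for a real LLM API request (in practice this would be an HTTP call to whichever provider a step is configured to use); the model names and prompts are purely illustrative, not part of Cognigy's API.

```python
# Prompt chaining sketch: each step chooses a model, fills a prompt
# template with the previous step's output, and transforms the result
# before handing it to the next call.

def call_llm(model: str, prompt: str) -> str:
    """Stub standing in for a real LLM API call."""
    return f"[{model}] {prompt}"

def run_chain(user_input: str) -> str:
    # Step 1: a fast, cheap model classifies the customer request.
    intent = call_llm("fast-model", f"Classify this request: {user_input}")

    # Transform between calls: keep only what the next prompt needs.
    summary = intent.split("] ", 1)[1]

    # Step 2: a stronger model drafts the customer-facing answer.
    return call_llm("strong-model", f"Answer based on: {summary}")

print(run_chain("Where is my order?"))
```

The same idea generalizes to any number of steps, with each step free to target the model whose latency, cost, or quality best fits that stage of the conversation.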
You can now find the new LLMs Resource under Build in the left menu panel. For a complete list of supported models and step-by-step instructions, refer to our documentation here.
Enhanced Call Events Handling
Over the last two releases, a new Call Events option has been added to the Voice Gateway Endpoint and the Lookup Node to enable advanced handling when a certain event is triggered. For example, when a customer call needs to be transferred from the bot to a human agent, you can ensure proper handling of events like Call Failed or Answering Machine Detection (e.g., outside service hours) through support escalation or automated callback scheduling. Likewise, a wrap-up workflow can be executed automatically after a completed call to increase operational efficiency.
Supported call events include:
- Recognized Speech
- Recognized DTMF
- Call Created
- Call Reconnected
- Call Completed
- Call Failed
- User Input Timeout
- Answering Machine Detection
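The handling logic described above can be pictured as a simple event-to-handler dispatch. The sketch below uses the event names from the list; the handler functions and escalation choices are illustrative assumptions, not Cognigy's actual implementation, which is configured visually in the Flow Editor.

```python
# Illustrative call-event dispatch: each event name maps to a
# follow-up action, mirroring the escalation and wrap-up examples.

def escalate_to_support(event: str) -> str:
    return f"escalated to support after {event}"

def schedule_callback(event: str) -> str:
    return f"callback scheduled after {event}"

def run_wrap_up(event: str) -> str:
    return f"wrap-up workflow started after {event}"

EVENT_HANDLERS = {
    "Call Failed": escalate_to_support,
    "Answering Machine Detection": schedule_callback,
    "Call Completed": run_wrap_up,
}

def handle_event(name: str) -> str:
    handler = EVENT_HANDLERS.get(name)
    return handler(name) if handler else f"default handling for {name}"

print(handle_event("Call Failed"))
```

Events without a dedicated branch simply fall through to the default path, which matches how unhandled events continue along the normal Flow.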
Other Improvements for Cognigy.AI
Cognigy Virtual Agents
- Improved session handling by resetting the user session when an Endpoint points to a new Snapshot and the old Snapshot has been deleted
- Improved the error logs on the Logs page when an invalid Flow is targeted by the Go To Node
- Optimized the performance for finding keyphrases
- Added the settings for both maintenance and out-of-business-hours modes to the Webchat Endpoint
- Removed legacy Live Chat Lite from Cognigy.AI
- Stopped displaying notifications for the Yes/No training task during project creation
- Added different icons for Change Member and Contact Profile
- Extended the Answering Machine Detection section in the Voice Gateway Transfer Node for the Dial type
- Added the Genesys section to the Handover to Agent Node, extending it with 3 new fields and a JSON field for sending custom attributes during handover
- Applied a new Continuous Counting Method to count conversations based on a 24-hour time window
- Removed unavailable Livechat Lite from the user menu and configs
Cognigy Live Agent
- Added the ability to import canned responses via CSV
- Removed the OData tags and tagging from Live Agent
- Added support for Redis Sentinel
Agent Assist Workspace
- Improved the visual appearance of the embedded Agent Assist Workspace Status screens
- Improved the Agent Assist Status screen by explaining missing query parameters
For further information, check out our complete Release Notes here.