Seamless Operation with LLM Fallback and More in Cognigy.AI v4.90

Nhu Ho · November 29, 2024
LLM Fallback

Cognigy.AI v4.90 introduces LLM Fallback to help you optimize service continuity when leveraging Generative AI for your AI Agents.

Maintain Seamless Performance of LLM-Powered AI Agents

LLM connections can occasionally fail for a variety of reasons, such as outdated or incorrect credentials, exceeded rate or token limits, or server timeouts.

To combat these challenges, Cognigy’s LLM Fallback ensures continuous operation by automatically activating an alternative model when the primary one encounters an issue. This proactive safeguard minimizes downtime and keeps your AI Workforce running smoothly.

Key Benefits:

  • Improved service reliability and user experience
  • Rapid recovery with automatic fallback switching during outages
  • Increased flexibility by deploying region-specific backups or using fallback models to handle high-demand periods

How it Works:

You can set up a fallback for any chat or completion LLM by adding or selecting another model, regardless of vendor, within your LLM Resource.

The fallback model acts as a temporary replacement when the primary model encounters issues. At the same time, an email notification can be triggered to flag that the primary model requires attention, allowing for timely resolution.
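Conceptually, the behavior resembles a try/except wrapper around the primary model call. The sketch below is hypothetical Python, not Cognigy's implementation; call_llm, send_alert_email, and the model names are illustrative stand-ins:

    # Hypothetical sketch of the fallback pattern; call_llm and
    # send_alert_email are illustrative stand-ins, not Cognigy APIs.

    def call_llm(model: str, prompt: str) -> str:
        """Stand-in for a real chat/completion request to the given model."""
        if model == "primary-gpt":
            # Simulate a failure such as expired credentials or a rate limit.
            raise RuntimeError("401: invalid credentials")
        return f"[{model}] response to: {prompt}"

    def send_alert_email(model: str, error: Exception) -> None:
        """Stand-in for the notification that the primary model needs attention."""
        print(f"ALERT: model '{model}' failed ({error}); please investigate")

    def generate_with_fallback(prompt: str, primary: str, fallback: str) -> str:
        """Try the primary model; on failure, notify and retry with the fallback."""
        try:
            return call_llm(primary, prompt)
        except Exception as err:  # credentials, rate/token limits, timeouts, ...
            send_alert_email(primary, err)
            return call_llm(fallback, prompt)

    print(generate_with_fallback("Hello!", primary="primary-gpt", fallback="backup-model"))

In Cognigy.AI itself, this switch is configured declaratively on the LLM Resource rather than in code, so no flow changes are needed to benefit from it.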

Other Improvements

Cognigy.AI 

  • Added a feature flag to activate and deactivate the OAuth2 connection type for the Azure OpenAI provider
  • Granted users with the fullSupportUser role access to view Audit Events
  • Renamed the Fallback Text field to Textual Description in Say Nodes
  • Resolved a performance issue that slowed NLU Intent Training
  • Renamed the Complete Goal Node to Complete Task and changed Goals to Tasks on the Insights Overview and Engagement dashboards

Cognigy Voice Gateway

Agent Copilot

For further information, check out our complete Release Notes here.
