Nhu Ho · January 11, 2024

From swift Locale migration to enhanced STT/TTS flexibility and streamlined voice design, here’s what awaits you in Cognigy.AI v4.67.

Transfer Locales between AI Agent Projects with Ease

To facilitate multi-lingual self-service, Cognigy lets you configure multiple Locales within an AI Agent to swiftly localize resources like Flows and Intents for diverse markets and target audiences.

That said, in certain cases, you might prefer developing independent AI Agents for distinct languages and markets – rather than having a single multi-locale AI Agent.

The newest release makes this faster and easier than ever by introducing the ability to migrate Locales across AI Agent projects within seconds. Let’s say you’ve built a master Agent with Flow and Intent templates that you want to replicate in other market-specific Agents. With v4.67, you can simply export any secondary Locale from the master Agent and import it as the primary Locale in a different Agent project.


Global Voice Settings in the Voice Gateway Endpoint

Your AI Agent’s voice reflects your brand identity. To help you design the perfect voice experience, Cognigy’s SSML Editor lets you granularly fine-tune every text-to-speech output within the Say and Question Nodes.
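For example, a prompt entered in the SSML Editor might look like the following sketch (illustrative only – the exact set of supported tags depends on your TTS provider):

    <speak>
      Welcome to our service hotline.
      <break time="400ms"/>
      <prosody rate="90%">Please note that this call may be recorded.</prosody>
      How can I help you today?
    </speak>

Here, the break tag inserts a short pause and the prosody tag slightly slows down the compliance notice, without affecting any other output in the Flow.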

Adding to this, we have introduced a new Generic Settings section in the Voice Gateway Endpoint that applies voice configuration globally across all outputs and sessions, helping you streamline the VUX design process.

The Generic Settings also include an option to send only the best transcript from the speech provider through the input.data object, filtering out less relevant results within Cognigy.AI.
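To illustrate what this means in practice, here is a minimal TypeScript sketch of the idea: out of several STT alternatives, only the highest-confidence transcript is kept. The property names (transcript, confidence) are assumptions for illustration – the actual shape of input.data depends on your speech provider.

    // Keep only the highest-confidence STT alternative ("best transcript").
    // Field names are illustrative and vary by speech provider.
    interface SttAlternative {
      transcript: string;
      confidence: number;
    }

    function bestTranscript(alternatives: SttAlternative[]): string | undefined {
      return [...alternatives].sort((a, b) => b.confidence - a.confidence)[0]?.transcript;
    }

    // With the setting enabled, only a single result like the first entry
    // below reaches your Flow via input.data, instead of the full list.
    console.log(bestTranscript([
      { transcript: "I want to book a flight", confidence: 0.94 },
      { transcript: "I want to look a flight", confidence: 0.61 },
    ])); // "I want to book a flight"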

Generic Voice Settings - Cognigy.AI

Extended Speech Model and Vendor Support

Our latest release also features two new native STT and TTS integrations, broadening your options for state-of-the-art speech technology.

  • Deepgram’s STT Nova-2: Nova-2 is Deepgram’s most advanced STT model to date, joining the Base, Enhanced, and Nova-1 models that Cognigy.AI already supports. It reportedly outperforms competing models with an average 30% lower word error rate (WER) and 5-40x faster performance – all at 3-5x lower costs.

  • ElevenLabs TTS Models: Last year, ElevenLabs made headlines with its AI-powered voice-generation platform specializing in emotionally rich, human-quality speech synthesis. In addition to supporting 29 languages with diverse accents, it offers innovative voice cloning capabilities that allow users to profile and replicate a voice from just a few minutes of audio.

    With the new integration, Cognigy supports all the latest ElevenLabs TTS models, expanding your freedom of choice in voice synthesis. For the lowest latency, we recommend the Turbo v2 model.

Other Improvements for Cognigy.AI

Cognigy Virtual Agents

  • Added the Referred By option to the Refer transfer type in both the Call Failover and Call Events (specifically for Transfer Dial Error and Transfer Refer Error events) sections within the Voice Gateway Endpoint
  • Improved the Voice Gateway Endpoint by adding a setting to trim input.data so that only the most relevant transcript data is passed on

Cognigy Insights

  • Improved the UI of the Insights dashboards
  • Changed the dashboards' width to use all the available space on the screen

Cognigy Live Agent

  • Added the capability for human agents to create draft messages. If a reply or a private message is composed but not sent immediately, the message will be saved for 24 hours
  • Renamed Live Agent Assist Bot to Copilot Bot

For further information, check out our complete Release Notes here.
