Cognigy.AI v4.64 unveils a new native integration with the Deepgram speech-to-text service and contextualized conversations powered by the LLM Prompt Node.
Enable Dynamic Conversations with the LLM Prompt Node
Previously, the LLM Prompt Node let you send a prompt to the selected Generative AI model for content generation. While this is ideal for single-turn tasks such as summarization, sentiment analysis, intent detection, or persona definition, it is not optimized for dynamic conversations.
The latest update of Cognigy.AI now introduces a Chat mode in the LLM Prompt Node that enables context-aware multi-turn interactions. With this feature, the AI Agent can incorporate previous conversation history to navigate the dialogue more intuitively, providing precise and helpful answers to customer queries.
Consider the example below: each new user query refers to the previous exchange. The AI Agent intuitively understands that “there” refers to “Ha Long Bay,” mentioned in its last answer, and that “other landmarks” means other sights in Vietnam for which it should offer further suggestions. Note that context awareness is also available for the Search Extract and Output Node in Cognigy Knowledge AI.
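Under the hood, chat-style LLM interactions achieve this kind of context awareness by resending the accumulated message history with every turn, so the model can resolve references like “there” against earlier messages. The following Python sketch illustrates that general pattern; the message roles follow the common chat-completion convention, and `build_chat_request` is a hypothetical helper, not part of Cognigy.AI or any specific LLM SDK.

```python
# Illustrative sketch of multi-turn context handling in chat-style LLM APIs.
# build_chat_request is a hypothetical helper; the "system"/"user"/"assistant"
# roles follow the widely used chat-completion message format.

def build_chat_request(history, user_message,
                       system_prompt="You are a helpful travel assistant."):
    """Assemble the full message list the model needs so it can resolve
    references like "there" against earlier turns."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # prior user/assistant turns carried along
    messages.append({"role": "user", "content": user_message})
    return messages

# Example: the second turn includes the first exchange, giving the model
# the context to understand what "there" refers to.
history = [
    {"role": "user", "content": "What is a famous landmark in Vietnam?"},
    {"role": "assistant",
     "content": "Ha Long Bay is one of Vietnam's most famous landmarks."},
]
request = build_chat_request(history, "How do I get there?")
```

In Chat mode, the LLM Prompt Node manages this history for you; the sketch only shows why resending prior turns lets the model answer follow-up questions precisely.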
In addition to the Chat mode, you can now select your preferred Generative AI model directly from the LLM Prompt Node. This means you can leverage different LLMs for different LLM Prompt Nodes and use cases within the same Flow.
Debug logging can also be activated to record the token count as well as the request and completion text in Cognigy Logs for troubleshooting purposes.
Enhanced STT Flexibility with Deepgram Integration
With v4.64, Cognigy.AI gives you more freedom of choice for speech recognition solutions, adding native support for Deepgram.
Deepgram differentiates itself from other speech-to-text providers through its deep learning technology, speed, customizability, and real-time processing capabilities.
- Deep learning focus: Through the extensive use of deep learning in its STT model, Deepgram boasts a 22% improvement in recognition accuracy. It can effectively handle background noise, different accents, and varied speech patterns in challenging audio environments.
- Real-time streaming: Deepgram is designed to provide fast and accurate real-time transcription with <300ms latency and an average 30% reduction in word error rate (WER).
- Speed: Its latest model is reported to achieve a median inference time of 29.8 seconds per hour of diarized audio, outpacing comparable vendors by 5 to 40 times.
- Customizability: Deepgram allows for the creation of custom models suited to specific use cases or industries, offering potentially higher accuracy for unique terminologies or specific audio types, like unique accents or noisy backgrounds.
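With the native integration, Cognigy.AI handles the Deepgram connection for you once it is configured. For orientation, the sketch below shows roughly what a direct call to Deepgram's pre-recorded transcription endpoint (`/v1/listen`) looks like; the API key and the `nova-2` model name are placeholder example values, and the helper function is hypothetical.

```python
from urllib.parse import urlencode

# Rough sketch of a Deepgram pre-recorded transcription request.
# DEEPGRAM_API_KEY is a placeholder; within Cognigy.AI the native
# integration builds and sends this call on your behalf.
DEEPGRAM_API_KEY = "your-api-key"

def build_transcription_request(model="nova-2", language="en",
                                punctuate=True):
    """Return the URL and headers for Deepgram's /v1/listen endpoint."""
    params = urlencode({
        "model": model,            # example model name
        "language": language,
        "punctuate": str(punctuate).lower(),
    })
    url = f"https://api.deepgram.com/v1/listen?{params}"
    headers = {
        "Authorization": f"Token {DEEPGRAM_API_KEY}",
        "Content-Type": "audio/wav",  # raw audio is sent as the body
    }
    return url, headers

url, headers = build_transcription_request()
```

Options such as the model or language are part of Deepgram's query parameters; in Cognigy.AI these choices surface as settings on the speech connection rather than hand-written requests.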
Other Improvements for Cognigy.AI
Cognigy Virtual Agents
- Stored the translation provider credentials as connections
- Changed the color of Cognigy product logos in the user menu
- Added the capability to set up a custom URL to the LLM for the Microsoft Azure OpenAI provider
- Removed the Twilio Autopilot Endpoint and Twilio Autopilot built-in NLU connector
- Exposed the knowledge query count per project or organization for each day in a month or a year via the REST API
- Prevented Genesys Bot Connector sessions from being interrupted by consecutive identical responses from Cognigy.AI
For further information, check out our complete Release Notes here.