Cognigy.AI v4.61 brings native GPT-4 support, along with advanced error-handling features for voice applications.
Leverage GPT-4 for Knowledge AI and LLM Prompting
Cognigy continues to expand its LLM support. With this update, you can select GPT-4 as your preferred Generative AI model for the LLM Prompt Node and LLM-powered Answer Extraction.
- The LLM Prompt Node lets you send prompts directly to GPT-4 for a wide variety of tasks, such as sentiment analysis, intent classification, output enhancement, summarization, persona setting, and more.
- The Search Extract Output Node, or LLM-powered Answer Extraction, lets you use Generative AI to summarize answers extracted from a Knowledge Store.
GPT-4 outperforms GPT-3.5 on intricate language tasks involving complex, nuanced instructions, and it can understand and generate text in more languages with greater precision. However, it comes at a significantly higher price, and for casual conversations or simpler tasks the difference between the two models is subtle.
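As an illustration only (this is not Cognigy's internal implementation), the kind of request an LLM Prompt Node might issue to GPT-4 for a sentiment-analysis task can be sketched as a standard chat-completion payload; the model name and parameters shown are assumptions:

```python
# Illustrative sketch of a chat-completion request for sentiment analysis.
# The payload shape follows the common chat-completion convention; nothing
# here reflects Cognigy internals.

def build_sentiment_prompt(user_text: str, model: str = "gpt-4") -> dict:
    """Build a chat-completion payload that classifies the sentiment of user_text."""
    return {
        "model": model,
        "temperature": 0,  # deterministic output suits classification tasks
        "messages": [
            {
                "role": "system",
                "content": (
                    "Classify the sentiment of the user's message as "
                    "positive, neutral, or negative. Reply with one word."
                ),
            },
            {"role": "user", "content": user_text},
        ],
    }

payload = build_sentiment_prompt("The agent resolved my issue quickly, thanks!")
```

Switching the `model` field to a GPT-3.5 variant is all a cost-sensitive setup would change; for a one-word classification like this, both models typically perform comparably.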
Ensure Service Continuation with Advanced Error Handling
Cognigy.AI v4.61 also includes several features that bolster stability and uptime for voice applications.
Specifically, a new Call Failover setting enables a backup transfer if a runtime error occurs while executing a Flow. It can be configured in the Voice Gateway Endpoint with parameters such as Transfer Type, Reason, Target, Caller ID, and more.
In addition, the Transfer Refer Error and Transfer Dial Error call events have been added to handle transfer failures caused by issues such as unavailable destinations or network problems. In these cases, you can execute a new Flow or use the Transfer option to initiate a new call or re-refer the current call.
Configuration for these call events is available in both the Voice Gateway Endpoint and the Flow Editor via the Lookup Node.
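To make the error-handling flow concrete, here is a hypothetical sketch of dispatching these call events to recovery actions. The two event names come from this release; the handler structure, Flow name, and backup target are illustrative assumptions, not Cognigy configuration:

```python
# Hypothetical dispatcher for voice-call transfer failures. Event names
# match the release notes; everything else is an illustrative assumption.

FALLBACK_FLOW = "fallback_flow"            # assumed name of a recovery Flow
BACKUP_TARGET = "sip:backup@example.com"   # assumed backup transfer target

def handle_call_event(event: str) -> dict:
    """Return a recovery action for a failed transfer event."""
    handlers = {
        # The destination rejected or never answered the transfer referral
        "Transfer Refer Error": {"action": "execute_flow", "flow": FALLBACK_FLOW},
        # The dial leg failed, e.g. network problems or an unavailable destination
        "Transfer Dial Error": {"action": "transfer", "target": BACKUP_TARGET},
    }
    # Unrecognized events fall through to a generic hang-up
    return handlers.get(event, {"action": "hangup"})
```

The design mirrors the two recovery options described above: run a fresh Flow, or hand the call to an alternative transfer target.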
Other Improvements for Cognigy.AI
Cognigy Virtual Agents
- Added a toggle to the Mute Speech Input Node that controls whether DTMF input collection is muted
- Added the Glossary ID and Formality fields to the Endpoint Real-Time Translation Settings of the DeepL Translate Pro translation provider
- Improved sorting in the resource dropdown lists of the Endpoint editor: Snapshots are now sorted by creation date, newest first, while Flows are sorted alphabetically
- Added the ability to log error messages in functions as Errors instead of Warnings in the system logs
- Updated the Search Extract Output Node's default prompt for better output results
- Added explicit error descriptions for failed ingestion jobs
- Added user-friendly error feedback with hints and suggestions for failed processing tasks for Knowledge AI sources
- Deprecated the Knowledge Search Node; use the Search Only mode of the Search Extract Output Node instead
Cognigy Live Agent
- Added a button to sort conversations by priority, creation date, and activity
- Improved stability and added new actions in the Automation Rules section
Cognigy Voice Gateway
- Added the capability to display call traces for up to 14 days in the Voice Gateway Self-Service Portal
For further information, check out our complete Release Notes here.