Advanced LLM Prompting, Deepgram Aura Integration, and More with Cognigy.AI v4.70

6 min read
Nhu Ho
March 5, 2024

Cognigy.AI v4.70 unveils powerful features for advanced LLM orchestration and humanlike voice conversations at scale.

Advanced LLM Prompting and Operations

Precise Data Extraction and Parsing Using JSON Mode

LLMs can generate complex sentences with nuanced meanings, idiomatic expressions, and creative structures. While this is advantageous for humanlike text generation, it poses a challenge for automated systems and workflows that require consistent, structured data formats. For example, when asked to return a JSON output, the LLM can generate something like this instead of a valid JSON string:

Sure here is the JSON for your provided input:{"sentiment":"1"}
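The problem with such output is that a strict parser rejects it outright. A minimal Python sketch, using the example response above:

```python
import json

# The conversational preamble makes the whole string invalid JSON,
# so downstream transactional logic cannot consume it directly.
llm_output = 'Sure here is the JSON for your provided input:{"sentiment":"1"}'

try:
    json.loads(llm_output)
except json.JSONDecodeError:
    print("invalid JSON")  # prints: invalid JSON
```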

The JSON Mode solves this challenge by facilitating structured LLM data extraction. As such, it improves data interoperability with your backend applications and external services, making it easier to add transactional logic.

When JSON Object is selected as the Response Format, the LLM Prompt Node will return results as a valid JSON object, enabling precise data extraction and reducing the overhead associated with post-processing the LLM’s output.


The JSON Mode can be further combined with Seed Parameters for reproducible outputs and variability control. The generative nature of LLM outputs means they can vary with each prompt, making it difficult for enterprise systems to parse the data consistently. Setting a specific seed ensures that the model generates the same output for the same input prompt every time, optimizing data consistency and precision.
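To make the combination concrete, here is a hedged sketch of an OpenAI-style Chat Completions request with both settings enabled. The model name, prompt, and seed value are illustrative, not Cognigy defaults:

```python
import json

# Illustrative OpenAI-style request payload: JSON Mode plus a fixed seed.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "Classify the sentiment and answer as JSON."},
        {"role": "user", "content": "The onboarding was smooth and fast."},
    ],
    "response_format": {"type": "json_object"},  # JSON Mode
    "seed": 42,  # same prompt + same seed -> reproducible output
}

# With JSON Mode on, the response content is a valid JSON string,
# e.g. '{"sentiment": "1"}', and parses without post-processing:
result = json.loads('{"sentiment": "1"}')
print(result["sentiment"])  # prints: 1
```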

Note that the JSON Mode is only supported by certain models, such as GPT-3.5 Turbo and GPT-4.

Tailor LLM Requests with Custom Options

In addition to the JSON Mode, two new Custom Options have been added to let you tailor GenAI requests with parameters not yet included in the LLM Prompt Node or overwrite existing parameters. 

Custom Model Options allow for precise adjustments to the model settings on a Node level. By tweaking parameters such as temperature, top_k, or presence_penalty, users can influence the creativity, randomness, and focus of the model's outputs. Likewise, you can specify the exact model you want to use for a given scenario. Here is an example of all configurable parameters from OpenAI and Azure OpenAI.
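Conceptually, options set on the Node override the defaults for that request only. A hedged sketch of that merge, with illustrative values (parameter names follow OpenAI-style APIs; `top_k` is used by some other providers):

```python
# Node-level defaults (illustrative values, not Cognigy's actual defaults).
base_options = {"model": "gpt-3.5-turbo", "temperature": 0.7}

# Custom Model Options entered on the LLM Prompt Node.
custom_model_options = {
    "temperature": 0.2,       # less randomness for extraction tasks
    "presence_penalty": 0.5,  # discourage repeating topics
    "model": "gpt-4",         # use a different model for this Node only
}

# Custom options take precedence over the defaults for this request.
request_options = {**base_options, **custom_model_options}
print(request_options["model"])  # prints: gpt-4
```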

Custom Request Options extend the flexibility beyond model behavior to the technical aspects of request handling. Adjusting timeouts, retries, and headers can be crucial for managing service reliability and authentication. For instance, customizing headers allows for secure API calls by including authorization tokens directly within the request, ensuring that interactions with the LLM provider are secure and compliant with organizational policies.
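As an illustration of what such request options control, here is a hedged Python sketch using the standard library. The endpoint URL and token are hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical endpoint and token, for illustration only.
LLM_ENDPOINT = "https://api.example.com/v1/chat/completions"
API_TOKEN = "<your-token>"

req = urllib.request.Request(
    LLM_ENDPOINT,
    data=json.dumps({"messages": []}).encode(),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",  # auth token travels with the call
        "Content-Type": "application/json",
    },
    method="POST",
)

# A timeout bounds how long a hung provider can block the flow, and a
# simple retry loop covers transient failures (sketch only, not executed):
# for attempt in range(3):
#     try:
#         with urllib.request.urlopen(req, timeout=30) as resp:
#             break
#     except TimeoutError:
#         continue
```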


Lifelike Voice Interactions with Deepgram Aura

With Cognigy.AI v4.70, Deepgram Aura is the latest addition to Cognigy’s suite of natively integrated speech synthesis services. Launched in beta earlier this year, Aura is set for its official release shortly. This efficient, natural-sounding text-to-speech (TTS) model is tailored for real-time voicebots and conversational AI, achieving exceptionally low latency through advanced batch processing and real-time TTS technologies.

Revamped Cognigy Insights Design for User-Centricity

Cognigy Insights has recently received a new look that prioritizes user-centricity and simplicity. Thanks to a clean, streamlined design, users can navigate the dashboards more effortlessly and access the information they need more quickly. An example of this is the new horizontal toolbar for global filtering.


Other Improvements for Cognigy.AI

Cognigy Virtual Agents

  • Added the Custom URL field to Aleph Alpha LLM configuration
  • Made package uploads more user-friendly by keeping extracted content when navigating during the upload process
  • Forwarded custom attributes, including queue, languages, skills, and manually defined attributes, into the Genesys inbound message flow. This data can be retrieved by using the Get Participant Data action in the Genesys flow design

Cognigy Live Agent

  • Removed the Live Agent onboarding screen that was previously used for creating accounts

Cognigy Insights

  • Enhanced navigation by opening the Cognigy product in a new tab rather than the current one
  • Changed the shadow style of the Agent selector
  • Redesigned the Transcript Explorer
  • Renamed the Sessions Count tile to Total Sessions
  • Changed the naming of the explorers section in the sidebar from Explorer to Explorers

For further information, check out our complete Release Notes here.