Virtual Users

Leverage AI-driven bots within your workflows

INC Preview Feature

This feature is in PREVIEW and may not yet be available to all customers. Functionality, scope and design may change considerably.

INC AI and Virtual User Preconditions

PRECONDITIONS: AI and Virtual Users

AI use requires your Tenant data (e.g. caller information) to be processed by external services.

✅ Nimbus Tenant Admin rights are required to grant consent to enable the use of AI features. This is done in the Data Privacy Tenant Settings.

💡Both Tenant Admins and OU Admins are able to create various AI Configuration items in the Nimbus Admin Portal. This also includes:

  • Access to Service Distribution Settings, e.g. to test workflows with your configured Virtual Users. 
  • Steer feature visibility via Companion Service Settings (e.g. when activities are supposed to happen unobtrusively in the background).
 
 

Virtual User Licensing

✅Nimbus License Management

  • Contact Center Enterprise Routing service licenses are required to use AI-driven Virtual User functionality. 
    ⮑With the license applied, corresponding Service Features (e.g. workflow activities, configuration tabs) are enabled.
  • Virtual User license requirement: each Virtual User requires a separate license. This license can either be applied directly in the Virtual User configuration steps below or - in bulk - via Admin > Licensing view. 
    ⮑The license enables usage of the “Add Virtual User” Workflow Activity. For additional Virtual User licenses on your service, get in touch with your Customer Success representative.
 
 

Virtual User Bot Configuration

✅Nimbus Configuration dependencies:

  • ALL: Virtual Users require Bots to be configured
    ☝Your bot choice also determines the underlying AI (Large Language Model) that the Nimbus Virtual User integrates with. As each LLM has different advantages, the choice should be based on the intended use case.

    Show Bot (LLM) comparison…

    INC Language model comparison matrix

    Type Primary Use Case Limitations Integration Effort
    Nimbus AI Services – Audio Intent Analyzer Low latency, conversational AI, real-time audio stream. The bot communicates directly with the user. Used to detect the intent of a caller and to return the detected selection for intelligent routing and IVR replacement.
    • Specialized for providing intent analysis. 
    • Uses Nimbus-native AI Model with fixed-purpose instruction set.
    • Limited parameter sets for data storage.

    Low – Easy to use, configuration effort stays within Nimbus. Parametrization is done in workflows as part of the “Virtual User” activity.

     

    🔎 See: Use Case - Setting up a Nimbus Virtual User for Intent Analysis 

    Nimbus AI Services - AI Workflow Low latency, conversational AI, real-time audio stream. Used as speech‑in / speech‑out conversational AI with interactions specific to data requests from external systems (e.g. banking or insurance services, ticketing platforms and CRM systems).
    • Flexible, but depends on external system uptime and responsiveness for seamless customer interaction.
    • Requires purpose-driven prompt knowledge and context for all parameters to handle conversation effectively.

    Medium - Requires knowledge to handle Web Requests and MCP servers. 

    Customers can use Nimbus-native AI models, but must ensure that the web requests and required data are sufficiently covered by the AI prompts.

     

    🔎 See: Use Case - Integrating Nimbus Virtual User with external systems 

    M365 Copilot – Direct Line 3.0 Enterprise voice interactions with Microsoft 365 Copilot, enabling conversation storage, transcription and intent detection of spoken conversations. Grounded in O365 work data.
    • For generalized use (e.g. data lookup, topics handling) and customer intent detection.
    • Higher latency due to Speech-to-Text and Text-to-Speech generation.
    • Subject to Microsoft licensing and service capacity limits.

    Medium – Requires Microsoft 365 and Copilot ecosystem knowledge. Requires configuration within Nimbus, plus integration with Azure Copilot Studio API to handle topics.


    🔎 See: Use Case - Setting up a Nimbus Virtual User using Copilot 

    Azure OpenAI – Audio GPT Realtime1 Low latency, conversational AI, real-time audio stream. Used as speech‑in / speech‑out conversational AI. Allows admins to “bring their own” LLMs and data instances to implement intent analysis.
    • Flexible, but requires customers to bring their own bot configuration.
    • Needs custom client and backend integration, with regional availability determined by Microsoft1.
    • Limited to intent analysis purposes for now; a new bot type will be introduced for a wider range of use cases.

    High – Requires configuration within Nimbus and also requires customers to bring their own LLM and 3rd-party integrations.

     

    🔎 See: Use Case - Setting up a Nimbus Virtual User using OpenAI GPT Realtime 

    Comparison: Types of Bots for Nimbus Virtual Users

    Notes

    1 Due to Microsoft Azure AI Foundry GPT-Realtime availability in limited regions, data processed by Nimbus Virtual User GPT-Realtime integration will temporarily leave your regional boundary: 

    • DE01, DE02, CH01, CH02, UK01, EU01, AU01 computation will be performed primarily in the Sweden Central region.
    • US01 cluster computation will be primarily performed in the US East region.
  • OPTIONAL: Specific models (e.g. Microsoft Copilot) may also need further Nimbus configuration items, described in the Setup section below.

🔎For more details and specific configuration steps, refer to our AI Use Cases category and the specific pages linked above.

Available Virtual User Integration methods in Nimbus

Use Cases

Virtual Users are helpful, rapid-response self-service bots that can meet customer requests without the need for a Nimbus queue. You can configure them once and use them flexibly within your Nimbus Workflows. The benefits are as follows:

  • Responsive: A bot can answer your customers immediately, without them having to wait in queues.
  • Scaling: Virtual Users can be added to any service, either as a fallback or first-responder.
  • Universal: By adjusting each Virtual User's instructions and parameterization, the Virtual User can address and help the customer in many ways.

Scenario comparison - Intent detection

In a classic IVR scenario, the verbal input from a customer needs to be very specific. When editing your workflows, you need to consider all outputs in the Conversation Handling Activities, as the correct exit is only taken when the exact word is said. The Nimbus AI Intent Analyzer, in contrast, understands a natural sentence and recognizes its semantics, which are mapped to a predefined list of intents. The intent with the highest probability is then selected.

Intent detection steers calls through described workflow exits (e.g. orders, billing, issue handling) while also considering timeouts and failure handling. The Virtual User detects urgency and other key characteristics to route the call intelligently. 
💡To showcase, here is an example comparison: 

  Classic IVR scenario Virtual User scenario
Workflow structure
  1. Announcement: “Welcome to our Helpdesk Service”
  2. Static IVR prompt: “If you want to talk to Sales, say ‘Sales’. If you want to speak to Accounting, say ‘Accounting’.”
  3. Customer replies: “Sales”
  1. Add Virtual User which uses an initial message like “Hello, I am your AI assistant. How can I help you today?”
  2. Customer: “I would like to discuss my current product licensing terms, specifically the addition of further features.”
  3. Virtual User answers: “Sure! I can connect you to the Sales department.”
Expected Outcome

The announcement checks the audio for the exact correct wording pattern.

Any misinterpretation (e.g. dialects, audio quality) can lead to a repeat loop or a false redirect.

The AI identifies the customer's intent by scanning for multiple keywords, then decides on the intent to forward the call to the right destination. 

Any missing or potentially ambiguous information can be requested in an active dialogue. This improves the caller's IVR experience, as there is no need to press digits or listen to the same instruction again.
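Conceptually, the intent selection step described above can be sketched as follows. This is an illustrative sketch only – the intent names, probability scores and threshold are assumptions, and Nimbus performs this step internally within its AI services:

```python
# Conceptual sketch only -- Nimbus performs this inside its AI services.
# Intent names, scores, and the threshold are illustrative assumptions.

def select_intent(scores: dict, threshold: float = 0.5) -> str:
    """Pick the intent with the highest probability, or fall back."""
    intent, confidence = max(scores.items(), key=lambda kv: kv[1])
    return intent if confidence >= threshold else "Fallback"

# Example: semantic scores an LLM might assign to a natural sentence like
# "I would like to discuss my current product licensing terms."
scores = {"Sales": 0.87, "Accounting": 0.09, "Support": 0.04}
print(select_intent(scores))  # -> Sales
```

If no intent reaches a sufficient probability, a fallback is chosen instead, which corresponds to the ambiguity handling described above.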

💡Further usage ideas:

Virtual Users - and their underlying AI models - can be extended to a variety of use cases. In addition to basic intent analysis, you may achieve other results such as: 

  • Caller identification against a CRM system or another 3rd party platform / database: The AI automatically requests and provides parameters during natural conversation, then intelligently decides where to route a call next.
  • Caller verification via Web Requests: On request by the Virtual User, external 3rd party systems could send a randomized code to a previously registered number or email address associated with the customer.
  • Support ticket system interaction via MCP: Customers can directly check on their ticket status by mentioning a previously known ID. The Virtual User can directly look up the status from a 3rd party system and – in case of missing entries - offer to create a new ticket instead.
  • Handling Frequently Asked Questions (FAQ): Based on a provided Knowledge Base (e.g. PDFs or websites), the Virtual User can answer common questions.
  • Outbound conversations & campaigns: Conducting automated campaigns/customer surveys using a Virtual User instead of a human Agent. This can be useful to retrieve feedback, e.g. after a Customer has given consent.
 

Setup

General Configuration

Field Description
Name The name of the Virtual User, as it will appear in other selection dialogues of Nimbus (e.g. within Workflows).
Organization Unit Determines where the Bot will be available for selection.
Description

A Nimbus-internal description, e.g. “Support Bot for first response IVR”.

💡This field is just for identification within Nimbus and will have no impact on the Bot behavior.

Bot Configuration

Virtual Users in Nimbus are defined under Configuration > Virtual Users within your Nimbus Administration. By using the Organization Units approach, a bot configuration can be used flexibly across multiple services if necessary. 

Field Description
Bot

✅ Precondition: Requires a Bot to be configured before becoming available for selection.
💡Determines which underlying API and AI large language model (LLM) is used to interact with Customers and formulate a response. Note that bots leverage external features and APIs with costs outside of your Nimbus subscription.2

⮑ Note: Depending on your bot choice, configurable fields below will change.

Which bot should I select?

🔎 The Bot (LLM) comparison matrix shown in the “Virtual User Bot Configuration” section above applies here as well.
 
Bot Response template

✅ Preconditions: 

  • Copilot Direct Line 3.0 was selected as a Bot
  • Requires a (previously created) Bot Response Template to be applied in this menu.

💡The Bot Response Template determines the JSON format that is received from the bot, allowing you to map answers to Parameters in Nimbus for further processing within workflows.
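The mapping a Bot Response Template performs can be pictured as follows. This is a hypothetical sketch – the actual JSON shape is defined by your own template and bot, and the field and parameter names below are assumptions:

```python
import json

# Hypothetical bot reply -- the real JSON shape is defined by your
# Bot Response Template, not by Nimbus itself.
bot_reply = json.loads('{"intent": "Billing", "sentiment": "neutral"}')

# Map bot answer fields to (assumed) Nimbus parameter names.
field_map = {"intent": "CallerIntent", "sentiment": "CallerSentiment"}
parameters = {field_map[k]: v for k, v in bot_reply.items() if k in field_map}
print(parameters)  # {'CallerIntent': 'Billing', 'CallerSentiment': 'neutral'}
```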

Language

💡Defines the spoken LANGUAGE of your bot, using direct Audio-to-Audio speech models.
✅Precondition: Visible for the following Bot selections:

  • “Nimbus AI Services - Audio Intent Analyzer” 
  • “Nimbus AI Services - AI Workflow”
  • “OpenAI GPT Realtime” 

💡Further languages will be made available in the near future.

User Voice

💡Defines the TONE of voice for your bot. The selection is automatically adjusted to be compatible with the Language selected above.
✅Precondition: Visible for the following Bot selections:

  • “Nimbus AI Services - Audio Intent Analyzer” 
  • “Nimbus AI Services - AI Workflow”
  • “OpenAI GPT Realtime” 

💡Further voices will be made available in the near future.

System Instruction

💡Allows you to define WHO the bot impersonates. You can add Nimbus Custom Parameters or System Fields to be part of this instruction set. 
✅Precondition: Visible for the following Bot selections:

  • “Nimbus AI Services - Audio Intent Analyzer” 
  • “Nimbus AI Services - AI Workflow”
  • “OpenAI GPT Realtime” 

💡An instruction example could be as follows:

Act as a helpful customer service assistant. Be concise, friendly, direct, and prioritize solving the customer's issue. Tell them you are an AI assistant and you will identify their needs and route them to the appropriate department. Do not try to solve technical issues yourself; your job is to route the user.

💡As your bot capabilities (and possible failure cases) grow, you can of course adjust the instruction complexity to meet your needs. Here is a more specific instruction example: 

You are a helpful assistant. Your role is to help customers manage work items (service desk tickets) through connected MCP servers.
Use available tools to create issues, update key fields, add comments, and fetch work item details or recent changes.
When a supported request is identified, choose the correct MCP tool, collect the missing details, and execute it.
You must collect and validate information in the correct order:
  Ensure no information outside of the field contents is disclosed to the customer.
  Always validate each piece of information before proceeding to the next step.
  Be polite, professional, and ensure data privacy by handling sensitive information carefully.

☝Using parameters in your instructions

Note that referencing System Fields or your own custom Parameters may rely on a working Power Automate Connector. Also provide fallback instructions for the AI to handle cases where the parameters are not available.

For example, once a customer calls a specific service, Trigger Events start the Power Automate flow, which then connects to external systems (e.g. a CRM) to retrieve customer details (e.g. based on their PSTN phone number) via Flow Actions. If your Virtual User should already verify that customer, it needs to be instructed only to engage with the customer when these details were clearly retrievable.
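The fallback logic described above can be sketched as follows. The placeholder syntax follows the Nimbus `$(…)` style shown elsewhere on this page, but the rendering logic and the fallback wording are illustrative assumptions:

```python
# Sketch: use a personalized instruction only when required customer details
# were retrievable; otherwise fall back to a generic instruction.
# The fallback wording and parameter names are illustrative.

def render_instruction(template: str, params: dict) -> str:
    for key, value in params.items():
        template = template.replace(f"$({key})", value)
    if "$(" in template:  # unresolved placeholder -> use fallback wording
        return "Act as a helpful assistant. Ask the caller for their name."
    return template

template = "Greet $(Customer.DisplayName) and verify their account."
print(render_instruction(template, {"Customer.DisplayName": "Jane Doe"}))
print(render_instruction(template, {}))  # lookup returned nothing -> fallback
```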

 
Initial Message

💡Allows you to define HOW the conversation starts. It is the first opening message when your Virtual User joins a conversation.

Note that this field is optional

Depending on your chosen Bot configuration, the underlying AI may already engage with the customer automatically. However, it helps to add at least a short initial message to standardize the customer approach.

Hi! I'm here to retrieve and update service tasks. How can I help?

💡Optionally – e.g. in the case when using Microsoft Copilot as your bot - you can use this message to go directly into specific Conversation Topics1, e.g. by adding Nimbus Custom Parameters or System Fields right with the first message.


Parameters in fields

💡A possible example of an Initial Message could be to look up your customer details2 and have the Virtual User address them personally:

Hello, $(Customer.DisplayName). How can I help you today?

☝Of course, when using parameters as your initial message, the same restrictions mentioned above apply, e.g. no or only default parameters are available2.


1 🔎Also see: MSFT Learn Documentation | Use system topics to learn more.

2 Assuming that the customer can indeed be identified by their CallerID, PSTN or other means, e.g. using the Power Automate Connector and your database / CRM of choice.

 

Extension Tools

💡These settings allow you to extend the Virtual User with external system capabilities.

✅Precondition: Visible when “Nimbus AI Services - AI Workflow” Bot was selected as part of the previous “Bot Configuration”.

 

Web Requests

Field Description
Flow Description

💡Allows you to define HOW the bot must execute steps.

This field tells the Virtual User how to execute the conversation and tool usage, step by step. It should explicitly define: 

  • How the Virtual User shall use the Web Requests during a session. Web Requests can be identified via their name.
  • The order of execution (e.g. FIRST validate bank name and IBAN, THEN retrieve the user name, ONLY THEN answer the inquiry).
  • Mandatory and optional steps with clear signal words (e.g. “these steps must be successful” | “this step can be optional”).

💡Unlike MCP's more capability-based flow, Web Requests should have a clear order sequence, as the retrieved information is often co-dependent, specific and granular. An example Web Request flow description could therefore look as follows:

This workflow is designed to handle complex banking inquiries with multiple steps and dependencies.
The assistant will guide the customer through a series of questions to gather necessary information,
validate it, and provide appropriate responses based on the detected intents.
The workflow should work as follows:
    1. Ask for the user's bank name and validate it.
    2. Ask for the last 4 digits of the IBAN and validate it.
    3. Ask for the user's name and validate it.
    4. Only when all the information is collected and validated should the assistant
       proceed to answer the user's inquiry based on the detected intent.
Web Requests 

✅Precondition: Mutually exclusive with MCP Server

Allows you to define previously configured Web Requests. Use this when …

  • … you want transparent definition and orchestration in Nimbus.
  • … the validation logic must be auditable directly in Nimbus.
  • … your called APIs are time-sensitive and tightly sequenced.

Each Web Request:

  • Is called directly from within Nimbus.
  • Uses HTTP methods (POST, GET, PUT, PATCH, DELETE).
  • Needs Headers / Authentication.
  • Has a Name field for reference and …
  • … a Description field for AI handling instructions.

💡Learnings

  • Web Request Name / Description fields are used by the AI for reference during the Flow Description. Not providing a description and clear naming may increase the risk that the AI doesn't use the web request correctly.
  • In the → Flow Description, you should clearly define how Web Requests are used in sequence. The model will follow the order and logic you specify in the flow description, not the order in which the Web Requests were added to the Nimbus UI.

💡Following the banking example above, you can differentiate your Web Requests with specific “Description” field contents as follows:

WEB REQUEST A (IBAN): For IBAN: “international bank account number (IBAN)”
WEB REQUEST B (Account): For Account Number: “bank-specific account number”
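The strict ordering that a Web Request flow description enforces can be sketched as a sequence of named validation steps, where each step must succeed before the next one runs. The step names and validators below are illustrative assumptions following the banking example:

```python
# Sketch of the strict ordering a Web Request flow description enforces:
# validate bank name, THEN IBAN digits, THEN user name. Each step must
# succeed before the next runs. Step names / validators are illustrative.

def run_sequence(steps, inputs):
    """Run named validation steps in order; stop at the first failure."""
    for name, validate in steps:
        if not validate(inputs.get(name, "")):
            return f"failed at: {name}"
    return "all validated"

steps = [
    ("bank_name", lambda v: len(v) > 0),
    ("iban_last4", lambda v: v.isdigit() and len(v) == 4),
    ("user_name", lambda v: len(v) > 0),
]
print(run_sequence(steps, {"bank_name": "Example Bank",
                           "iban_last4": "1234",
                           "user_name": "Jane"}))   # all validated
print(run_sequence(steps, {"bank_name": "Example Bank",
                           "iban_last4": "12x4",
                           "user_name": "Jane"}))   # failed at: iban_last4
```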
 
 
 

MCP Server

Field Description
Flow Description

💡Allows to define WHICH TOOLS the bot must use to achieve a goal.

Each MCP server can expose multiple (variable) tools, and the AI decides which ones to call. The flow description should therefore define:

  • How the Virtual User shall use and understand the MCP extension tools.
  • When to best use which tool, based on user inputs and (missing) context.
  • What results to expect from using the tools.

💡Unlike Web Requests' mandatory sequencing, MCP flow descriptions specify the flow conditionally – based on intent and available tools – rather than per MCP server entry. An example MCP flow description could therefore look as follows:

- Understand whether the user wants to create a new service work item / ticket, review recent changes to their items, add a comment to them, or update anything in their description.
- Prefer the connected MCP tools for every supported work-item action.
- Select the MCP tool that best matches the user's request, collect any missing required details, and confirm the intended action.
- Execute the MCP tool only after the request is fully specified.
- If no MCP tool can complete the action, explain the limitation and suggest the closest supported alternative.
- After a successful MCP call, summarize exactly what was created, fetched, or changed.
MCP Server

✅Precondition: Mutually exclusive with Web Requests

Allows you to connect to a running MCP server1, which offers more flexibility. Use this when …

  • …you already operate or connect to existing MCP servers.
  • …you want tools and capabilities to evolve independently of Nimbus, but constrain their usage.
  • …you want long‑term flexibility with less UI configuration.

An MCP server:

  • Uses the Model Context Protocol1
  • Runs externally as a server and is consumed by Nimbus, with a URL like: 
    https://<yourdomain.com>/mcp/
  • Uses Headers and API keys for authentication
  • Exposes its available tools (discovered dynamically)

1 🔎MCP Server Notes

From https://modelcontextprotocol.info/docs/ 

MCP is a standardized protocol designed to enhance the interaction between Large Language Models (LLMs) and applications by providing structured context management.

💡Good to know: Nimbus Virtual User is compatible with MCP servers designed for OpenAI GPT‑Realtime, including remote MCP servers exposed over HTTP. More information can be found in the OpenAI MCP connector guide and Realtime API documentation.
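The dynamic tool discovery mentioned above is done via a JSON-RPC 2.0 request defined by the Model Context Protocol (`tools/list`). The sketch below only constructs that request payload; the endpoint URL and authentication header in the comment are placeholders:

```python
import json

# Sketch: the JSON-RPC 2.0 request an MCP client sends to discover tools.
# "tools/list" is defined by the Model Context Protocol specification.

def tools_list_request(request_id: int = 1) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

payload = tools_list_request()
print(payload)
# A real client would POST this to e.g. https://<yourdomain.com>/mcp/
# (with an authentication header), then read the "tools" array from
# the JSON-RPC result to learn which capabilities are available.
```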

 

💡Learnings

  • In the flow description, you should clearly define how MCP tools are to be used. The model will follow the order and logic you specify in the flow description, not the order in which the MCP servers were added.

💡Following the banking example above, you can specify an MCP server “Description” field as follows, narrowing down the tools / capabilities:

MCP server for interacting with service desk tickets. Enables creating bugs, updating fields, adding comments, and retrieving work item data.
 
 
 

🔎 Good to know: Failure Case behavior 

For both MCP and Web Requests, your flow description doesn't need to handle failure cases manually. This is done in the “Add Virtual User” Conversation Handling Activity.

Show more details…

Add Virtual User


INC Preview Feature

This feature is in PREVIEW and may not yet be available to all customers. Functionality, scope and design may change considerably.

Description

Adds a preconfigured Virtual User (AI Bot) to engage with the Customer. 

✅Preconditions for this activity must be met. Refer to the Virtual User page for more details.


☝Note: Missing “Virtual User” licenses will affect productive workflows

During an incoming call – and with no license applied to a Virtual User in your workflow – the “Failed” exit is taken automatically. The exit is also taken on any technical errors or service outages.

→  To handle this case, a fallback Queue / Transfer or Announcement activity is recommended after the “Failed” exit.

 
Required Predecessor

Accept

Known Limitation: While all Virtual Users are planned to be fully supported by Outbound Call Workflows, a prior “Announcement” workflow activity is currently required before “Add Virtual User” to give the AI sufficient spin-up time.

 
License
Advanced Routing Enterprise Routing Contact Center
Modalities Audio / Video Instant Messaging Email External Task

Common Properties

Configurable Properties Description
Virtual User

Virtual User preconditions must be met and a License applied for your Virtual User to be able to process customer inputs.


💡The Virtual User pulldown directly affects the behavior and configurable properties within the Virtual User activity. 

Virtual User activity properties with Copilot Direct Line as underlying model

 

Virtual User activity properties with “AI Workflow” as underlying model. This example showcases a warning shown when the Virtual User is unlicensed.

 

Virtual User activity properties with “Audio Intent Analyzer” as underlying model
Table: Example Virtual User Activities with various properties, based on the selected “Virtual User” configuration

Max Input Timeout               
hh:mm:ss (default 00:01:00)

Maximum wait time for any Customer Response

  • Min: 00:00:05
  • Max: 00:10:00

💡 If no interaction occurred between Customer and Bot, the "Idle Timeout" exit node is used. 
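A quick way to reason about the documented bounds (min 00:00:05, max 00:10:00, default 00:01:00) is to convert the hh:mm:ss value to seconds and range-check it. The parsing logic below is illustrative, not part of Nimbus:

```python
# Sketch: range-check a Max Input Timeout value against the documented
# bounds (min 00:00:05, max 00:10:00). Parsing logic is illustrative.

def timeout_seconds(hhmmss: str) -> int:
    h, m, s = (int(part) for part in hhmmss.split(":"))
    return h * 3600 + m * 60 + s

def is_valid_timeout(hhmmss: str) -> bool:
    return 5 <= timeout_seconds(hhmmss) <= 600

print(is_valid_timeout("00:01:00"))  # True  (the default)
print(is_valid_timeout("00:00:02"))  # False (below the 5-second minimum)
```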

Fallback Parameter

✅ Applies for the “Intent Analyzer” and “AI Workflow” Virtual Users. 


Used to store a fallback response by the Virtual User into the specified Parameter

🔎 Purpose: If no other exit could be determined during the conversation, the Virtual User stores any identified “unhandled” outcome in this parameter. Note that this reply can vary in length, so below is just an example:

Repeated errors occurred while validating the Policy ID number '1234'. Unable to proceed with the request to verify the Customer Policy.

💡Usage as context: While it is possible to display this parameter in the Nimbus UI (e.g. via Extensions Service Settings > My Sessions), this data is meant primarily for further processing. Examples could be evaluation of the parameter in automated Flows, transfers to fallback services, or storage of customer requests that cannot be covered by the AI (and underlying Bot).

 
Text to Speech

✅ Applies for the “Copilot” Virtual User


Uses Microsoft Text-to-Speech routines to convert the bot response into audio for the Customer.

💡Note that the configuration of your Virtual User (e.g. “Topics” within Copilot Studio) determines how long these audio replies can get. We advise giving the bot instructions to limit itself to a maximum of 2-3 sentences.

Common Default Exits

✅ Applies for all Virtual User types. 


The activity has the following exits:

  • Fallback - taken when no other condition mentioned below applies.
    ⮑ Updates the → “Fallback Parameter” above, then proceeds with the exit.
  • Failed - Taken when: 
    • … the activity is disabled,
    • … no license is applied to the Virtual User in the Configuration.
    • … connections to external systems, e.g. via Web Requests or MCP server connections have failed.
    • … other technical issues occurred, e.g. the connection to the bot itself failed, or other error response codes.
  • Idle Timeout - Taken if during the Max Input Timeout no interaction with the Customer occurred.
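The exit precedence listed above can be sketched as a simple decision function. The state flags are illustrative assumptions; the exit names match the documentation:

```python
# Sketch of the documented exit precedence for the "Add Virtual User"
# activity: Failed (disabled / unlicensed / technical error) and Idle
# Timeout take priority; Fallback applies when nothing else matched.
# The state flags used here are illustrative.

def pick_exit(state: dict) -> str:
    if state.get("disabled") or not state.get("licensed", True):
        return "Failed"
    if state.get("technical_error"):  # e.g. bot / Web Request / MCP failure
        return "Failed"
    if state.get("idle_timeout"):     # Max Input Timeout elapsed
        return "Idle Timeout"
    return "Fallback"

print(pick_exit({"licensed": False}))     # Failed
print(pick_exit({"idle_timeout": True}))  # Idle Timeout
print(pick_exit({}))                      # Fallback
```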
Specific Exits 
(AI Workflow)

✅ Applies for the "AI Workflow” Virtual User. 


The activity has the following exits:

  • Success - taken when
    • All topics and requests by the customer could be handled by the Virtual User.
      AND
    • All connections to external systems, e.g. via Web Requests or MCP server were successful.
  • Failed - Taken if either a Web Request OR an MCP server connection fails, with the optional → “Fallback Parameter” storing the response generated by the AI.
  • Idle Timeout - Same as “Common” default exit behavior
Custom Exits

✅ Applies for the “Intent Analyzer” and “Copilot” Virtual Users


You can define up to 10 custom-named exits to handle cases. 

Known issue: Custom exit names should be written as one word - e.g. "invoice_number" - and avoid special characters; otherwise they may not be correctly detected by the corresponding AI model. 
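A simple check for the naming rule above can be expressed as a regular expression that allows only letters, digits and underscores. This validation is illustrative, not something Nimbus exposes:

```python
import re

# Sketch: validate custom exit names against the documented known issue --
# one word, no special characters (underscores allowed, e.g. "invoice_number").
EXIT_NAME = re.compile(r"^[A-Za-z0-9_]+$")

def is_valid_exit_name(name: str) -> bool:
    return bool(EXIT_NAME.fullmatch(name))

print(is_valid_exit_name("invoice_number"))  # True
print(is_valid_exit_name("invoice number"))  # False (contains a space)
```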

 

✅ Each Exit behaves based on the Bot configuration (AI-model) picked for your Virtual User:

For Copilot

Each custom exit consists of:

Caller Sentiment and Intent: Bot activity may also update System Fields and Parameters > CallData:

Name Type Lifecycle Placeholder UIName
CallerSentiment String Customer Session $(Caller.Sentiment) Customer Sentiment
CallerIntent String Customer Session $(Caller.Intent) Customer Intent

☝Potentially blank fields: If the underlying Bot behind your Virtual User activity does not support Sentiment / Intent analysis – or is not configured for it – these fields will be left blank during a Session.

 
 
 

For Nimbus Audio Intent Analyzer / Audio GPT Realtime

Each custom exit consists of:

  • A Name for the exit.
  • Context, helping the Virtual User identify when to take this exit.
  • OPTIONALLY: Up to 2 Parameters for storing Customer-provided data. 
    💡The AI will determine which customer data to fill into these parameters, based on the “Virtual User Context” specified in the Parameters config.
 
 
 
 

Audio / Video

INC WF Properties remark

No specific behaviors for this modality. “Common Properties” apply.


Modalities

Field Description
Modalities

Enables the bot to handle and react to Audio/Video (calls).
⮑ Depending on your chosen bot, checking this option may require a Speech Recognizer to be configured → See below.

💡Further modalities will be made available in the near future.

Speech Recognizer

✅Preconditions: 

  • Copilot Direct Line 3.0 was selected as a Bot
  • Requires Speech Recognizers (used for speech-to-text transcription) to be configured for selection in this menu.
    💡 Speech Recognizers are used to parse the language of the calling customer into text for the bot to process. They can be configured as multilingual-capable, but are more effective when set to a specific language.

Follow-up Steps

Licenses

Field Description
Licenses ✅In order to take action within a workflow, every individual Virtual User needs a license applied. You may freely (re-)assign licenses from your License Management. If you require additional licenses, get in touch with Luware Customer Success.

✅Workflow Follow-Up: 

  1. Once done with your Virtual User configuration, you need to update your Workflows.
  2. To take effect, Virtual Users must be added as an “Add Virtual User” activity to your Audio/Video workflows.
 

Limitations

INC AI and Virtual User Limitations

General note on AI-driven interactions

AI-driven replies are not deterministic and depend highly on what the Customer is saying. For example, if the Customer says “I have trouble with my internet”, there is no guarantee that the Bot will associate this with your “Router, Modem” workflow routing exit, unless specifically handled in your Virtual User integration. In this specific example, AI instructions should also cover alternative wordings like “Router, Internet”, to be handled in topics accordingly.
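The wording problem above can be sketched as a simple synonym lookup. This is illustrative only: the names below (`route_topic`, `TOPIC_SYNONYMS`) are hypothetical, and in practice the matching is performed by the AI Model from your instructions, not by keyword lookup:

```python
# Illustrative only: cover alternative wordings for the same routing exit.
TOPIC_SYNONYMS = {
    "router_support": {"router", "modem", "internet", "wifi"},
}

def route_topic(utterance: str):
    """Return the first topic whose synonyms appear in the utterance."""
    words = set(utterance.lower().split())
    for topic, synonyms in TOPIC_SYNONYMS.items():
        if words & synonyms:
            return topic
    return None

print(route_topic("I have trouble with my internet"))  # router_support
```

The point is the same as in the prose: if only "router" and "modem" were listed, the utterance above would not match.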

🔎Refer to our Best Practices - Virtual Users in Nimbus for some considerations and risks when using Virtual Users instead of Human interaction.

 

General Virtual User limitations

The current Nimbus implementation with AI-driven bots is subject to the following limitations: 

  • Supported Modalities: Virtual Users (Bots) are currently available for Audio/Video modality tasks only.
  • Virtual User Reporting: Sessions involving Virtual Users are not reflected as a dedicated User Session. Virtual User session reporting is planned for a later point this year.
  • Outbound Call via Workflow: Virtual Users are supported in Outbound Calls with Workflows. While all Virtual Users are planned to be fully supported by Outbound Workflows, a prior “Announcement” workflow element is required before “Add Virtual User” to give the AI sufficient spin-up time. Nimbus preview program users can already test their scenarios. As testing and development on Bot compatibility is still ongoing, not all features may work as intended yet.
 
 

Microsoft Copilot Limitations

  • Expect processing delays: Processing AI answers takes a few seconds for voice-to-text transcription, followed by AI processing and a transcription back into a voiced response. Luware is trying to minimize the delay on the Nimbus call infrastructure, but the dependency on external APIs will always incur a delay. The Customer will hear silence during this processing; there is no audio feedback or silence detection.
  • Ensure you have no ambiguity in your topics. For instance, the word “Service” may be too generic if you want to transfer to different services. Rather use healthcare|medical|emergency as identifiers or use more complex Regular Expressions to identify replies.
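The alternation pattern mentioned above can be tested as a Regular Expression. A minimal sketch (the pattern is taken from the example; the helper name is hypothetical):

```python
import re

# Disambiguate topic identifiers with an alternation pattern
# instead of a single generic word like "Service".
topic_pattern = re.compile(r"\b(healthcare|medical|emergency)\b", re.IGNORECASE)

def matches_medical_topic(reply: str) -> bool:
    """Return True if the customer reply contains one of the identifiers."""
    return topic_pattern.search(reply) is not None

print(matches_medical_topic("I need medical assistance"))  # True
print(matches_medical_topic("Please cancel my contract"))  # False
```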
 
 

Nimbus AI Services - Audio Intent Analyzer

🔎By design (not a limitation or reportable issue): 

  • Fallback handling: If a workflow uses exits with multiple parameters, e.g. to collect customer name and invoice number, and only one parameter can be retrieved, the fallback exit is taken.
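The fallback rule described above can be sketched as follows. This is not Nimbus code; the function and exit names are illustrative assumptions that only model the documented behavior (all parameters retrieved → custom exit, otherwise → fallback):

```python
# Illustrative sketch of the documented fallback rule (not Nimbus code).
def choose_exit(expected_params, retrieved):
    """Take the custom exit only if every expected parameter was retrieved."""
    if all(retrieved.get(p) for p in expected_params):
        return "custom_exit"
    return "fallback"

# Only the customer name could be retrieved, so the fallback exit is taken.
print(choose_exit(["customer_name", "invoice_number"],
                  {"customer_name": "Ada"}))  # fallback
```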
 
 

Nimbus AI Services - AI Workflow

☝First Web Requests implementation - planned to be improved: 

  • The “Description” field in the Web Request itself is used to explain to the AI what the web request does. 
  • The Web Request itself can be identified by name, using the "Flow Description" field located in the Virtual Users > Extensions > Web Request section.

🔎By design (not a limitation or reportable issue):

  • Fallback handling: If a workflow uses exits with multiple parameters, e.g. to collect customer name and invoice number, and only one parameter can be retrieved, the fallback exit is taken.
  • Failed Web Request / MCP fallback: If either a Web Request fails or a connection to an MCP server cannot be established, the workflow continues with the Fallback exit accordingly. Any (optionally user-specified) “Fallback” parameters will be updated with the response generated by the AI.
 
 

Azure OpenAI - GPT Realtime limitations

☝ This feature is not yet ready for productive use and only enabled for selected customers. We will make further adjustments and update Use Case - Setting up a Nimbus Virtual User using OpenAI GPT Realtime accordingly in an upcoming Nimbus release.

 
 
 
