Best Practices - Virtual Users in Nimbus

Considerations before using AI bots as Virtual Users in Nimbus

The challenge when building Virtual Users in Nimbus isn’t just about building intelligent agents—it’s about building the right kind of intelligence. In the pursuit of efficient, scalable, and secure customer service experiences, simplicity and focus often outperform complexity.

This article explores best practices for creating Virtual Users in Nimbus, guided by three key principles: simplicity, modularity and workflow-based design. Rather than developing large, all-knowing bots, we advocate for creating small, purpose-built agents that handle clearly defined tasks. This "separation of concerns" makes systems more maintainable, reduces the surface area for security risks, and ensures better performance.

By thinking in terms of workflows, starting from user intent, and prioritizing interactions based on urgency and context, Virtual Users can more accurately triage requests, automate low-value interactions, and escalate high-value ones to the right human agents. Importantly, these agents should act as advisors, not decision-makers—providing data, context, and next steps without replacing human judgment.

Glossary

Virtual User: A Virtual User acts as the first AI-driven service response between a Customer and human Service Agents, handling common tasks such as IVR routing or data lookups.
Generative AI: Refers to artificial intelligence systems that can create new content such as text, images, audio, or code by learning patterns from existing data. These models, like GPT or DALL·E, generate outputs that resemble human-created content.
Non-Generative AI: Focuses on analyzing, classifying, or predicting based on existing data without creating new content. Examples include recommendation systems, fraud detection, and image classification models.

Benefits and Challenges

Using Bots as Virtual Users in your workflows offers benefits, but also brings challenges to consider.

Benefits

  • Immediate response, without the need for long queues
  • Automated low-value / high-frequency interactions
  • Freeing up your human Experts for challenging tasks
  • Added context for interactions before transferring / queuing
  • Effortless scaling into high-demand scenarios by simply deploying more / specialized services

Challenges

  • Dependency on AI capabilities / shortcomings
  • Continuous maintenance / AI agent optimization effort
  • Reliance on external infrastructure: delays / outages
  • Fluctuating running costs depending on external vendors
  • Customer acceptance heavily relies on personal experience

Detail Considerations

Data Privacy: Conversations

☝Keep in mind: The conversation between a Customer and a Virtual User could contain private or sensitive data.

Output from Bots can be mapped and stored into Parameters, which appear in the UI in the Sessions List and My Sessions. When enabled for Context Handover, these Parameters may be handed over to and stored for multiple Nimbus services. Nimbus deletes Session parameter data after a retention time, as described in our whitepapers in the Documents section. Any permanent storage of such data is up to the Customer / Administrator to configure.

💡Good to know: Nimbus by itself will not store the exchanged Conversation messages or Parameter data in any logfiles.

 
 

Data Security: Permissions

☝Keep in mind: A bot can be tricked into divulging customer data when asked the right way.

While it might be tempting to configure an AI to directly access large datasets in order to retrieve efficient and comprehensive answers, Customer inquiries can be equally diverse and thus hard to anticipate. Recent AI news has already shown that even the most sophisticated bots can be “convinced” to divulge sensitive information. It is therefore best to limit the permissions and available data as much as possible to minimize any potential harm.

💡Hint: By using the Nimbus Power Automate Connector you can retrieve data (e.g. from your company repositories, CRM or other external systems) and provide it as Parameters for the Virtual User, e.g. to access data already within the first message or within specific topics. This ensures that the information is correct, scope-limited, and up-to-date at the point of the Customer session. It also keeps the privileges to access such data under Administrator control and “funnels” potential data access requirements through a single point for easier administration.
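This “funnel” pattern can be sketched in a few lines of Python. Note that `crm_lookup`, `build_session_parameters`, and the field names are hypothetical placeholders standing in for your own Power Automate flow and CRM, not Nimbus APIs:

```python
def build_session_parameters(phone_number, crm_lookup):
    """Scope what the Virtual User may see to a fixed allow-list.

    `crm_lookup` stands in for your Power Automate / CRM query;
    only explicitly whitelisted fields are handed over as Parameters.
    """
    ALLOWED_FIELDS = {"CustomerName", "OpenTicketId", "PreferredLanguage"}
    record = crm_lookup(phone_number) or {}
    # Everything outside the "need to know" set is dropped here.
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Stubbed lookup for illustration: the credit card never reaches the bot.
stub = lambda number: {"CustomerName": "Ada", "CreditCard": "1234", "OpenTicketId": "T-42"}
parameters = build_session_parameters("+41790000000", stub)
```

Because every lookup passes through one function with one allow-list, data access stays auditable at a single point, mirroring the “funnel” described above.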

 
 

3rd-Party: Running Costs


Disclaimer: Support for Azure Billing & Cognitive Services 

☝ The usage of Microsoft Azure AI Services for Speech1 and AI will cause additional costs outside your Nimbus subscription, which are exclusively determined by Microsoft.

Please note: 

  • Nimbus Support does not cover pricing discussions or make recommendations on Microsoft Azure Services. → Please inform yourself via the references below on pricing models.
  • Nimbus Support cannot provide any technical help, insights or solutions for Azure Cognitive Services related technical problems.

1 Azure Speech Services: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/speech-services/

2 Generative AI Services (Copilot Studio):

 

While the goal of the Virtual User is to drive down costs and improve ease of maintenance, it is important to think thoroughly about what kind of AI you really need. The following guidelines are intended to spark reflection on dimensions that are directly linked to license or consumption costs. Some points to consider are:

  • AI vs. Automation
  • Generative vs. Non-generative AI
  • Calling external models vs. using internal Copilot AI features (prompts, generative AI)

💡Keep in mind: Everything that a Virtual User (and its underlying Bots) does is tied to costs, a little like with employees. Costs for using AI can be generated on several layers. It is therefore key to know where the consumption is created and what the billing rates for this consumption are. Make sure to check the pricing and rates of your external AI suppliers regularly, as these are also subject to change.

 
 

Designing your Virtual Users

When building Bots as the “engine” behind your Virtual User, it's best to think of them not just as pieces of software, but as employees. Each bot should therefore have:

  • A clear role
  • A clear set of tasks and responsibilities, plus the skills required to fulfil those responsibilities, but also …
  • … a defined level of autonomy with limited permissions on a “need to know” basis – just like a person on a team.

This simple mental model helps in keeping your bot design maintainable, controllable, and purpose-driven.

💡Make it your own: We wrote the guidelines below based on “Microsoft Copilot Studio”. Images and some terms like “topics” or “tools” may be very product-specific. However, you can consider these guidelines and thoughts as generic – and of course apply them to other AI tools as you see fit.

 

Start with the role

Before starting to configure your agent, ask yourself: What job will this Virtual User do?

  • Is it answering questions? What kind of questions?
  • Will the bot need to answer with specific data? Try to list the data points needed.
  • Can you anticipate and classify expected Customer input in natural language? What other phrasings and synonyms could be used to ask for the same thing?
  • Are you supporting customers with tickets?
  • Will escalation or transfers be required?

Break it down into tasks

Once you have defined the role of your bot, map out the sequence of tasks the Virtual User needs to perform. Often, a task can be broken down into a series of repeatable steps, each triggered by a different event (a user input, a time-based schedule). Processes are often non-linear, with conditional branches, exceptions, or parallel actions. The Virtual User should be prepared for these variations while staying within clearly defined bounds.
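Such a non-linear sequence can be modeled as a small, auditable step table, where each step decides the next one explicitly. The step names and branching conditions below are purely illustrative, not Nimbus or Copilot APIs:

```python
# Hypothetical task map for a callback flow: each step returns the name
# of the next step, so conditional branches stay explicit and testable.
def triage(ctx):
    return "lookup" if ctx.get("intent") == "callback_status" else "schedule"

def lookup(ctx):
    ctx["found"] = ctx.get("scheduled", False)
    return "inform" if ctx["found"] else "schedule"

def schedule(ctx):
    ctx["scheduled"] = True
    return "inform"

def inform(ctx):
    return None  # end of the flow

STEPS = {"triage": triage, "lookup": lookup, "schedule": schedule, "inform": inform}

def run(ctx, start="triage"):
    """Walk the step table from `start`, returning the visited path."""
    step, trace = start, []
    while step:
        trace.append(step)
        step = STEPS[step](ctx)
    return trace
```

Keeping the flow as data (a dictionary of named steps) makes each branch individually testable, which is exactly the property you want before handing a task to a Virtual User.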

Below are some examples of how these tasks can be achieved in Luware Nimbus, Microsoft Flow and Copilot. For more inspiration, technical details and step-by-step instructions, head to our AI Use Case Category.

 

Example A: Callback Assistant


An Agent called “Callback Assistant” recognizes the Customer “intent” to request a call back. It can…

  • … check if a call has already been scheduled for a phone number and inform the Customer about the details. If not it can also …
  • … start an immediate Call On Behalf in Nimbus, or…
  • … schedule an Outbound Call in the future.

🔎These actions are designed as “tools” in our AI-enabled Copilot. For each tool, we define specifically when to use it, depending on the caller's message:

“Call me back as soon as you can!” 

⮑ Copilot should recognize that the caller wants an immediate call back and run the tool accordingly.

“Can you call me back?”

⮑ Copilot should ask when it needs to schedule the callback, recognizing various inputs like “tomorrow morning” or "tonight at 6pm" or “next friday around 12”. 

“I am waiting for a call back!” 

⮑ Copilot should recognize that this caller believes a call back has already been scheduled for them, so it should check whether an entry exists and also look in Nimbus whether the call back has already taken place and may have been unsuccessful. It should then suggest scheduling again or initiating an immediate callback.
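In Copilot Studio the trigger phrases above are matched by its own language understanding. Purely as a deterministic sketch of the routing logic, the three tool triggers could be approximated with keyword rules (the function name, tool names, and patterns are all hypothetical):

```python
import re

def route_callback_intent(message):
    """Map a caller message to one of the three callback tools above."""
    m = message.lower()
    # "I am waiting for a call back" -> check existing entries first.
    if re.search(r"\bwaiting\b.*\bcall ?back\b", m):
        return "check_existing_callback"
    # "as soon as you can" / "immediately" -> immediate Call On Behalf.
    if re.search(r"as soon as|right now|immediately", m):
        return "call_on_behalf_now"
    # Everything else -> ask for a time and schedule an Outbound Call.
    return "schedule_outbound_call"
```

A real deployment would rely on the NLU model rather than regular expressions, but the sketch shows why each tool needs an unambiguous “when to use it” definition.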

 
 
 

Example B: Customer Feedback Agent


An Agent called “Customer Feedback Agent” checks in at the end of a call to ask about service satisfaction. It can…

  • … recognize and categorize feedback (sentiment, topic, suggestions) and …
  • … send the feedback to the survey backend system.

Design notes:

  • The Copilot does not need general AI to be enabled; we work with controlled, deterministic topics.
  • Only the topic “Give Feedback” uses deterministic AI, achieved via Power Automate.
  • Whenever the Copilot detects a user message, it redirects to the “Give Feedback” topic.

Topic: Give Feedback

  • The “Give Feedback” topic reads out the detected values in natural language and sends the exit to Nimbus.
  • After that, the data is saved to a SharePoint form.

 

Topic: Fill out the form

Finally, Copilot can return the interpreted feedback as a message back to the Customer.

  • With the existing SurveyID, the details can be updated again with corrected Customer feedback.
  • When no further feedback is given, the Conversation is ended.
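As a minimal stand-in for the categorization step (the real topic uses Power Automate and AI), a rule-based sketch shows what “recognize and categorize feedback” means in practice. The word lists and the function name are invented for illustration:

```python
def categorize_feedback(text):
    """Toy sentiment + topic categorization via keyword matching."""
    positive = {"great", "friendly", "fast", "helpful"}
    negative = {"slow", "rude", "unhelpful", "broken"}
    words = set(text.lower().replace("!", "").replace(".", "").split())
    # Net keyword score decides the sentiment label.
    score = len(words & positive) - len(words & negative)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    # A crude topic guess, e.g. anything about speed -> waiting time.
    topic = "waiting_time" if {"slow", "fast", "queue"} & words else "general"
    return {"sentiment": sentiment, "topic": topic}
```

The structured result (sentiment, topic) is what would then be read back to the Customer and written to the SharePoint form.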
 
 

Control Access

Following the considerations above, any employee – even a Virtual User – requires specific permissions: access to systems, data, and tools. A well-designed Virtual User should have just enough access to do its job – but not more. This is a critical part of responsible AI and automation design, especially when bots act on behalf of users or touch sensitive data. Ask yourself: “Does it need to send data to Nimbus, or store data in a document such as a survey, or in the customer entity in a CRM?”

Checkpoints matter

Likewise, just as any employee may require approval at certain stages, bots too can benefit from checkpoints or supervisory oversight, especially when decisions have consequences or involve uncertainty. You could ask: “Is the Virtual User allowed to store data from a conversation directly in the CRM? What would be the impact if it does and the data is wrong? Do I want to involve my human users as approvers? When and how will humans approve the step?”

The need to “think”?

Now comes the most interesting part: the "thinking"—where a human would apply judgment, pattern recognition, or creativity.

This is the part of the Virtual User's task that may need to be powered by AI. But not all AI is the same. In fact, understanding the difference between non-generative AI and generative AI is key to building effective Virtual Users.

👓 Non-Generative AI: The Analytical Brain

Definition: Non-generative AI includes systems that classify, predict, recognize, or recommend, based on structured learning from data. In the context of a Nimbus Contact Center, this could be:

  • A classification agent that labels a Customer's phrase as urgent or not urgent, so that Nimbus can set the distribution priority of the call accordingly.
  • An intent recognition agent that understands the Customer's intent, so that Nimbus can connect them with the right department or person. This replaces classical DTMF or sentence-based IVR.

When to pick: These agents are deterministic, or probabilistic within known boundaries. Their behaviour can be tested and audited. This makes them ideal for tasks that require reliable, explainable decision-making.

🧠 Generative AI: The Creative Brain

Definition: Generative AI (e.g. text or image generators) is designed to create new content – text, images, code, etc. – based on patterns it has learned. It is ideal for tasks that require flexibility, nuance, or creativity, such as:

  • A conversation agent which replies to enquiries in engaging, natural language. In Nimbus, this could be part of a survey agent: instead of asking for x of y stars, the agent understands the Customer's rating.
  • A summary generation agent which provides a human Agent with context on an ongoing conversation.

When to pick: Generative AI can be more comprehensive and flexible in its answers. However, it is also non-deterministic, meaning that the same Customer input may lead to different outputs. It therefore requires guardrails:

  • Prompt engineering or fine-tuning to shape outputs
  • Human-in-the-loop validation where appropriate
  • Clear boundaries for where and how it's used

Consideration: Testing has shown that GenAI can cause noticeable delays in responses, which Customers will experience as a pause/silence during an Interaction. We recommend testing the experience thoroughly.
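To make the contrast concrete: a non-generative classification step like the “urgent vs. not urgent” example above can be as simple as an auditable rule table. The keywords and priority values below are placeholders, not Nimbus settings:

```python
def classify_urgency(phrase):
    """Label a Customer phrase as 'urgent' or 'normal' via a rule table.

    Being a fixed rule set, every possible output can be tested and
    audited in advance - the defining property of non-generative logic.
    """
    URGENT_KEYWORDS = ("outage", "emergency", "down", "cannot log in", "urgent")
    phrase_l = phrase.lower()
    return "urgent" if any(k in phrase_l for k in URGENT_KEYWORDS) else "normal"

# A Nimbus-side distribution priority could then be derived deterministically
# (illustrative values only).
PRIORITY = {"urgent": 1, "normal": 5}
```

A generative model could handle far more phrasings, but its output for the same sentence may vary between runs, which is exactly why the guardrails listed above become necessary.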

 

✅How to approach the choice

In Nimbus, we want to use controlled generative AI. That means combining deterministic with non-deterministic logic. 

In the example of Copilot Studio this means: 

  • Using individual Topics with generative AI, rather than enabling generative AI globally for your Copilot, acts as a control mechanism.
  • Add topics that hand over to a human Agent in Nimbus when an approval is needed.
  • Finally, test your AI thoroughly before putting it into production, bearing in mind that customers can get equally creative.
 

Final Thoughts: Virtual Users Are Colleagues, Not Black Boxes

Let's stay with the mental model of the AI employee: think of granting some freedom within a playbook. Your AI can improvise, but not go off-script. Larger decisions will require supervision and consent. Designing Virtual Users as if they were employees – with defined roles, boundaries, and oversight – keeps your systems understandable, trustworthy, and effective. Not every Virtual User needs AI. Not every decision needs creativity. But where automation meets intelligence, and where intelligence meets judgment, the design must be intentional.

Technical Limitations

Below are our currently known issues and limitations for Virtual Users in Nimbus. We strive to improve our AI feature support with each future Nimbus release, so make sure to check back regularly and visit our Latest Release Notes for updates.


AI-driven interactions

AI-driven replies are not deterministic and depend highly on what the Customer says. For example, if the Customer says “I have trouble with my Skateboard”, it is not guaranteed that the Bot will use the same word “Skateboard” as your exit unless this is specifically handled via Copilot.

🔎Refer to our Best Practices - Virtual Users in Nimbus for some considerations and risks when using Virtual Users instead of Human interaction.

Microsoft Copilot Bot Limitations

  • Expect delays: Processing AI answers takes a few seconds for transcription of voice input, AI processing, and transcription back into a spoken reply. Luware tries to minimize the delay on the Nimbus call infrastructure, but the dependency on external APIs will always incur a delay. The Customer will hear silence during this processing, with no audio feedback or silence detection.
  • Ensure there is no ambiguity in your topics. For instance, the word “Service” may be too generic if you want to transfer to different services. Rather, use identifiers such as healthcare|medical|emergency, or use more complex Regular Expressions to identify replies.
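As a sketch of the second point, matching explicit department identifiers instead of the over-generic word “Service” could look like this in regular-expression form (the function name is illustrative):

```python
import re

# Match explicit department identifiers rather than the generic "Service".
DEPARTMENT = re.compile(r"\b(healthcare|medical|emergency)\b", re.IGNORECASE)

def pick_exit(bot_reply):
    """Return the matched department identifier in lowercase, or None."""
    m = DEPARTMENT.search(bot_reply)
    return m.group(1).lower() if m else None
```

Because each identifier is unambiguous, the mapping from a bot reply to a workflow exit stays reliable even when the surrounding wording varies.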

Nimbus Technical Limitations

The current Nimbus implementation with AI-driven bots is subject to the following limitations:

  • Supported Modalities: Virtual Users (Bots) are currently available for Audio/Video modality tasks.
  • 3rd-Party API Support: Nimbus is built flexibly to support “BYO” (bring your own) customer bot configurations and APIs. The currently supported bot integration is the M365 CoPilot Direct Line 3.0 API.
  • Authentication: The Copilot Secret needs to be copy-pasted into Nimbus. Further authentication types are planned to be supported in the near future.
  • Generative AI: Copilot events (Reviewing, Thinking) during Orchestration are currently being ignored.
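For reference, the Direct Line 3.0 integration opens a conversation using the copy-pasted secret as a bearer token. The sketch below only builds the request and does not send it; the endpoint and header layout follow the public Direct Line 3.0 documentation, and the secret value is a placeholder:

```python
import urllib.request

DIRECT_LINE_BASE = "https://directline.botframework.com/v3/directline"

def start_conversation_request(secret):
    """Build (not send) the POST that opens a Direct Line 3.0 conversation."""
    return urllib.request.Request(
        f"{DIRECT_LINE_BASE}/conversations",
        method="POST",
        headers={"Authorization": f"Bearer {secret}"},
    )

req = start_conversation_request("YOUR_COPILOT_SECRET")
# Sending this request with urllib.request.urlopen(req) returns JSON
# containing a conversationId and a streamUrl for bot activities.
```

This also illustrates why the secret must be treated like a credential: anyone holding it can open conversations with the bot.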
