Use Case - Setting Up Transcription

Learn how to set up transcription for your services and users step by step.

In this use case, we go through the steps needed to set up the Transcription feature, including:

  • Service Administration steps to enable Transcription and Live Caption for your Services and make them visible via widgets.
  • User Administration steps to enable the Transcription feature for Service Users.


AZURE BILLING

Use of the Transcription feature incurs additional monthly ACS costs. The costs are determined by Microsoft. Also see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/speech-services/.

  • Before enabling the Transcription feature, get in touch with your Luware Customer Success specialist to discuss terms, usage scenarios and necessary setup procedures.
  • Please note that Nimbus and Transcription support does not cover pricing discussions or make recommendations regarding Microsoft Azure infrastructure.
 


PRECONDITIONS


Licensing:

  • The Service needs to have an Enterprise or Contact Center license assigned in order to use the Transcription feature.
  • Your tenant needs to have available Companion licenses in Licenses Tenant Settings.
  • Each Nimbus User who wants to use the Live Caption/Transcription feature needs to have a Companion license applied. This is done by an administrator in General User Settings.
  • As an Administrator/Team Owner, you need to enable the Live Caption/Transcription features in the Companion Service Settings.
  • To show either feature to Nimbus users, you also need to enable the “Companion” widget in the Extensions Service Settings.

API Services¹:

  • As an Administrator, you need to set up speech services in Azure and note down the API key. 
  • The API key needs to be added in the Nimbus Speech Recognizer settings. The configured Speech Recognizer is then used for Transcription (a sketch below shows one way to verify the key first).

¹ 🔎 Note: This feature uses a speech recognition engine hosted by Microsoft and supports regions as specified by Microsoft AI services. Use of the Transcription features and related APIs causes additional monthly ACS costs outside of your Nimbus subscription.
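💡 Before entering the key in Nimbus, you can verify that it works against Azure. The following is a minimal sketch, assuming a Python environment with the azure-cognitiveservices-speech package installed; the key and region values are placeholders you need to replace with those of your own Azure Speech resource:

```python
# Minimal sketch: sanity-check an Azure Speech key and region before
# entering them in Nimbus. Requires: pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

SPEECH_KEY = "<your-speech-api-key>"  # placeholder: from the Azure portal, "Keys and Endpoint"
SPEECH_REGION = "westeurope"          # placeholder: the region of your Speech resource

speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# recognize_once() listens on the default microphone for a single utterance.
result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Key and region work. Recognized:", result.text)
else:
    print("Check key/region. Result reason:", result.reason)
```

If the call fails with an authentication error, re-check the key and region in the Azure portal before configuring the Speech Recognizer in Nimbus.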

 

Icon Legend

💡 = A hint to signal learnings, improvements or useful information in context.
🔍 = Points out essential notes or a related page in context.
☝ = Notifies you about fallacies and tricky parts that help avoid problems.
🤔 = Asks and answers common questions and troubleshooting points.
❌ = Warns you of actions with irreversible / data-destructive consequences.
✅ = Instructs you to perform a certain (prerequired) action to complete a related step.
 
 

Step 1: Add a Speech Recognizer

  1. Go to Configuration > Virtual Assistants > Speech Recognizers.
  2. Click Create New.
  3. Specify: Name, Organization Unit, Type, the API key (if you want to bring your own (BYO) API key), Region, and the language to be recognized.
    💡Note that: 
    1. If no language is selected, the language will be auto-detected (see the sketch after these steps).
    2. The usage of Standard Azure AI Services will cause Azure Cognitive Services costs. For more information, read the Microsoft Documentation.
  4. Click Create.
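
💡 To illustrate the auto-detection note above: the underlying Azure Speech SDK picks the spoken language from a list of candidates. This is only a sketch of the engine behavior, not of how Nimbus calls it internally; the candidate languages, key, and region are example placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")

# Candidate languages for auto-detection (example values).
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "fr-FR"]
)
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config,
)

# Recognize one utterance and report which language the engine detected.
result = recognizer.recognize_once()
detected = speechsdk.AutoDetectSourceLanguageResult(result)
print("Detected language:", detected.language, "| Text:", result.text)
```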

Step 2: Enable Transcription and Live Captioning for your Service

  1. Go to Service Administration.
  2. Select the Service for which you want to enable the Transcription feature.
  3. Make sure that your Service has an Enterprise or Contact Center license assigned in General Service Settings.
  4. Go to the Companion Service Settings tab.
  5. Enable Transcription and (optionally) Live Captioning.
    💡 Note that Live Caption only works if Transcription is also enabled (the sketch after these steps illustrates how captions relate to transcripts).
    💡 You can also opt to hide either feature from the Nimbus Portal, but still keep Transcription active. This allows you to evaluate the data via Nimbus Power Automate Connector using “Companion” Trigger Events and Flow Actions.
  6. In the field Speech Recognizer, select your previously created Speech Recognizer. 
  7. Click Save & Apply.
    ⮑ Transcription features take effect immediately with the next incoming audio call.
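
💡 For background on how Live Caption relates to Transcription: during continuous recognition, the Azure Speech engine emits interim hypotheses (what a live caption shows while someone is speaking) and finalized utterances (what ends up in a transcript). Nimbus handles this internally; the sketch below only illustrates the engine behavior, with placeholder key and region:

```python
import time
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Interim hypotheses: roughly what a live caption shows while you speak.
recognizer.recognizing.connect(lambda evt: print("CAPTION:", evt.result.text))
# Finalized utterances: what ends up in a transcript.
recognizer.recognized.connect(lambda evt: print("TRANSCRIPT:", evt.result.text))

recognizer.start_continuous_recognition()
time.sleep(15)  # listen on the default microphone for 15 seconds
recognizer.stop_continuous_recognition()
```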

Step 3: Set Up the Widget

💡 This step is necessary if you want to make Live Caption and Transcription visible to Users in the Nimbus Portal. If Live Caption and Transcript are disabled in the Extensions Service Settings, transcripts are still saved in the background, but Users won't see them in the Nimbus Portal.

  1. Go to Extensions Service Settings.
  2. In the Widgets section, enable the “Companion” widget.
  3. Click Save & Apply.

Step 4: Assign a Companion License to your Service Users

  1. Go to User Administration.
  2. Select the User you want to assign a Companion license to and click Edit.
  3. Enable Companion in General User Settings.
    💡 Make sure that the User is part of a Service Team with Transcription/Live Captioning feature enabled.
    ⮑ Service Users should now see the Transcript widget in My Sessions. If they don't have a Companion license applied, a “License missing” message is shown in the widget.

Troubleshooting


Not seeing any transcripts in the Transcript widget can have several causes. The following list shows the possible error messages and explains why they are shown.

  • Message: “No transcription is available for this interaction.”
    🤔 Why: No transcription was generated for the selected conversation.
  • Message: “No transcription is available for this interaction. Task was not accepted.”
    🤔 Why: The selected call was not accepted, and therefore there is no transcribed conversation.
  • Message: “Transcription not available. User Transcription License missing.”
    🤔 Why: The user doesn't have a Transcription License assigned. Transcription Licenses can be assigned to users in Users > General User Settings > Licenses.
