Transcription

An overview of the Transcription and Live Caption features.

The Transcription feature enables live captioning and after-call voice transcription. It uses Speech-to-Text (STT) technology to convert spoken words into written text. When enabled, the Transcription feature appears as a widget in My Sessions and within the Attendant Console sidebar.

INC Transcription Preconditions

PRECONDITIONS

✅Related Admin Use Case: Refer to Use Case - Setting Up Transcription for detailed step-by-step instructions.

Nimbus service and user licensing

🔎Transcription features have service and user requirements. These requirements apply to both the mid-session Live Caption and the post-session Transcription / Summarization features.

Service requirements

Enterprise Routing / Contact Center

  • Prerequisite: Only Enterprise Routing or Contact Center services have the Companion tab and related Nimbus Features available to the service users.
  • Feature toggle: As Administrator/Team Owner you can enable single Companion features via Companion Service Settings.
  • Data display: To show the transcription data to Nimbus users, you also need to enable the “Companion” widget in the Extension Service Settings. Alternatively, you can opt not to show the transcription and instead just leverage the data via the Nimbus Power Automate Connector. → More on this below.

User requirements

Companion 

  • License distribution: Your tenant needs to have sufficient available Companion licenses assigned in Licenses Tenant Settings. You can review and bulk-assign available licenses via License Management.
    💡If no or insufficient licenses are shown, get in touch with a Luware customer success partner to discuss terms and conditions.
  • Enable feature: Every Nimbus user requiring access to features described on this page needs to have a Companion license applied. This can be done by any Administrator in the General User Settings.
    ⮑ Once enabled, the My Sessions page will show a new “Companion” widget with (optionally) enabled features sorted into tabs.
 
 

Speech Services

🔎The features described in the following use “Speech Services” provided by 3rd-party vendors. To offer our customers both convenience and flexibility, you may pick between a Nimbus-native implementation and Azure speech services.

INC Speech Recognizer service comparison

Nimbus AI Services

  Benefits
    • Nimbus-native speech recognizer, available out-of-the-box
    • Low initial setup and maintenance
    • Centrally managed in the Nimbus UI
  Challenges
    • Less flexible and no custom configuration possible
    • Usage based on Luware “fair use” policy as part of your license.
  Setup
    • Low barrier of entry, no API knowledge required
    • Data storage beyond retention time is done via the Power Automate Connector.

Azure AI Services

  Benefits
    • Full flexibility in configuration and LLM choice
    • Possibility to train the model depending on your use case and business needs
  Challenges
    • Will cause monthly ACS costs outside of your Nimbus subscription (e.g. APIs for speech services).
 

Optional: Power Automate connector integration

🔎Optional step: The Nimbus Power Automate Connector can extract Transcription/Summarization data for implementing additional use cases. To achieve this, the following steps need to be performed by an administrator: 

  • You need to set up a Power Automate flow that uses the “Companion” Trigger Event to react to any ongoing transcription session event.
  • You then require the “Companion” Flow Action to capture the data for any further processing.
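Once the “Companion” Flow Action has captured the data, any downstream processing is up to you. The following sketch illustrates one such processing step; note that the payload shape (the `sessionId`, `service`, and `transcript` fields) is purely illustrative and does not represent the actual connector schema:

```python
import json

# Hypothetical example of a payload as it might arrive from the "Companion"
# Flow Action. Field names here are illustrative assumptions only -- consult
# the Nimbus Power Automate Connector documentation for the real schema.
event = json.dumps({
    "sessionId": "3f2a-example",
    "service": "Sales Hotline",
    "transcript": [
        {"speaker": "Customer", "text": "Hi, I have a billing question."},
        {"speaker": "Agent", "text": "Sure, let me pull up your account."},
    ],
})

def format_transcript(raw: str) -> str:
    """Flatten per-utterance entries into a readable, speaker-labeled text."""
    data = json.loads(raw)
    lines = [f'{u["speaker"]}: {u["text"]}' for u in data["transcript"]]
    return "\n".join(lines)

print(format_transcript(event))
```

A flow like this could, for example, forward the formatted transcript to a CRM note or an archive outside the Nimbus retention window.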

💡Some Use Case examples from our Knowledge Base:

 
 
 

Live Captioning

INC Preview Feature

This feature is in PREVIEW and may not yet be available to all customers. Functionality, scope and design may change considerably.

Precondition: To use the Live Captioning functionality, Transcription (as part of the Companion license) must be set up first. → See general preconditions above.

Live captions are audio transcriptions generated in real time during call sessions. When a call is accepted, a bot is invited to the call to start the audio transcription of the session. As a user, you see your spoken conversation with the caller in the Live Caption widget. Live Caption is only visible during ongoing calls, either in My Sessions as the “Companion” widget or within the sidebar of the Attendant Console.

Live Caption Widget Shown in My Sessions

Transcription

When the call session ends, the live caption is saved as a transcript. This may take a moment, depending on the length of your call with the customer.
⮑ Once the final transcription is processed, it appears in the Companion widget.

Processed Transcription shown in the widget
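Conceptually, turning interim live-caption segments into the final transcript amounts to merging consecutive fragments from the same speaker. The sketch below illustrates that idea only; the segment fields are assumptions, not the actual Nimbus data model:

```python
# Illustrative example segments, as live captioning might emit them in small
# fragments. The "speaker"/"text" shape is an assumption for this sketch.
segments = [
    {"speaker": "Agent", "text": "Thank you for calling,"},
    {"speaker": "Agent", "text": "how can I help?"},
    {"speaker": "Customer", "text": "I'd like to reschedule my appointment."},
]

def merge_segments(segs):
    """Join consecutive fragments of the same speaker into one utterance."""
    merged = []
    for seg in segs:
        if merged and merged[-1]["speaker"] == seg["speaker"]:
            merged[-1]["text"] += " " + seg["text"]
        else:
            merged.append(dict(seg))  # copy so the input list stays untouched
    return merged

for turn in merge_segments(segments):
    print(f'{turn["speaker"]}: {turn["text"]}')
```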

💡Good to know: 

Transcription access

  • Within My Sessions you can also access past session transcripts by clicking on a concluded session. If available, the transcript of the session is then opened in the “Companion” widget. Note that a transcript is only created when the feature was enabled before the Nimbus session started.
  • Transcription features are also available in Attendant Console within the sidebar.
  • When you have After-Call Work (ACW) enabled on the service, you may see the message "Your transcription is currently being processed and will be shown shortly" because the session is technically not “done” yet. This message may also be shown for very long transcripts that are still being processed.
  • Note that once the feature is enabled on service level, Transcription is always enabled for Nimbus users, as long as they have the “Companion” license.1

Data access and Retention


1💡 We are actively working on improvements to allow for a more flexible opt-out in the future. → If you wish to offer your callers an opt-out from transcription in your services, we recommend transferring the call to a Nimbus service where this feature is disabled.

 

Transcription in other areas

Automated Voicemail Transcription

You can use transcription features to automatically transcribe incoming voicemails. Once a caller is asked to leave a voicemail via workflow, a “transcript” will be generated and added to the Adaptive Card. 

Optional Feature - Requires the Transcription preconditions to be met (and “Voicemail Transcription” to be enabled within Modalities Service Settings).
Your workflow requires a “Voice Message” activity to trigger the generation of a voicemail.

Workflow setup and enabling “Voicemail Transcription” with enabled Speech Recognizer.
Generated Adaptive Card with “Voicemail Transcription”

Known Limitations

INC Transcription Limitations

KNOWN TRANSCRIPTION & SUMMARIZATION LIMITATIONS

💡We are actively working on further improvements to the following limitations:

  • Transcription data is not part of service/user transfers. Data is kept within the current customer/user session.
  • Transcription is currently only supported for direct inbound calls. Outbound calls (e.g. Call On Behalf) and consultation calls (either to users or services) are not yet supported.
  • Transcription > Summarization features are still in preview. Summarization is only available in My Sessions.

Troubleshooting FAQ

INC Transcription Troubleshooting

Not seeing any transcripts in the Transcript widget can have several causes. The following table lists error messages and explains why they are shown.

Message shown 🤔 Why do I see the message?

No transcription is available for this interaction.

There was no transcription generated for the selected conversation. This can happen …

  • … when there are issues with the configured Speech Recognizers,
  • … when there were issues with the microphone/speakers,
  • … or simply when no one said anything during the call.

No transcription is available for this interaction
OR Task was not accepted.
  • The selected call was not accepted, and therefore there is no transcribed conversation.

User Transcription License is missing.
  • The user doesn't have a Companion license assigned.
    Licenses can be assigned to users via Users > General User Settings > Licenses. You can also inspect your remaining available licenses via Admin Portal > License Management.
 
 
 

BY DESIGN

💡The following points are not limitations.

  • Feature availability: 
    • Transcription is a prerequisite to Live Captioning and Summarization. A working Speech Recognizer must be configured to use all features.
    • Transcription relies on external services and APIs. When the feature is disabled or unavailable (e.g. throttling, settings or Microsoft service impediments) info messages are shown in the frontend UI widget. Nimbus task handling and user-customer interactions themselves are not affected by this and will continue normally.
  • Transcription scope: 
    • 3rd-party participants: Third parties and call conference attendees are not part of the transcribed content. Only the conversation between the Nimbus user and the customer is kept.
    • Supervision: As long as Supervisors remain in Listen or Whisper mode during a conversation, their voice is not transcribed. Only during “Barge In” are they part of the conversation and transcription is active.
    • Summarization: Items generated from the Transcription might be missing when there is not enough data to draw from.
 
