AI Assistant: Producing Highly Realistic Audio

As a course author, you want to do more than just present information—you want to create multi-sensory e-learning experiences that resonate with learners. Using sound creatively can help you get there. AI Assistant’s text-to-speech and sound effects features let you create highly realistic AI-generated voices and sound effects for more immersive and accessible content.

Available only in Storyline 360, these features can be accessed from the Insert Audio dropdown in the AI Assistant menu on the ribbon. Find them under the Home or Insert tab when you’re in slide view, or use the quick action buttons in the AI Assistant side panel for added convenience.

Bring Narration to Life with AI-generated Voices

If you’ve ever used classic text-to-speech, you probably wished the voices sounded less, well, robotic. AI Assistant’s text-to-speech brings narration to life with contextually aware AI-generated voices that sound more natural—and human! Check out the difference in quality between a standard voice, neural voice, and AI-generated voice by playing the text-to-speech examples below.

 

Standard Voice

Neural Voice

AI-generated Voice

 

To get started, click the Insert Audio icon in the AI Assistant menu to open the Generate AI Audio dialog box. A library of AI-generated voices displays under the Voices tab, and you can filter it by Gender, Age, and Accent. Each voice also has descriptions like “deep,” “confident,” “crisp,” “intense,” and “soothing,” plus categories that suggest ideal use cases, from news broadcasts to meditation, or even ASMR. Find these qualities under the voice’s name, and use the play button to preview the voice.

Currently, there are 52 pre-made voices to choose from, and you can mark your favorites by clicking the heart icon so you can return to them without scrolling through the list. Toggle the View option to Favorites to see all your favorite voices, or to In project to see the voices used in the current project. Once you’ve decided on a voice, click Use to switch to the Text-to-Speech tab, where your chosen voice is already selected.

Next, enter your script in the text box provided, or click the add from slide notes link to copy notes from your slide. The script can be a maximum of 5,000 characters. For accessibility, select Generate closed captions, and AI Assistant will create closed captions automatically.

Adjust the Voice Settings

Unlike classic text-to-speech, the AI-generated voices in AI Assistant’s text-to-speech can be customized for a tailored voice performance. The Model setting lets you choose between two models: Multilingual v2, which produces highly stable, exceptionally accurate, lifelike speech and supports 29 languages; and Turbo v2.5, which is slightly less stable but 300% faster and supports 32 languages. Play the following samples to listen and compare the voices generated by each model.

 

Multilingual v2

Turbo v2.5

 

The Stability setting controls the balance between the voice’s steadiness and randomness, while the Similarity setting determines how closely the AI should adhere to the original voice when replicating it. The defaults are 0.50 for the Stability slider and 0.75 for the Similarity slider, but you can experiment with both to find the right balance for your content. For example, lowering Stability produces a more expressive but less predictable read, while raising it keeps the delivery steadier and more even.

Additional settings include Style exaggeration, which amplifies the style of the original voice, and Speaker boost, which enhances the similarity between the synthesized speech and the original voice. Note that adjusting either of these settings makes speech generation take longer.

Do I Need to Use SSML?

AI Assistant has limited support for Speech Synthesis Markup Language (SSML) because AI-generated voices are designed to understand the relationship between words and adjust delivery accordingly. If you need to control pacing manually, you can add a pause. The most consistent way to do that is to insert the syntax <break time="1.5s" /> into your script, which creates an exact, natural-sounding pause in the speech. For example:

With their keen senses <break time="1.5s" /> cats are skilled hunters.

Use seconds to describe a break of up to three seconds in length. 
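
You can also use more than one break tag to pace a longer passage. For instance, a script like this illustrative example adds a short pause after the opening line and a longer one before the final point:

Welcome to this lesson. <break time="1s" /> First, we’ll look at how cats use their senses. <break time="2s" /> Then, we’ll see why those senses make them such skilled hunters.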

You can try a simple dash ( - ) or an em dash to insert a brief pause, or multiple dashes for a longer pause. An ellipsis ( ... ) will also sometimes add a pause between words. However, these options don’t work reliably, so we recommend the break tag syntax above. Just keep in mind that an excessive number of break tags can potentially cause instability.

Multilingual Voices Expand Your Reach

Another compelling benefit of AI-generated text-to-speech is the ability to bridge language gaps, allowing you to connect with international audiences. With support for up to 32 languages depending on the model—including some with multiple accents and dialects—AI Assistant’s text-to-speech helps your content resonate with a global audience.

All you have to do is type or paste your script in the supported language you want AI Assistant to use. (Even though the voice description notes a specific accent or language, AI Assistant will generate the narration in the language used in your script.) Note that some voices tend to work best with certain accents or languages, so feel free to experiment with different voices to find the best fit for your needs.
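
For example, if you paste the Spanish script “Con sus agudos sentidos, los gatos son cazadores hábiles.”, AI Assistant generates Spanish narration, even from a voice whose description lists an American accent.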

The lists below provide a quick rundown of supported languages.

Available in Multilingual v2 and Turbo v2.5:

  • English (USA)
  • English (UK)
  • English (Australia)
  • English (Canada)
  • Japanese
  • Chinese
  • German
  • Hindi
  • French (France)
  • French (Canada)
  • Korean
  • Portuguese (Brazil)
  • Portuguese (Portugal)
  • Italian
  • Spanish (Spain)
  • Spanish (Mexico)
  • Indonesian
  • Dutch
  • Turkish
  • Filipino
  • Polish
  • Swedish
  • Bulgarian
  • Romanian
  • Arabic (Saudi Arabia)
  • Arabic (UAE)
  • Czech
  • Greek
  • Finnish
  • Croatian
  • Malay
  • Slovak
  • Danish
  • Tamil
  • Ukrainian
  • Russian

 

Available only in Turbo v2.5:

  • Hungarian
  • Norwegian
  • Vietnamese

Create Sound Effects Using Prompts

Sound effects that align with your theme and content can highlight important actions or feedback, like clicking a button or choosing a correct answer, offering a more engaging and effective e-learning experience. With AI Assistant’s sound effects, you can now use prompts to easily create nearly any sound imaginable. No more wasting time scouring the web for pre-made sounds that may cost extra!

Start creating high-quality sound effects by going to the AI Assistant menu in the ribbon under the Home or Insert tab. Then, click the lower half of the Insert Audio icon, and choose Sound Effects. (You can also access it from the Audio dropdown within the Insert tab. Simply select Sound Effects under the AI Audio option.) In the text box, describe the sound effect you want and choose a duration. You can adjust the Prompt influence slider to give AI Assistant more or less creative license in generating the sound.

Since AI Assistant understands natural language, sound effects can be created using anything from a simple prompt like “a single mouse click” to a very complex one that describes multiple sounds or a sequence of sounds in a specific order. Just note you have a maximum of 450 characters to describe the sound you want to generate.

Play the following audio samples to listen to sound effects created using a simple prompt and a complex one.

Prompt: A single mouse click

Prompt: Dogs barking, then lightning strikes

You can also adjust the Duration setting, which controls how long the sound effect plays, up to a maximum of 22 seconds. For example, if your prompt is “barking dog” and you set the duration to 10 seconds, you’ll get continuous barking, while a duration of two seconds gives you one quick bark. Sliding the Prompt influence slider to the right makes AI Assistant adhere strictly to your prompt, while sliding it to the left allows more free interpretation.

Some Pro Terms to Know

Using audio terminology—specialized vocabulary that audio experts use in their work—can help improve your prompts and produce even more dynamic sound effects. Here are a few examples:

Braam: A deep, resonant, and often distorted bass sound used in media, particularly in trailers, to create a sense of tension, power, or impending doom.

Whoosh: A quick, swooshing sound often used to emphasize fast motion, transitions, or dramatic moments.

Impact: A sharp, striking noise used to signify a collision, hit, or sudden forceful contact, often to highlight a moment of action or emphasis.

Glitch: A short, jarring, and usually digital noise that mimics a malfunction or distortion, commonly used to convey errors.

Foley: The process of recreating and recording everyday sound effects, like movements and object sounds, in sync with the visuals of a film, video, or other media.

Here’s something fun to try! Generate a 3-second sound effect using the prompt “studio quality, sound designed whoosh and braam impact.” Increasing the duration may produce better sound effects but will also create more dead air towards the end.

Pro tip: Onomatopoeias—words like “buzz,” “boom,” “click,” and “pop” that imitate natural sounds—are also important sound effects terms. Use them in your prompts to create more realistic sound effects.
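
To put these terms together, you might try illustrative prompts like “a crisp click, then a rising whoosh as a panel slides open” for a transition or “a low braam impact with a short glitchy tail” for dramatic feedback. Results will vary, so tweak the wording, duration, and Prompt influence slider until you get the sound you’re after.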

Video Tutorials

Want to learn more before getting started? Check out our video tutorials for additional guidance on using AI Assistant to generate text-to-speech and sound effects.

Articulate 360 Training also offers video tutorials on other AI Assistant features.

You must be logged in to your Articulate 360 account to watch the videos. Don’t have an account yet? Sign up for a free trial now!
