• Retrieves a voice MP3 that says the given sound string from the backend, then plays the audio by creating an AudioBufferSourceNode, as described here: https://developer.mozilla.org/en-US/docs/Web/API/Response/arrayBuffer#playing_music
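
    A minimal sketch of that playback path, assuming a hypothetical backend endpoint (/api/voice/...) and an AudioContext supplied by the caller:

    ```ts
    // Fetch the MP3 bytes, decode them, and play them through the given
    // AudioContext. The endpoint URL is an assumption for illustration.
    async function fetchAndPlay(ctx: AudioContext, sound: string): Promise<void> {
      const response = await fetch(`/api/voice/${encodeURIComponent(sound)}`);
      const bytes = await response.arrayBuffer();
      const audioBuffer = await ctx.decodeAudioData(bytes);
      const source = ctx.createBufferSource(); // an AudioBufferSourceNode
      source.buffer = audioBuffer;
      source.connect(ctx.destination);
      source.start();
    }
    ```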

    Responses from previous calls are stored in the main thread's Cache, as recommended here: https://web.dev/articles/storage-for-the-web
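
    A sketch of that caching step, assuming a hypothetical cache name of "tts-audio":

    ```ts
    // Return the cached Response for this URL, fetching and storing it
    // on a cache miss. caches is the main thread's CacheStorage.
    async function fetchWithCache(url: string): Promise<Response> {
      const cache = await caches.open("tts-audio");
      const cached = await cache.match(url);
      if (cached) return cached;
      const response = await fetch(url);
      // Store a clone so the returned Response body stays readable.
      await cache.put(url, response.clone());
      return response;
    }
    ```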

    "Why do we cache the Response and not the AudioBuffer?" The AudioBuffer is sampled according to the AudioContext, so it is most reliable is we resample the AudioBuffer each call. This extra processing time is not noticeable by the end user.

    "Why do I have to pass in the AudioContext?" Safari will only play audio if it is the result of direct user interaction. If the AudioContext is initialized in the function, Safari does not see that as the result of user interaction, so it blocks the context. By passing in the AudioContext, Safari allows the audio to play. https://stackoverflow.com/a/31777081 https://stackoverflow.com/a/58354682

    Parameters

    • sound: string
    • __isBackendActive: boolean = true

    Returns Promise<boolean>

    true if the sound is played, and false otherwise
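
    A usage sketch of the return value (the playSound name and the context argument are assumptions):

    ```ts
    const played = await playSound(ctx, "hello"); // __isBackendActive defaults to true
    if (!played) console.warn("sound was not played");
    ```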
