
Speech Synthesis (TTS)

With Omni Automation, communication with the user of automation tools is accomplished through user-generated dialogs and plug-in UI interfaces, as well as aurally through the device's built-in Speech Synthesis frameworks. Omni Automation scripts and plug-ins can “talk” in order to convey important information to the user.

The following documentation details how to incorporate text-to-speech into your automation scripts and plug-ins.

CLASSES: Speech.Voice | Speech.Utterance | Speech.Synthesizer

 

Speech.Voice Class

The fundamental object in Text-to-Speech is the instance of the Speech.Voice class, which is used to render and convey the specified text message to the user.

NOTE: On macOS, the default voice options are set in the Spoken Content section of the Accessibility system preference pane. On iPadOS and iOS, the default voice options are set in the Speech section of the VoiceOver preference in the Settings app.

Class Properties

All Voice Instances


Speech.Voice.allVoices //--> [[object Speech.Voice], [object Speech.Voice], [object Speech.Voice], …] (75)

Current Language Code


Speech.Voice.currentLanguageCode //--> "en-US"

Instance Properties

Voice Objects Matching the Current Language


var languageCode = Speech.Voice.currentLanguageCode
voices = Speech.Voice.allVoices.filter(voice => {
	return voice.language === languageCode
})
voices.map(voice => voice.name)

Speech.Voice.Gender Class

Speech.Voice.Quality Class

Class Functions

NOTE: There is no mechanism for identifying the system-wide voice currently chosen by the user. Using the withLanguage function to determine the “default” voice appears to return the first voice in the list of installed voices, not the voice currently selected in the speech settings:

“Default” Voice

Speech.Voice.withLanguage(null) //--> [object Speech.Voice] {gender: [object Speech.Voice.Gender: Unspecified], identifier: "com.apple.speech.synthesis.voice.Agnes", language: "en-US", name: "Agnes"}

Installed Voices

Here are lists of the default installed voices on macOS and iOS/iPadOS:

Names of All Voices (macOS)


names = Speech.Voice.allVoices.map(voice => voice.name) //--> Agnes, Albert, Alex, Alice, Alva, Amelie, Anna, Bad News, Bahh, Bells, Boing, Bruce, Bubbles, Carmit, Cellos, Damayanti, Daniel, Deranged, Diego, Ellen, Fiona, Fred, Good News, Hysterical, Ioana, Joana, Jorge, Juan, Junior, Kanya, Karen, Kate, Kathy, Kyoko, Laura, Lee, Lekha, Luca, Luciana, Maged, Mariska, Mei-Jia, Melina, Milena, Moira, Monica, Nora, Oliver, Paulina, Pipe Organ, Princess, Ralph, Rishi, Samantha, Sara, Satu, Serena, Sin-ji, Susan, Tessa, Thomas, Ting-Ting, Tom, Trinoids, Veena, Vicki, Victoria, Whisper, Xander, Yelda, Yuna, Yuri, Zarvox, Zosia, Zuzana

NOTE: On iOS/iPadOS, the names of the voices also reflect whether they are high-quality (enhanced):

Names of All Voices (iOS/iPadOS)


names = Speech.Voice.allVoices.map(voice => voice.name) //--> Aaron, Alex, Alice, Alva, Amélie, Anna, Arthur, Carmit, Catherine, Damayanti, Daniel, Daniel, Daniel (Enhanced), Fred, Gordon, Hattori, Helena, Ioana, Joana, Kanya, Karen, Karen (Enhanced), Kate, Kate (Enhanced), Kyoko, Laura, Lee, Lee (Enhanced), Lekha, Li-mu, Luciana, Maged, Marie, Mariska, Martha, Martin, Mei-Jia, Melina, Milena, Moira, Moira (Enhanced), Mónica, Nicky, Nora, O-ren, Oliver, Oliver (Enhanced), Paulina, Rishi, Samantha, Samantha (Enhanced), Sara, Satu, Serena, Serena (Enhanced), Sin-Ji, Tessa, Tessa (Enhanced), Thomas, Ting-Ting, Xander, Yelda, Yu-shu, Yuna, Zosia, Zuzana
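The “(Enhanced)” suffix is part of the name string itself, so plain JavaScript string handling can be used to detect which voices have enhanced variants installed. The following sketch uses a hypothetical helper, enhancedBaseNames, with a mock name array standing in for the output of Speech.Voice.allVoices:

```javascript
// Hypothetical helper (plain JavaScript, no Omni APIs required):
// given voice name strings, return the base names of voices that
// have an “(Enhanced)” variant installed.
function enhancedBaseNames(names) {
	return names
		.filter(name => name.endsWith(" (Enhanced)"))
		.map(name => name.replace(" (Enhanced)", ""))
}

// Mock data standing in for Speech.Voice.allVoices.map(voice => voice.name):
const sample = ["Daniel", "Daniel (Enhanced)", "Karen", "Karen (Enhanced)", "Fred"]
enhancedBaseNames(sample) //--> ["Daniel", "Karen"]
```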
Identifiers of All Voices (macOS)


voiceIDs = Speech.Voice.allVoices.map(voice => voice.identifier) //--> ["com.apple.speech.synthesis.voice.Agnes", "com.apple.speech.synthesis.voice.Albert", "com.apple.speech.synthesis.voice.Alex", "com.apple.speech.synthesis.voice.alice", "com.apple.speech.synthesis.voice.alva", "com.apple.speech.synthesis.voice.amelie", "com.apple.speech.synthesis.voice.anna", "com.apple.speech.synthesis.voice.BadNews", "com.apple.speech.synthesis.voice.Bahh", "com.apple.speech.synthesis.voice.Bells", "com.apple.speech.synthesis.voice.Boing", "com.apple.speech.synthesis.voice.Bruce", "com.apple.speech.synthesis.voice.Bubbles", "com.apple.speech.synthesis.voice.carmit", "com.apple.speech.synthesis.voice.Cellos", "com.apple.speech.synthesis.voice.damayanti", "com.apple.speech.synthesis.voice.daniel.premium", "com.apple.speech.synthesis.voice.Deranged", "com.apple.speech.synthesis.voice.diego", "com.apple.speech.synthesis.voice.ellen", "com.apple.speech.synthesis.voice.fiona.premium", "com.apple.speech.synthesis.voice.Fred", "com.apple.speech.synthesis.voice.GoodNews", "com.apple.speech.synthesis.voice.Hysterical", "com.apple.speech.synthesis.voice.ioana", "com.apple.speech.synthesis.voice.joana", "com.apple.speech.synthesis.voice.jorge", "com.apple.speech.synthesis.voice.juan", "com.apple.speech.synthesis.voice.Junior", "com.apple.speech.synthesis.voice.kanya", "com.apple.speech.synthesis.voice.karen.premium", "com.apple.speech.synthesis.voice.kate.premium", "com.apple.speech.synthesis.voice.Kathy", "com.apple.speech.synthesis.voice.kyoko", "com.apple.speech.synthesis.voice.laura", "com.apple.speech.synthesis.voice.lee.premium", "com.apple.speech.synthesis.voice.lekha", "com.apple.speech.synthesis.voice.luca", "com.apple.speech.synthesis.voice.luciana", "com.apple.speech.synthesis.voice.maged", "com.apple.speech.synthesis.voice.mariska", "com.apple.speech.synthesis.voice.meijia", "com.apple.speech.synthesis.voice.melina", "com.apple.speech.synthesis.voice.milena", 
"com.apple.speech.synthesis.voice.moira.premium", "com.apple.speech.synthesis.voice.monica", "com.apple.speech.synthesis.voice.nora", "com.apple.speech.synthesis.voice.oliver.premium", "com.apple.speech.synthesis.voice.paulina", "com.apple.speech.synthesis.voice.Organ", "com.apple.speech.synthesis.voice.Princess", "com.apple.speech.synthesis.voice.Ralph", "com.apple.speech.synthesis.voice.rishi", "com.apple.speech.synthesis.voice.samantha.premium", "com.apple.speech.synthesis.voice.sara", "com.apple.speech.synthesis.voice.satu", "com.apple.speech.synthesis.voice.serena.premium", "com.apple.speech.synthesis.voice.sinji", "com.apple.speech.synthesis.voice.susan.premium", "com.apple.speech.synthesis.voice.tessa.premium", "com.apple.speech.synthesis.voice.thomas", "com.apple.speech.synthesis.voice.tingting", "com.apple.speech.synthesis.voice.tom.premium", "com.apple.speech.synthesis.voice.Trinoids", "com.apple.speech.synthesis.voice.veena.premium", "com.apple.speech.synthesis.voice.Vicki", "com.apple.speech.synthesis.voice.Victoria", "com.apple.speech.synthesis.voice.Whisper", "com.apple.speech.synthesis.voice.xander", "com.apple.speech.synthesis.voice.yelda", "com.apple.speech.synthesis.voice.yuna", "com.apple.speech.synthesis.voice.yuri", "com.apple.speech.synthesis.voice.Zarvox", "com.apple.speech.synthesis.voice.zosia", "com.apple.speech.synthesis.voice.zuzana"]
Identifiers of All Voices (iPadOS/iOS)


voiceIDs = Speech.Voice.allVoices.map(voice => voice.identifier) //--> ["com.apple.ttsbundle.Maged-compact",
"com.apple.ttsbundle.Zuzana-compact",
"com.apple.ttsbundle.Sara-compact",
"com.apple.ttsbundle.Anna-compact",
"com.apple.ttsbundle.siri_Helena_de-DE_compact",
"com.apple.ttsbundle.siri_Martin_de-DE_compact",
"com.apple.ttsbundle.Melina-compact",
"com.apple.ttsbundle.Karen-premium",
"com.apple.ttsbundle.Lee-premium",
"com.apple.ttsbundle.siri_Catherine_en-AU_compact",
"com.apple.ttsbundle.siri_Gordon_en-AU_compact",
"com.apple.ttsbundle.Karen-compact",
"com.apple.ttsbundle.Lee-compact",
"com.apple.ttsbundle.Daniel-premium",
"com.apple.ttsbundle.Kate-premium",
"com.apple.ttsbundle.Oliver-premium",
"com.apple.ttsbundle.Serena-premium",
"com.apple.ttsbundle.siri_Arthur_en-GB_compact",
"com.apple.ttsbundle.Daniel-compact",
"com.apple.ttsbundle.Kate-compact",
"com.apple.ttsbundle.siri_Martha_en-GB_compact",
"com.apple.ttsbundle.Oliver-compact",
"com.apple.ttsbundle.Serena-compact",
"com.apple.ttsbundle.Moira-premium",
"com.apple.ttsbundle.Moira-compact",
"com.apple.ttsbundle.Rishi-compact",
"com.apple.ttsbundle.Samantha-premium",
"com.apple.ttsbundle.siri_Aaron_en-US_compact",
"com.apple.speech.synthesis.voice.Fred",
"com.apple.ttsbundle.siri_Nicky_en-US_compact",
"com.apple.ttsbundle.Samantha-compact",
"com.apple.ttsbundle.Tessa-premium",
"com.apple.ttsbundle.Tessa-compact",
"com.apple.ttsbundle.Monica-compact",
"com.apple.ttsbundle.Paulina-compact",
"com.apple.ttsbundle.Satu-compact",
"com.apple.ttsbundle.Amelie-compact",
"com.apple.ttsbundle.siri_Daniel_fr-FR_compact",
"com.apple.ttsbundle.siri_Marie_fr-FR_compact",
"com.apple.ttsbundle.Thomas-compact",
"com.apple.ttsbundle.Carmit-compact",
"com.apple.ttsbundle.Lekha-compact",
"com.apple.ttsbundle.Mariska-compact",
"com.apple.ttsbundle.Damayanti-compact",
"com.apple.ttsbundle.Alice-compact",
"com.apple.ttsbundle.siri_Hattori_ja-JP_compact",
"com.apple.ttsbundle.Kyoko-compact",
"com.apple.ttsbundle.siri_O-ren_ja-JP_compact",
"com.apple.ttsbundle.Yuna-compact",
"com.apple.ttsbundle.Xander-compact",
"com.apple.ttsbundle.Nora-compact",
"com.apple.ttsbundle.Zosia-compact",
"com.apple.ttsbundle.Luciana-compact",
"com.apple.ttsbundle.Joana-compact",
"com.apple.ttsbundle.Ioana-compact",
"com.apple.ttsbundle.Milena-compact",
"com.apple.ttsbundle.Laura-compact",
"com.apple.ttsbundle.Alva-compact",
"com.apple.ttsbundle.Kanya-compact",
"com.apple.ttsbundle.Yelda-compact",
"com.apple.ttsbundle.siri_Li-mu_zh-CN_compact",
"com.apple.ttsbundle.Ting-Ting-compact",
"com.apple.ttsbundle.siri_Yu-shu_zh-CN_compact",
"com.apple.ttsbundle.Sin-Ji-compact",
"com.apple.ttsbundle.Mei-Jia-compact",
"com.apple.speech.voice.Alex"]
 

The “Alex” Voice

While the “Alex” voice is installed on all platforms, the identifier of the “Alex” voice changes depending on platform, and so a conditional statement must use the value of the platformName property to determine which ID to use:

The “Alex” Voice


AlexID = ((app.platformName === "macOS") ?
	"com.apple.speech.synthesis.voice.Alex" :
	"com.apple.speech.voice.Alex"
)
voiceObj = Speech.Voice.withIdentifier(AlexID)

In the case of the Alex voice, a simple single-line solution for getting the corresponding voice object is to use the find() function with a startsWith() condition:

Alex: Single-Line Solution


voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))

Note the differences in the resulting Alex voice objects:

Result: macOS


//--> macOS: [object Speech.Voice] {gender: [object Speech.Voice.Gender: Male], identifier: "com.apple.speech.synthesis.voice.Alex", language: "en-US", name: "Alex", quality: [object Speech.Voice.Quality]}
Result: iPadOS/iOS


//--> iPadOS/iOS: [object Speech.Voice] {gender: [object Speech.Voice.Gender: Unspecified], identifier: "com.apple.speech.voice.Alex", language: "en-US", name: "Alex", quality: [object Speech.Voice.Quality: Enhanced]}

Checking Voices

Is Voice Installed? (Check by Name)


voiceName = "Serena"
voiceNames = Speech.Voice.allVoices.map(voice => voice.name)
voiceStatus = voiceNames.includes(voiceName)
//--> true (installed) or false (not installed)

Is Voice Installed? (Check by ID)


voiceID = "com.apple.speech.synthesis.voice.serena.premium"
voiceIDs = Speech.Voice.allVoices.map(voice => voice.identifier)
voiceStatus = voiceIDs.includes(voiceID)
//--> true (installed) or false (not installed)

Is Voice Installed?


voiceID = "com.apple.speech.synthesis.voice.serena.premium"
voiceIDs = Speech.Voice.allVoices.map(voice => voice.identifier)
if (voiceIDs.includes(voiceID)){
	//--> voice is installed, perform actions
} else {
	throw "The required voice is not installed."
}

Return the voice object for a voice by name; if it doesn't exist, use the Alex voice instead:

Find Voice Object by Name (begins with…)


voiceName = "Serena"
voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith(voiceName))
if (!voiceObj){
	voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
}
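The name-with-fallback pattern above can be generalized to an ordered list of preferred voices. The helper below, preferredVoice, is a hypothetical sketch in plain JavaScript; the mock voices array stands in for Speech.Voice.allVoices:

```javascript
// Hypothetical helper: return the first installed voice whose name
// begins with any of the preferred names, tried in order of preference.
// "voices" is an array of objects with a "name" property, as the
// Speech.Voice instances returned by Speech.Voice.allVoices have.
function preferredVoice(voices, preferences) {
	for (const name of preferences) {
		const match = voices.find(voice => voice.name.startsWith(name))
		if (match) { return match }
	}
	return null
}

// Mock data standing in for Speech.Voice.allVoices:
const voices = [{name: "Alex"}, {name: "Samantha"}]
preferredVoice(voices, ["Serena", "Alex"]) //--> {name: "Alex"}
```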
 

Speech.Utterance Class

An instance of the Speech.Utterance class contains the text and voice properties to be rendered by an instance of the Speech.Synthesizer class.

Class Properties

Utterance Class Speech Rate Properties


console.log("defaultSpeechRate", Speech.Utterance.defaultSpeechRate) //--> 0.5
console.log("maximumSpeechRate", Speech.Utterance.maximumSpeechRate) //--> 1
console.log("minimumSpeechRate", Speech.Utterance.minimumSpeechRate) //--> 0
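Rates assigned to an utterance should stay within the reported minimum and maximum. Here is a minimal clamping sketch (hypothetical helper, plain JavaScript; in an Omni application you would pass Speech.Utterance.minimumSpeechRate and Speech.Utterance.maximumSpeechRate as the bounds):

```javascript
// Hypothetical helper: clamp a requested rate into the range reported
// by the class properties above (0 … 1, default 0.5).
function clampRate(rate, min = 0, max = 1) {
	return Math.min(max, Math.max(min, rate))
}

clampRate(1.7)  //--> 1
clampRate(-0.2) //--> 0
clampRate(0.4)  //--> 0.4
```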

Constructor

Instance Properties

Speak Utterance


string = "The rain in Spain falls mainly on the plain."
utterance = new Speech.Utterance(string)
voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
utterance.voice = voiceObj
new Speech.Synthesizer().speakUtterance(utterance)

The following example creates and vocalizes an array of utterances with a 1-second pause appended to each utterance:

Speak List of Strings


var voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
strings = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]
utterances = new Array()
strings.forEach(string => {
	utterance = new Speech.Utterance(string)
	utterance.voice = voiceObj
	utterance.postUtteranceDelay = 1
	utterances.push(utterance)
})
var synthesizer = new Speech.Synthesizer()
utterances.forEach(utterance => {
	synthesizer.speakUtterance(utterance)
})
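The list-of-utterances technique also works for paragraphs of prose: split the text into sentences first, then make one utterance per sentence. The sentence splitter below is a hypothetical sketch in plain JavaScript (a simple regular expression, not a full tokenizer):

```javascript
// Hypothetical helper: split a paragraph into sentence strings so that
// each can become its own Speech.Utterance with a postUtteranceDelay.
function sentences(text) {
	return text
		.split(/(?<=[.!?])\s+/)  // split after ., !, or ? followed by whitespace
		.map(s => s.trim())
		.filter(s => s.length > 0)
}

sentences("It rained. The dog barked! We went home.")
//--> ["It rained.", "The dog barked!", "We went home."]
```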

IMPORTANT: Due to system Speech API issues, the prefersAssistiveTechnologySettings property currently does not work as expected.

The Assistive Settings Property


utteranceString = "The quick brown fox jumped over the lazy dog."
utterance = new Speech.Utterance(utteranceString)
utterance.prefersAssistiveTechnologySettings = true
synthesizer = new Speech.Synthesizer()
synthesizer.speakUtterance(utterance)
 

Speech.Synthesizer Class

An instance of the Speech.Synthesizer class is the object that speaks the provided utterances.

Instance Functions

Instance Properties

Speech.Boundary Class

Stopping Speech Synthesizer


voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
messageString = "Once upon a time in a village far far away, lived a man and his dog. [[slnc 500]] Every day the man and the dog would walk the beach, looking for driftwood."
utterance = new Speech.Utterance(messageString)
utterance.voice = voiceObj
var synthesizer = new Speech.Synthesizer()
synthesizer.speakUtterance(utterance)
alert = new Alert("Text-to-Speech", "Click “Stop” button to stop speaking.")
alert.addOption("Continue")
alert.addOption("Stop")
alert.show().then(index => {
	console.log(index)
	if(index === 1){
		synthesizer.stopSpeaking(Speech.Boundary.Word)
	}
})

Another example of stopping an active speech synthesizer, using interaction with a notification alert:

Stopping Speech Synthesizer via Notification
 

string = "Once upon a time in a village far far away lived a man and his dog. Every day the man and the dog would walk the beach looking for driftwood. On occasion, they would find branches washed up upon the shore, gnarled and twisted in their beauty."
utterance = new Speech.Utterance(string)
utterance.rate = 0.3
voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
utterance.voice = voiceObj
var synthesizer = new Speech.Synthesizer()
synthesizer.speakUtterance(utterance)
notification = new Notification("Speaking…")
notification.subtitle = "(TAP|CLICK to Stop)"
notification.show().then(notif => {
	synthesizer.stopSpeaking(Speech.Boundary.Word)
}).catch(err => {
	synthesizer.stopSpeaking(Speech.Boundary.Word)
})

The following example script will open the Text-to-Speech section of the System Preferences application and speak an error message if the voice, identified by the value of its name property, is not installed.

Check for Specified Voice


// CHECK FOR VOICE BY NAME
voiceName = "Serena"
voiceObj = Speech.Voice.allVoices.find(voice => {return voice.name === voiceName})
if (voiceObj){
	var messageString = `Hello, I am the text-to-speech voice “${voiceName}.”`
} else {
	voiceObj = Speech.Voice.withIdentifier("com.apple.speech.synthesis.voice.Alex")
	// OPEN SYSTEM PREFERENCE FOR TEXT-TO-SPEECH
	urlStr = "x-apple.systempreferences:com.apple.preference.universalaccess?TextToSpeech"
	URL.fromString(urlStr).open()
	var messageString = `The text-to-speech voice “${voiceName}” is not installed. [[slnc 250]] Select the “Customize…” option, from the “System Voice” popup menu, to add the voice.`
}
utterance = new Speech.Utterance(messageString)
utterance.voice = voiceObj
utterance.rate = 0.5
synthesizer = new Speech.Synthesizer()
synthesizer.speakUtterance(utterance)

Voice Tester Plug-In

The voices installable using the Apple Text-to-Speech preferences respond differently to rate adjustments. The following plug-in presents controls for choosing the high-quality voice and the rate so you can find the rate adjustment that works best for the chosen voice.

Voice Tester Plug-In
 

/*{
	"author": "Otto Automator",
	"targets": ["omnioutliner","omnifocus"],
	"type": "action",
	"identifier": "com.omni-automation.tts.speech-form",
	"version": "1.5",
	"description": "Displays a form for setting the parameters of a chosen voice. Results are logged in the console.",
	"label": "Voice Tester",
	"shortLabel": "Voice Tester",
	"mediumLabel": "Voice Tester",
	"longLabel": "Voice Tester",
	"paletteLabel": "Voice Tester",
	"image": "person.wave.2.fill"
}*/
(() => {
	var action = new PlugIn.Action(function(selection){
		var form = new Form();
		var voices = Speech.Voice.allVoices
		if(app.platformName === "macOS"){
			// on macOS, Alex is not included by default
			alexVoice = Speech.Voice.withIdentifier("com.apple.speech.synthesis.voice.Alex")
			voices.unshift(alexVoice)
		}
		var voice = new Form.Field.Option(
			"voice",
			"Voice",
			voices,
			voices.map(voice => voice.name),
			voices[0]
		);
		form.addField(voice);
		var defaultString = "The quick brown fox jumped over the lazy dog."
		var utterance = new Form.Field.String(
			"utteranceString",
			"Text",
			defaultString
		);
		form.addField(utterance);
		displayRates = ["+5", "+4", "+3", "+2", "+1", "0", "-1", "-2", "-3", "-4", "-5"]
		rates = ["1.0", "0.9", "0.8", "0.7", "0.6", "0.5", "0.4", "0.3", "0.2", "0.1", "0"]
		var rate = new Form.Field.Option(
			"rate",
			"Rate",
			rates,
			displayRates,
			"0.5"
		)
		form.addField(rate);
		title = "Text-to-Speech Voices (HQ)"
		button = "Speak"
		var formPromise = form.show(title, button)
		form.validate = function(formObject){
			textValue = formObject.values['utteranceString']
			return (textValue && textValue.length > 0) ? true:false
		}
		formPromise.then(formObject => {
			voiceObj = formObject.values["voice"]
			name = voiceObj.name
			id = voiceObj.identifier
			lang = voiceObj.language
			if(app.platformName === "macOS"){
				var intro = `Hello, my name is ${name}. 
[[slnc 500]]`
			} else {
				var intro = `Hello, my name is ${name}.`
			}
			utteranceString = intro + formObject.values["utteranceString"]
			utterance = new Speech.Utterance(utteranceString)
			rateAmt = parseFloat(formObject.values["rate"])
			utterance.rate = rateAmt
			utterance.voice = voiceObj
			synthesizer = new Speech.Synthesizer()
			synthesizer.speakUtterance(utterance)
			console.log("NAME:", name, "RATE:", rateAmt, "ID: ", id)
			alert = new Alert("Voice Settings", `NAME: ${name}\nLANGUAGE: ${lang}\nRATE: ${rateAmt}\nID: ${id}`)
			alert.addOption("Done")
			if(app.platformName === "macOS"){alert.addOption("TTS Prefs")}
			alert.show(index => {
				if(app.platformName === "macOS" && index === 1){
					// on macOS, open system preference for text-to-speech
					urlStr = "x-apple.systempreferences:com.apple.preference.universalaccess?TextToSpeech"
					URL.fromString(urlStr).open()
				}
			})
		})
	});
	return action;
})();

IMPORTANT: Voices added using the System Text-to-Speech preferences will not be available until the host Omni application is quit and restarted.

 

Examples

Examples using the Speech classes.

The first example uses the Formatter.Date class to speak the current time and date:

What is the Current Time and Date?


dateString = Formatter.Date.withFormat('h:mma, EEEE, LLLL d').stringFromDate(new Date())
//--> "12:07AM, Wednesday, March 2"
utterance = new Speech.Utterance(`It is ${dateString}`)
speakerVoice = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
utterance.voice = speakerVoice
new Speech.Synthesizer().speakUtterance(utterance)
 
Tasks Due Today
An OmniFocus plug-in that aurally lists the available tasks that are due today.
OmniFocus: Tell Me Tasks Due Today
 

/*{
	"type": "action",
	"targets": ["omnifocus"],
	"author": "Otto Automator",
	"identifier": "com.omni-automation.of.tts.tasks-due-today",
	"version": "1.9",
	"description": "Uses the Speech API of Omni Automation to speak the names and due times of the tasks due today, in the order they are due.",
	"label": "Tasks Due Today",
	"shortLabel": "Tasks Due",
	"paletteLabel": "Tasks Due",
	"image": "rectangle.3.group.bubble.left.fill"
}*/
(() => {
	var action = new PlugIn.Action(function(selection, sender){
		// FUNCTION FOR ORDINAL STRINGS: 1st, 2nd, 3rd, 4th...
		function ordinal(n) {
			var s = ["th", "st", "nd", "rd"];
			var v = n%100;
			return n + (s[(v-20)%10] || s[v] || s[0]);
		}
		// GLOBAL VOICE
		var speakerVoice = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
		// CURRENT TIME AND DATE
		date = new Date()
		var currentDateTimeString = Formatter.Date.withFormat('h:mma, EEEE, LLLL d').stringFromDate(date)
		//--> "12:07AM, Wednesday, March 2" (Speech API adds ordinal dates when spoken)
		var openingUtterance = new Speech.Utterance(`It is ${currentDateTimeString}`)
		openingUtterance.postUtteranceDelay = 0.5
		openingUtterance.voice = speakerVoice
		// IDENTIFY TASKS DUE TODAY
		fmatr = Formatter.Date.withStyle(Formatter.Date.Style.Short)
		rangeStart = fmatr.dateFromString('today')
		rangeEnd = fmatr.dateFromString('tomorrow')
		tasksToProcess = flattenedTasks.filter(task => {
			return (
				task.effectiveDueDate > rangeStart &&
				task.effectiveDueDate < rangeEnd &&
				task.taskStatus === Task.Status.DueSoon
			)
		})
		// PROCESS DUE TASK(S)
		if(tasksToProcess.length === 0){
			string = "There are no available tasks due today."
			utterance = new Speech.Utterance(string)
			utterance.voice = speakerVoice
			var utterances = [openingUtterance, utterance]
			var tasksFound = false
		} else {
			// SORT BY TIME DUE
			var tasksFound = true
			tasksToProcess.sort((a, b) => {
				var x = a.effectiveDueDate;
				var y = b.effectiveDueDate;
				if (x < y) {return -1;}
				if (x > y) {return 1;}
				return 0;
			})
			// TASK(S) DUE ANNOUNCEMENT
			taskCount = String(tasksToProcess.length)
			if(taskCount === "1"){
				var textSegments = ["There is one task due today."]
				var alertTitle = "1 Task Due Today"
			} else {
				var textSegments = [`There are ${taskCount} tasks due today.`]
				var alertTitle = `${taskCount} Tasks Due Today`
			}
			// CREATE INFO STRING FOR EACH TASK
			var timeFormatter = Formatter.Date.withFormat('h:mma')
			tasksToProcess.forEach((task, index) => {
				taskName = task.name
				dueDateObj = task.effectiveDueDate
				dueTimeString = timeFormatter.stringFromDate(dueDateObj)
				spokenOrdinalNumber = ordinal(index + 1)
				parentObj = task.parent
				if(parentObj){
					parentProject = parentObj.project
					parentName = parentObj.name
					parentType = (parentObj.project) ? "project" : "task"
					var TTString = `The ${spokenOrdinalNumber} task, ${taskName}, of ${parentType} ${parentName}, is due at ${dueTimeString}.`
				} else {
					var TTString = `The ${spokenOrdinalNumber} task, ${taskName}, is due at ${dueTimeString}.`
				}
				textSegments.push(TTString)
			})
			// CREATE UTTERANCE FOR EACH TASK
			utterances = [openingUtterance]
			textSegments.forEach(string => {
				utterance = new Speech.Utterance(string)
				utterance.voice = speakerVoice
				utterance.rate = Speech.Utterance.defaultSpeechRate
				utterance.postUtteranceDelay = 0.5
				utterances.push(utterance)
			})
		}
		// USE SPEECH API TO SPEAK UTTERANCES
		var synthesizer = new Speech.Synthesizer()
		utterances.forEach(utterance => {
			synthesizer.speakUtterance(utterance)
		})
		if(tasksFound){
			alert = new Alert(alertTitle, "Click “Done” button to stop speaking.")
			alert.addOption("Done")
			alert.show().then(index => {
				synthesizer.stopSpeaking(Speech.Boundary.Word)
			})
		}
	});
	action.validate = function(selection, sender){
		// validation code
		return true
	};
	return action;
})();

Audio and Spoken Alerts

Here’s an example of using both audio and spoken alerts. In this example, an alert sound is played and an alert message spoken if the script user has not previously selected a single task or project:

Spoken and Audio Alerts (OmniFocus)


(async () => {
	var sel = document.windows[0].selection
	var selCount = sel.tasks.length + sel.projects.length
	if(selCount === 1){
		if (sel.tasks.length === 1){
			var selectedItem = sel.tasks[0]
		} else {
			var selectedItem = sel.projects[0]
		}
		// SELECTION PROCESSING
	} else {
		if(app.platformName === "macOS"){Audio.playAlert()}
		alertMessage = "Please select a single project or task."
		utterance = new Speech.Utterance(alertMessage)
		voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
		utterance.voice = voiceObj
		new Speech.Synthesizer().speakUtterance(utterance)
	}
})().catch(err => {
	new Alert(err.name, err.message).show()
})

Read Note of Selected Project|Task

A script for OmniFocus.

Read Note of Selected Project|Task
 

(async () => {
	var sel = document.windows[0].selection
	var selCount = sel.tasks.length + sel.projects.length
	function createUtterance(textToSpeak){
		voiceObj = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
		voiceRate = 0.4
		utterance = new Speech.Utterance(textToSpeak)
		utterance.voice = voiceObj
		utterance.rate = voiceRate
		return utterance
	}
	var synthesizer = new Speech.Synthesizer()
	if(selCount === 1){
		if (sel.tasks.length === 1){
			var selectedItem = sel.tasks[0]
			var objType = "task"
		} else {
			var selectedItem = sel.projects[0]
			var objType = "project"
		}
		var noteString = selectedItem.note
		var objectName = selectedItem.name
		if(noteString && noteString.length > 0){
			utterance = createUtterance(noteString)
			alert = new Alert(`“${objectName}” Note`, "Press “Done” to Stop.")
			alert.addOption("Done")
			synthesizer.speakUtterance(utterance)
			alert.show().then(index => {
				synthesizer.stopSpeaking(Speech.Boundary.Word)
			})
		} else {
			alertMessage = `The ${objType} “${objectName}” does not have any note text.`
			utterance = createUtterance(alertMessage)
			synthesizer.speakUtterance(utterance)
			new Alert("No Note", alertMessage).show()
		}
	} else {
		if(app.platformName === "macOS"){Audio.playAlert()}
		alertMessage = "Please select a single project or task."
		utterance = createUtterance(alertMessage)
		synthesizer.speakUtterance(utterance)
	}
})();

The Declaration of Independence

An example of how to create a stoppable vocalization of a long document:

Creating a Longer Vocalization


// THE STRINGS (SENTENCES) TO BE SPOKEN
strings = [
	"When in the Course of human Events, it becomes necessary for one People to dissolve the Political Bands which have connected them with another, and to assume among the Powers of the Earth, the separate and equal Station to which the Laws of Nature and of Nature’s God entitle them, a decent Respect to the Opinions of Mankind requires that they should declare the causes which impel them to the Separation.",
	"We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the Pursuit of Happiness — That to secure these Rights, Governments are instituted among Men, deriving their just Powers from the Consent of the Governed, that whenever any Form of Government becomes destructive of these Ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its Foundation on such Principles, and organizing its Powers in such Form, as to them shall seem most likely to effect their Safety and Happiness.",
	"Prudence, indeed, will dictate that Governments long established should not be changed for light and transient Causes; and accordingly all Experience hath shewn, that Mankind are more disposed to suffer, while Evils are sufferable, than to right themselves by abolishing the Forms to which they are accustomed.",
	"But when a long Train of Abuses and Usurpations, pursuing invariably the same Object, evinces a Design to reduce them under absolute Despotism, it is their Right, it is their Duty, to throw off such Government, and to provide new Guards for their future Security. Such has been the patient Sufferance of these Colonies; and such is now the Necessity which constrains them to alter their former Systems of Government."
]
// CREATE UTTERANCES FOR EACH STRING
narrator = Speech.Voice.allVoices.find(voice => voice.name.startsWith("Alex"))
utterances = new Array()
strings.forEach(string => {
	utterance = new Speech.Utterance(string)
	utterance.voice = narrator
	utterance.postUtteranceDelay = 1
	utterances.push(utterance)
})
// CREATE SPEECH SYNTHESIZER INSTANCE
synthesizer = new Speech.Synthesizer()
// BEGIN SPEAKING
utterances.forEach(utterance => {
	synthesizer.speakUtterance(utterance)
})
// SHOW ALERT
alert = new Alert("The Declaration of Independence", "Press “Done” to Stop.")
alert.addOption("Done")
alert.show().then(index => {
	synthesizer.stopSpeaking(Speech.Boundary.Word)
})

Shaping the Way the Text is Spoken

To better control the way text is spoken by the computer, you may insert special commands into the text to be spoken. The following are some of the available commands:

Emphasis Command: emph + | -

The emphasis command causes the synthesizer to speak the next word with greater or less emphasis than it is currently using. The + parameter increases emphasis and the - parameter decreases emphasis.

For example, to emphasize the word “not” in the following phrase, use the emph command as follows. Copy the script and run it in an Omni application's Automation Console window.

Emphasis Command


function createUtterance(textToSpeak){
	voiceObj = Speech.Voice.allVoices.find(
		voice => voice.name.startsWith("Alex")
	)
	voiceRate = 0.4
	utterance = new Speech.Utterance(textToSpeak)
	utterance.voice = voiceObj
	utterance.rate = voiceRate
	return utterance
}
synthesizer = new Speech.Synthesizer()
// without the emphasis
utterance = createUtterance("Do not overtighten the screw.")
synthesizer.speakUtterance(utterance)
// with the emphasis
utterance = createUtterance("[[slnc 1000]]Do [[emph +]] not [[emph -]] overtighten the screw.")
synthesizer.speakUtterance(utterance)

NOTE: The emphasis control is more perceptible when used with higher quality voices.
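Because the embedded commands are ordinary text, they can be generated programmatically. The following hypothetical helper, emphasize, wraps the first occurrence of a word in the emph commands (plain JavaScript, no Omni APIs; the resulting string would be passed to a Speech.Utterance):

```javascript
// Hypothetical helper: wrap the first occurrence of a word in a string
// with the [[emph]] embedded commands so the synthesizer stresses it.
function emphasize(text, word) {
	return text.replace(word, `[[emph +]] ${word} [[emph -]]`)
}

emphasize("Do not overtighten the screw.", "not")
//--> "Do [[emph +]] not [[emph -]] overtighten the screw."
```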

Silence command: slnc <32BitValue>

The silence command causes the synthesizer to generate silence for the specified number of milliseconds.

You might want to insert extra silence between two sentences to allow listeners to fully absorb the meaning of the first one. Note that the precise timing of the silence will vary among synthesizers.

The Silence Command


function createUtterance(textToSpeak){
	voiceObj = Speech.Voice.allVoices.find(
		voice => voice.name.startsWith("Alex")
	)
	voiceRate = 0.4
	utterance = new Speech.Utterance(textToSpeak)
	utterance.voice = voiceObj
	utterance.rate = voiceRate
	return utterance
}
synthesizer = new Speech.Synthesizer()
// without the silence and emphasis
utterance = createUtterance("I said no!")
synthesizer.speakUtterance(utterance)
// with the silence and emphasis
utterance = createUtterance("[[slnc 1000]]I said [[slnc 350]] [[emph +]] no! [[emph -]]")
synthesizer.speakUtterance(utterance)
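When speaking several sentences as one utterance, the slnc command can be inserted between them programmatically. The helper below, joinWithPause, is a hypothetical plain-JavaScript sketch:

```javascript
// Hypothetical helper: join a list of sentences into one string with an
// [[slnc]] pause (in milliseconds) inserted between each pair.
function joinWithPause(sentences, ms) {
	return sentences.join(` [[slnc ${ms}]] `)
}

joinWithPause(["First point.", "Second point."], 750)
//--> "First point. [[slnc 750]] Second point."
```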

The Number Mode Command: [[nmbr LTRL]]…[[nmbr NORM]]

The number mode command sets the number-speaking mode of the synthesizer. The NORM parameter causes the synthesizer to speak the number 46 as “forty-six,” whereas the LTRL parameter causes the synthesizer to speak the same number as “four six.“

For example, to make it clear that a number should be read as individual digits, such as a phone extension, you can use the nmbr command to tell the synthesizer to say each digit separately, as follows:

The Number Mode Command


function createUtterance(textToSpeak){
	voiceObj = Speech.Voice.allVoices.find(
		voice => voice.name.startsWith("Alex")
	)
	voiceRate = 0.4
	utterance = new Speech.Utterance(textToSpeak)
	utterance.voice = voiceObj
	utterance.rate = voiceRate
	return utterance
}
synthesizer = new Speech.Synthesizer()
// without the number mode command
utterance = createUtterance("Please call me at extension 1990.")
synthesizer.speakUtterance(utterance)
// with the number mode command
utterance = createUtterance("[[slnc 1000]]Please call me at extension [[nmbr LTRL]] 1990 [[nmbr NORM]].")
synthesizer.speakUtterance(utterance)
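The nmbr wrapping can also be applied automatically to long digit runs. The helper below, literalizeNumbers, is a hypothetical plain-JavaScript sketch that wraps runs of four or more digits (the threshold is an arbitrary choice for illustration):

```javascript
// Hypothetical helper: wrap every run of 4 or more digits in the [[nmbr]]
// commands so long numbers are read digit-by-digit.
function literalizeNumbers(text) {
	return text.replace(/\d{4,}/g, match => `[[nmbr LTRL]] ${match} [[nmbr NORM]]`)
}

literalizeNumbers("Please call me at extension 1990.")
//--> "Please call me at extension [[nmbr LTRL]] 1990 [[nmbr NORM]]."
```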

Archived Apple Reference Materials