Of course, translation is much more complex, so these machine learning techniques include processing large volumes of highly structured, annotated language data to develop models that recognize the relationships among the elements of speech. For the speech app, these language components are the automated speech recognizer, the machine-translation engine and the text-to-speech synthesizer.
A microphone captures speech, and the automated speech recognizer turns it into text data by using a probabilistic model that finds the most likely match between the captured speech and the language patterns the machine has learned. After the speech is converted to text, the text shows on the display so the user can decide whether it is correct. This is how the Google Assistant, Apple’s Siri and others “understand” you when you ask them to find the next nearest gas station, or when you ask Amazon’s Alexa to play a specific song from your music library.
The automated speech recognizer doesn’t complete the requested translation task on its own; its job is done when it passes the recognized speech, as text, to the next process, machine translation.
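To make “most likely match” concrete, here is a toy sketch of the idea. This is not MFLTS code: the candidate transcripts and probabilities are invented for the example, and a real recognizer scores vastly more hypotheses, but the selection step is the same kind of probabilistic best-match choice described above.

```python
import math

# Toy illustration (not MFLTS code): a recognizer scores candidate
# transcripts for one utterance and keeps the most probable one.
# The hypotheses and probabilities below are invented for the example.
candidates = {
    "where is the nearest gas station": {"acoustic": 0.62, "language": 0.30},
    "where is the nearest bus station": {"acoustic": 0.55, "language": 0.25},
    "wear is the near est gas station": {"acoustic": 0.58, "language": 0.01},
}

def log_score(probs):
    # Combine acoustic and language-model evidence in log space,
    # as probabilistic decoders typically do to avoid underflow.
    return math.log(probs["acoustic"]) + math.log(probs["language"])

best_text = max(candidates, key=lambda text: log_score(candidates[text]))
print(best_text)  # -> "where is the nearest gas station"
```

The third hypothesis sounds almost identical to the first, but its language-model probability is tiny, which is why the recognizer settles on the sensible transcript.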
A SIMPLE SETUP
MFLTS’ apps are available for the smartphone-like Nett Warrior device and for download to use on a laptop, as in this configuration, which also includes a scanner to input documents for text translation and a microphone for speech translation. (Photos by Tracy Blocker, MFLTS Product Office)
CORE PROCESSES
Inside a language pack, the machine-translation engine is the component that performs the “magic” of the actual translation. Like the automated speech recognizer, a machine-translation engine uses probabilistic models that are trained using dual-language sets of data developed with the expertise of people fluent in both languages in a language pair—for example, English and Arabic.
Not surprisingly, developing machine-translation engines for unusual pairs of languages is often very labor-intensive and expensive because of the scarcity of data and linguists proficient in both languages. As developers of the automated speech recognizer’s model have done, engineers and scientists who are creating machine-translation models rely on techniques for model training that are a combination of science and art.
Machine-translation probabilistic models find the best match between the source and target languages and, like the automated speech recognizer, then provide text output in the target language to a speech synthesizer and to a display, using the target language’s character set.
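A rough sense of how dual-language training data turns into a “best match” can be given with a deliberately tiny sketch. This is not the MFLTS engine: the aligned phrase pairs below are invented, and real systems model whole sentences with far richer statistics, but the relative-frequency idea is the same.

```python
from collections import Counter

# Toy illustration (not the MFLTS engine): estimate phrase-translation
# probabilities from a tiny aligned English/Arabic-transliteration corpus,
# then pick the most probable translation of a source phrase.
# The aligned pairs below are invented for the example.
aligned_pairs = [
    ("thank you", "shukran"),
    ("thank you", "shukran"),
    ("thank you", "shukran jazilan"),
    ("where", "ayna"),
]

counts = Counter(aligned_pairs)
source_totals = Counter(src for src, _ in aligned_pairs)

def translation_prob(src, tgt):
    # P(target | source) estimated by relative frequency in the corpus.
    return counts[(src, tgt)] / source_totals[src]

def best_translation(src):
    options = {tgt for s, tgt in counts if s == src}
    return max(options, key=lambda tgt: translation_prob(src, tgt))

print(best_translation("thank you"))  # -> "shukran"
```

The scarcity problem mentioned above shows up directly here: with too few aligned pairs, the estimated probabilities are unreliable, which is why rare language pairs are so expensive to build.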
Finally, for the speech app, the text-to-speech part of the app is a synthesis program that produces audible speech in the target language. Like the automated speech recognizer and the machine-translation engine, the text-to-speech component relies on extensively trained speech-synthesizing models that provide the text-to-speech conversion.
After receiving the text from the machine-translation engine, the text-to-speech function converts the text into spoken language by putting together words or phrases from recorded speech of the target language. MFLTS then plays the text-to-speech content on the internal or external speaker of, typically, a smartphone.
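The “putting together words or phrases from recorded speech” step can be sketched as simple concatenation of audio clips. This is only an illustration with silent placeholder clips; a production synthesizer also smooths the joins, shapes prosody and falls back to smaller units when a recording is missing.

```python
import numpy as np

# Toy illustration (not MFLTS code): concatenative synthesis glues together
# pre-recorded audio clips for each word of the target-language text.
SAMPLE_RATE = 16_000

# Stand-ins for recorded clips; in practice these would be loaded from files.
recorded_clips = {
    "ayna": np.zeros(int(0.4 * SAMPLE_RATE), dtype=np.float32),
    "al-mustashfa": np.zeros(int(0.6 * SAMPLE_RATE), dtype=np.float32),
}

def synthesize(words, gap_seconds=0.05):
    # Append each word's recording followed by a short pause.
    gap = np.zeros(int(gap_seconds * SAMPLE_RATE), dtype=np.float32)
    pieces = []
    for word in words:
        pieces.append(recorded_clips[word])  # KeyError if no recording exists
        pieces.append(gap)
    return np.concatenate(pieces)

audio = synthesize(["ayna", "al-mustashfa"])
print(f"{len(audio) / SAMPLE_RATE:.2f} seconds of audio ready for the speaker")
```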
Working together, these language components are the “brain” of MFLTS, appearing to hear, understand and translate English into another language.

Let’s say that a Soldier has just translated speech from an Arabic-speaking local. Now the Soldier needs to reverse the process, translating from English to Arabic. That’s no problem; it’s why the MFLTS language packs always travel in pairs.
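The pairing works because each language pack wires one direction of the recognize-translate-speak pipeline end to end. Below is a minimal sketch of that idea, assuming a simple pack abstraction; the class, field and function names are invented for illustration and are not the MFLTS software interface.

```python
# Toy illustration (names invented): a language pack wires one translation
# direction; packs are installed in pairs so either speaker can be understood.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LanguagePack:
    source: str
    target: str
    recognize: Callable[[bytes], str]  # speech audio -> source text
    translate: Callable[[str], str]    # source text  -> target text
    speak: Callable[[str], bytes]      # target text  -> target speech audio

    def run(self, audio: bytes) -> bytes:
        text = self.recognize(audio)
        translated = self.translate(text)
        return self.speak(translated)

# Placeholder stages stand in for the real recognizer, engine and synthesizer.
en_to_ar = LanguagePack("en", "ar", lambda a: "...", lambda t: "...", lambda t: b"...")
ar_to_en = LanguagePack("ar", "en", lambda a: "...", lambda t: "...", lambda t: b"...")
packs = {("en", "ar"): en_to_ar, ("ar", "en"): ar_to_en}

# Pick the pack that matches whoever is speaking at the moment.
reply_audio = packs[("ar", "en")].run(b"captured microphone audio")
```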