expensive, attempting to run ML on such devices
may seem silly. However, there are many
potential uses of ML on embedded systems.
One prevalent use of ML on microcontrollers
(TinyML) includes wake word detection, which is
also known as keyword spotting. For example,
if you say “Alexa” or “Hey Siri,” your phone or
nearby smart speaker may come to life, waiting
for further instructions (Figure J). The smart
speaker uses two types of machine learning. The
first kind is TinyML, where inference is performed
locally in the speaker’s microcontroller to listen
for that wake word. Once the speaker hears the
wake word, it begins streaming the subsequent
audio to an internet-connected server, which
performs a much more complex machine
learning process known as natural language
processing (NLP) to figure out what you’re asking.
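The two-stage pipeline described above can be sketched in a few lines of Python. Note that `detect_wake_word` and the function names here are illustrative placeholders, not a real speaker's API; an actual device would run a small neural network over audio features on the microcontroller.

```python
# Illustrative sketch of a wake-word (keyword spotting) pipeline,
# assuming audio arrives as a sequence of frames. detect_wake_word()
# stands in for a tiny on-device model; everything after the wake
# word would be streamed to a server for NLP.

def detect_wake_word(audio_frame, keyword="alexa"):
    # Placeholder: a real implementation runs a small classifier
    # over spectrogram features of the frame, entirely on-device.
    return keyword in audio_frame

def handle_audio(frames):
    """Run cheap local inference until the wake word fires, then
    hand the remaining audio to the server-side NLP stage."""
    for i, frame in enumerate(frames):
        if detect_wake_word(frame):
            # Only audio *after* the wake word leaves the device.
            return frames[i + 1:]
    return []  # wake word never heard; nothing is streamed

print(handle_audio(["music", "alexa", "what time is it"]))
# → ['what time is it']
```

The key design point is that the expensive step (NLP) only runs after the cheap, always-on local model fires, so no audio leaves the device until the keyword is detected.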
In addition to speech, we can use machine
learning to change the way we interact with
electronics. For example, makers Salman Faris
and Suhail Jr. created smart glasses for the
visually impaired that would take a picture and
tell the wearer what it saw through headphones
(hackster.io/makergram/sight-for-the-blind-
c1e1b9) (Figure K). We could also use motion
sensors and TinyML to detect and classify
gestures, giving us the ability to translate sign
language or perform actions by drawing shapes
in the air with a wand.
In government and business, TinyML has the
potential to complement Internet of Things (IoT)
ecosystems. With a traditional machine learning
architecture, networked sensors would need to
stream data back to a central server for analysis
and inference. Imagine trying to send sound or
image data from tens or hundreds of sensors
to a server. This setup could easily consume all
of the available bandwidth in a network. Using
embedded machine learning, we could have each
sensor classify patterns and send the final results
to the server, thus freeing up network bandwidth.
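The bandwidth argument above is easy to quantify. In this sketch the classifier is a stand-in threshold rule, and the sample rate and sizes are assumptions chosen for illustration:

```python
# Illustrative comparison: streaming raw sensor data vs. sending only
# an on-device classification result. Assumes 16-bit samples at 16 kHz
# (one second per window); the "model" is a placeholder threshold rule.

RAW_SAMPLE_BYTES = 2          # 16-bit audio samples
SAMPLES_PER_WINDOW = 16000    # one second at 16 kHz

def classify_window(samples):
    # Placeholder: a real sensor node would run a trained classifier.
    return "loud" if max(samples) > 100 else "quiet"

def payload_sizes(samples):
    """Bytes sent per window: raw streaming vs. on-device inference."""
    raw = len(samples) * RAW_SAMPLE_BYTES
    label = len(classify_window(samples).encode("utf-8"))
    return raw, label

raw, label = payload_sizes([0] * SAMPLES_PER_WINDOW)
print(raw, label)  # → 32000 5
```

Per sensor, a 32 KB-per-second raw stream shrinks to a few bytes per result, which is why local inference scales to hundreds of nodes where raw streaming does not.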
For example, the Prague Public Transit
Company (DPP) has announced a partnership
with Czech-based Neuron Soundware to produce
audio sensors that listen to the sounds made
by each of the 21 escalators in Prague’s metro
system. The sensors will employ machine
learning models to flag anomalous sound patterns
and determine whether parts of the escalator
require maintenance. This
approach is similar to a car mechanic listening to
engine sounds to diagnose a problem.
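One simple way to frame this kind of acoustic anomaly detection is to learn what "normal" sounds like and flag windows that deviate from it. The statistics and threshold below are assumptions for illustration, not DPP's or Neuron Soundware's actual method:

```python
# Illustrative anomaly detector: learn the typical RMS energy of sound
# windows during normal operation, then flag windows whose energy sits
# far outside that baseline. Real systems use richer features than RMS.

import statistics

def rms(window):
    """Root-mean-square energy of one window of audio samples."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def build_baseline(normal_windows):
    """Mean and standard deviation of energy under normal running."""
    energies = [rms(w) for w in normal_windows]
    return statistics.mean(energies), statistics.stdev(energies)

def is_anomalous(window, baseline, threshold=3.0):
    """Flag a window more than `threshold` standard deviations out."""
    mean, stdev = baseline
    return abs(rms(window) - mean) > threshold * stdev

# Baseline from quiet, regular escalator sounds (toy data).
normal = [[1, -1, 1, -1], [2, -2, 2, -2], [1, -2, 1, -2]]
baseline = build_baseline(normal)
print(is_anomalous([50, -50, 50, -50], baseline))  # → True
```

The appeal for embedded deployment is that the baseline and the per-window check are tiny computations, so each sensor can run them locally and transmit only the occasional "needs maintenance" alert.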
Machine learning can help classify or identify
patterns in almost any type of data. As a result,
we can use motion and physiological data
captured from body-worn sensors to assist with
workouts and help predict potential issues. While
a GPS unit doesn’t need machine learning to
tell us how far we ran, what could we use
to evaluate our jump shot in basketball? A
combination of motion sensors and machine
learning has the potential to provide such real-
time feedback. Additionally, sensor suites like the
EmotiBit (emotibit.com) can determine our stress
level (Figure L, following page). Coupled with
machine learning, this type of data could be used
to classify our current emotional state or predict
panic attacks before they occur.
makezine.com
Figures J and K (photos: Shawn Hymel, Salman Faris)