PROJECTS: LoRaWAN Beehive Monitor
With this ecosystem in mind, we want to be as unobtrusive as possible. The good news is that beehives have standard dimensions, which means we can design a “one size fits all” solution. If you've ever seen a hive in person, you’re probably familiar with the famous stackable assembly. After consulting a friend who keeps bees, we decided to use the empty hive stand at the bottom to house our electronics, batteries, and load cells (Figure B). Wired sensors are threaded up into the hive to take the relevant measurements.
Classifying Buzzing Signals With Deep Learning
While you can tell a lot about a hive’s health from first-degree data sources like temperature and humidity probes, researchers have shown that you can also extract useful information by listening to the bees themselves. As a proof of concept, we have implemented a CNN that classifies whether or not a hive has a queen by encoding the spectral content of its acoustic signals. Once a robust, labeled dataset is collected (hopefully through the LongHive community), we suspect that we can use a similar pipeline to make other classifications.
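If you want to experiment yourself, here is a minimal sketch of that kind of binary classifier over spectral “images,” assuming TensorFlow/Keras; the 128×128 single-channel input shape and layer sizes are illustrative choices, not our exact architecture:

from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1)):
    # Small CNN: stacked conv/pool blocks, then a sigmoid for queen vs. no queen
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(queen present)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model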
The training dataset was compiled from an open source publication (kaggle.com/chrisfilo/to-bee-or-no-to-bee), where beekeepers recorded their hives and labeled the audio files according to whether or not they had a queen. Because it represents a variety of geographic locations, recording techniques, and background noise, the data is robust and generalizable. We
split the WAV files into 4.5-second segments,
resulting in about 2,000 training samples per
class (queen or no queen).
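A minimal sketch of that segmentation step, assuming the recordings are saved locally as WAV files and using the soundfile library (file paths are placeholders):

import soundfile as sf

SEGMENT_SECONDS = 4.5

def split_wav(path, out_prefix):
    audio, sr = sf.read(path)           # audio samples and sample rate
    step = int(SEGMENT_SECONDS * sr)    # samples per 4.5-second segment
    # Write each full-length segment as its own training sample
    for i, start in enumerate(range(0, len(audio) - step + 1, step)):
        sf.write(f"{out_prefix}_{i:04d}.wav", audio[start:start + step], sr)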
In a purely temporal domain, these acoustic signals are not easily separable, as it is difficult (for a DL model) to differentiate audio of differing amplitudes and background noise. Mel spectrograms are commonly used for audio classification, as they extract relevant spectral information from the time-series signals into an image (Figure C), allowing us to take advantage of mature CNN-based techniques. The X-axis is time, the Y-axis is frequency, and the color is the intensity of the signal at that time and frequency.
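As a sketch of that transform, assuming the librosa library (the FFT, hop, and mel-band parameters here are illustrative, not our exact settings):

import librosa
import numpy as np

def mel_spectrogram(path, n_mels=128):
    audio, sr = librosa.load(path, sr=None)         # raw audio signal A(t)
    audio = audio / (np.max(np.abs(audio)) + 1e-9)  # peak-normalize amplitude
    mel = librosa.feature.melspectrogram(y=audio, sr=sr,
                                         n_fft=2048, hop_length=512,
                                         n_mels=n_mels)
    # Log scaling makes the "image" friendlier for a CNN
    return librosa.power_to_db(mel, ref=np.max)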
Figure B: Replace this Arduino Uno with the ST LoRa Discovery Kit board, which has the same form factor. (Photo: Nathan Pirhalla)
Figure C: To increase the separability of the dataset, we transform the raw audio signals (top row: Normalized Audio Signal, A(t)) into the time-frequency domain. Here we see the spectrogram outputs (bottom row: Mel Spectrogram, M(A(t)); left to right) for hives with a queen, hives with no queen, and a control case with no bees. X-axes: Time (s).
Figure: Data distillation, representing a vast database in a single byte. The training dataset (3.41GB) is distilled on a desktop computer into a pre-trained TensorFlow Lite interpreter (568KB) running on the Raspberry Pi 3B, whose model classification (1 byte) is passed to the LoRaWAN microcontroller.
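On the Pi side, that inference step might look like the following sketch, assuming the tflite-runtime package; the model filename and input shape are placeholders:

import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(spectrogram):
    # spectrogram: 2D float32 array; add batch and channel dimensions
    x = spectrogram.astype(np.float32)[None, ..., None]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    prob = interpreter.get_tensor(out["index"])[0][0]
    return bytes([int(prob > 0.5)])  # the single byte handed to the LoRa board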