© Joseph Faisal Nusairat 2020
J. F. NusairatRust for the IoThttps://doi.org/10.1007/978-1-4842-5860-6_9

9. Sense HAT

Joseph Faisal Nusairat1 
(1)
Scottsdale, AZ, USA
 

In the last chapter, we started to have the Pi communicate with our backend servers, running a more advanced version of “Hello World”. In this chapter, we are going to build on this existing application by adding and using a set of hardware components. After all, one of the biggest selling points of the Raspberry Pi is its hardware extensibility: being able to add on sensors, cameras, or even custom components that communicate with the board, and with the GPIO specifically. We will be using a few of these components, but to start, I want us to use a component that gives us an all-in-one board, the Sense HAT. We will interact with this board throughout the chapter, gathering the temperature from it as well as using it as the basis for our future command interactions with the board. In addition, we will integrate it with our login authorization flow.

Goals

For this chapter, we are not going to go over every feature; we are only going to use three of them for interacting with our application: the LED matrix, the joystick, and the temperature sensors. We will use the LED for textual and warning displays: text for the temperature and the login flow, and warning symbols for connectivity or other problems.

Our goals when finished will be to have a functional board with Sense HAT that has the following capabilities:
  • Able to calculate the temperature.

  • Display the temperature when the user clicks the center of the joystick.

  • Use the display to show the device code to log in with.

  • Display a question mark when we have MQTT connectivity issues.

  • Display a holiday image for Christmas and Halloween.

Hardware

In the previous chapter, we added a basic running heartbeat to the application. This was fairly simple, and much of our time was spent making sure it could compile and deploy to the Pi. Our advanced Hello World started the Pi app and periodically sent a heartbeat. In this chapter, we plan to complicate things quite a bit more, and that will require us to use a new peripheral, the Sense HAT, and to add some more complicated code. This chapter will build a new application for the Raspberry Pi. We are going to keep the heartbeat separate in its own process; this will be an entirely new application, with both running on the Raspberry Pi. In Chapter 11, we will discuss how to have the heartbeat and this application communicate with each other.

The Sense HAT is an all-in-one board that can be affixed to the top of your Pi, taking up all 40 of the GPIO pins, which provide the complete interface to the board as well as its power. The board is unique in that it has quite a few chips on it, giving us a number of different features to detect the world around us. The chipsets can determine these:
  • Air pressure

  • Humidity

  • Temperature (collocated on the air pressure and humidity sensors)

  • Gyroscope – The orientation of the Pi and if it is changing

  • Magnetometer – To measure magnetic forces

  • Accelerometer – To measure the movement and speed of the Pi

All these combined can provide quite a bit of interactivity to measure your outside world. In fact, Raspberry Pis with Sense HATs were used aboard the International Space Station to conduct experiments. The board also allows some interaction and communication with the user; to that end, we have the following:
  • 8x8 LED matrix display

  • Five-button joystick

The device is easy to install and somewhat easy to use in isolation; however, we want to use it as a system, which requires simultaneous use of and multiple interactions between components. The goal for this section is to attach the Sense HAT and, using a few crates, control the temperature, LED, and joystick. We will make more use of the joystick in the following chapters, but for now, we will keep it basic. One of the big challenges for this chapter is to run multiple background processes while still allowing input from the joystick:
  • Daily displays of the temperature

  • Display of holiday lighting

For this chapter, I have tweaked the functionality of many of the crates. Quite a few of them have not been updated in years, but that is mostly because the underlying code to interact with the sensors has not changed either. I do hope to merge some of my changes back to their parent repositories and will modify the code online when I do.

All of this needs to happen while still allowing for joystick control. Remember that Rust is a memory-safe language, so we can’t simply pass the LED and Atmospheric structs to multiple threads run by different modules; that simply wouldn’t work. Instead, we will use multi-producer, single-consumer channels to run all of our logic. This gives us multi-threaded capabilities without multiple threads trying to own the same memory. But that part is down the road a bit; let’s start with installing the Sense HAT.

Install

The Sense HAT is a somewhat brilliant all-in-one board, designed specifically for the Raspberry Pi, that gives us many features in one compact, inexpensive package. Before we dive into the board features, let’s get it unboxed and installed. In Chapter 1, I gave you a link to a Sense HAT you can purchase from Amazon; if you’ve forgotten it, the URL for the board is http://amzn.com/B014HDG74S (and it is the board I used for this chapter).

Once you have it, let’s open it up; in Figure 9-1, we have a picture of the unboxing and all the parts.
../images/481443_1_En_9_Chapter/481443_1_En_9_Fig1_HTML.jpg
Figure 9-1

Shows the unboxing of our Sense HAT

This contains a manual, the board itself, and the spacers used to attach it to the main board. The Sense HAT attaches to your GPIO header; however, it can’t attach directly because of the space taken by the chips and sensors already on the board. That is why the kit contains spacers for both the GPIO and the board. Start by attaching the GPIO extender to your board like in Figure 9-2.
../images/481443_1_En_9_Chapter/481443_1_En_9_Fig2_HTML.jpg
Figure 9-2

Board with the GPIO extender

Now we will install the Sense HAT on top of the board, but first let’s attach the spacers; if we don’t, the board will be unstable, and you risk bending the pins or worse. Screw the spacers into the Sense HAT at the four corners, and then attach the board on top. In Figure 9-3, we have the complete board assembled and powered on.
../images/481443_1_En_9_Chapter/481443_1_En_9_Fig3_HTML.jpg
Figure 9-3

The board attached and started up

One final thing before powering it up: take out the SD card you had in before and go back to config.txt. You will have to uncomment the line
# uncomment if hdmi display is not detected and composite is being output
hdmi_force_hotplug=1

With the line uncommented, you will be able to turn on the Pi without a display attached. Now you can attach the board and turn it on; it will light up all the LEDs on boot and then turn them off. If you haven’t altered the preceding config, you will need to attach an HDMI monitor, or the lights will just stay on and it won’t finish the boot process. Once the LEDs go off, the board is ready to log onto.

Sensors

The sensors on the board are run by an Atmel chip that operates on the i2c (pronounced eye-squared-cee) protocol, which requires us to code against that protocol to work correctly. Thus, all the sensors we have will be communicating on the same protocol. This helps with debugging, because it allows us to run commands against the board directly from the shell to check its status. This is a standard protocol on a bus that the Raspberry Pi uses to speak to other embedded devices, and the same logic can be applied to other attached sensors as well. The i2c is a two-wire bus that has a serial data line (SDA) and a serial clock (SCL). Your Pi can contain multiple i2c buses, each with one or more primaries and secondaries. Because the lines are shared between multiple secondaries, each device attached will have a specific address that it communicates on.1 Those addresses vary by the type of device we attach; in Table 9-1, I list the addresses for all of the sensors on the Sense HAT.
Table 9-1

Sensors and chipsets on the board

Name                    | Sensor   | Address
Accelerometer/gyroscope | LSM9DS1  | 0x6a (0x6b)
Magnetometer            | LSM9DS1  | 0x1c (0x1e)
Pressure                | LPS25H   | 0x5c
Humidity                | HTS221   | 0x5f
LED matrix              | LED2472G | 0x46

The addresses are all documented on the Sense HAT pinout page at https://pinout.xyz/pinout/sense_hat. It’s a good overview if you want details of what the circuits are doing, and it is where I got some of my information from.

Since we are using the Raspbian Buster Lite OS, we will need to install drivers for the Sense HAT; these normally come preinstalled with the image if you used the full Buster OS. We also need to install some tools that let us make sure the Pi and the HAT are communicating properly, which is also good for debugging. One of the easiest ways to perform debugging is to examine the i2c bus. In Listing 9-1, after logging on to the board, we install the Sense HAT libraries needed as well as a tool that will help us communicate via the i2c protocol. (Note: The install is very verbose, and I’ve shortened it for brevity.)
pi@raspberrypi:~ $ sudo apt-get update ①
Get:1 http://archive.raspberrypi.org/debian buster InRelease [25.1 kB]
Get:2 http://raspbian.raspberrypi.org/raspbian buster InRelease [15.0 kB]
...
pi@raspberrypi:~ $  sudo apt-get install -y sense-hat ②
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
...
pi@raspberrypi:~ $ sudo apt install -y i2c-tools ③
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libi2c0 read-edid
  ...
Processing triggers for man-db (2.8.5-2) ...
Processing triggers for libc-bin (2.28-10+rpi1) ...
Listing 9-1

Install i2c tools on the board

  • ➀ Downloads package information for all configured sources; without it, the next two steps may fail.

  • ➁ Installs necessary libraries for communication with the Pi.

  • ➂ Installs tools to debug communication between the Pi and the HAT.

Now that we have it installed, go ahead and run the command i2cdetect -y 1. The -y indicates we want non-interactive mode, and the 1 tells it which i2c bus to use. The Raspberry Pi board only has two i2c buses: one of them is on the GPIO, and the other is on the P5 header, where you’d have to solder onto the header to use it. Since we attached the Sense HAT to the GPIO, we are using i2c-1. In Listing 9-2, we run the command on the Raspberry Pi.
ubuntu@ubuntu:~$ sudo i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- 1c -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- UU -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- 5c -- -- 5f
60: -- -- -- -- -- -- -- -- -- -- 6a -- -- -- -- --
70: -- -- -- -- -- -- -- --
Listing 9-2

Running i2cdetect on the board

You might notice a few things pop out if you remember the addresses we just went over. Notice the 1c, 6a, and 5f; those are our magnetometer, accelerometer, and humidity sensor, respectively. What we should also have is a 5c, but where row 50 and column c meet, you may see UU instead (I tried with three boards, and my Pi 3 was inconsistent in showing it). The UU indicates a busy, in-use state. Just be aware, if you do see a UU, you won’t be able to use that sensor for the temperature reading; that will become important when we do the temperature calculations, to know which sensors we have available.

Note

There are other debugging tools that help visualize your bus. “lsmod | grep i2c” will give you an output:

st_pressure_i2c        16384  0
st_magn_i2c            16384  0
st_magn                20480  2 st_magn_i2c,st_magn_spi
st_pressure            20480  2 st_pressure_i2c,st_pressure_spi
st_sensors_i2c         16384  2 st_pressure_i2c,st_magn_i2c
st_sensors             28672  6 st_pressure,st_pressure_i2c,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi
industrialio           90112  9 st_pressure,industrialio_triggered_buffer,st_sensors,st_pressure_i2c,kfifo_buf,st_magn_i2c,st_pressure_spi,st_magn,st_magn_spi

This tells you all the sensors there are available; in addition, you can run “sudo i2cdump 1 [address]” to query the state of individual settings on your device.

If you want more details on how the Sense HAT and Pi systems work, there is quite a bit of documentation out there; I just wanted to give you enough of an overview that, as we code going forward, you will know where certain features come from. For now though, we are going to move on to setting up interactions with the LED and the temperature sensors. We will perform these operations by using a variety of crates to control the sensors and display, writing small wrappers around them for the individual functions we need.

LED Display

First up on our list is the LED screen. The LED screen is the big 8x8 grid of multi-colored LEDs we saw light up when we started the Pi. Since we aren’t using a standard screen for our application, this LED display is going to become very important for displaying our interactions with the user. I also wanted to use the device to have a bit of fun by displaying different emblems for Christmas and Halloween. Let’s start by going over everything I want our display to show, and then we can talk about how to do it. In Table 9-2, I have a listing of the different types of displays we are going to code and the corresponding function that will encapsulate the logic.
Table 9-2

Displays we will create on the Sense HAT

Name           | Function       | Description
Blank          | blank          | Blank out the screen to clear it after we have displayed any symbols or sequences.
Question mark  | question       | A question mark to display in case of an error.
Processing     | processing     | Run through a progress screen that shows sequential blocks, to be used while waiting for a response from another system.
Symbols        | display_symbol | Used to display an 8x8 LED “image” to the screen; this will be a predefined multi-color output.
Text display   | display        | Will output a set of text, one letter at a time, with a predefined wait of 800 ms.
Text scrolling | scroll_text    | Also displays a set of text, but instead of shifting a whole letter at a time, this will scroll the text through.

That is quite a few functions to implement, but together they cover all our use cases for displaying text and anything else to the screen. To start with, we will be making use of the sensehat-screen crate (https://github.com/saibatizoku/sensehat-screen-rs). We will also incorporate the features necessary for performing various textual displays to the screen. In Listing 9-3, I add the crate as well as apply the features for displaying and scrolling our text.
[dependencies]
sensehat-screen = { version = "0.2", default-features = false, features = ["fonts", "linux-framebuffer", "scroll"] }
Listing 9-3

Adding sensehat-screen to our Pi application

We are also adding the features for controlling fonts and scrolling; these are both needed in order to display static and scrolling text. In addition, the linux-framebuffer feature is how we are going to write to the LED. There are other features, like rotate and clip, that we aren’t using, so I didn’t include them, but you can go to the site and add extra features for your own application.

The screen crate exposes a self-contained module via the sensehat_screen::Screen struct. We will be wrapping this and implementing the methods we mentioned earlier to interact with the screen. The Screen itself is a high-level API over the Linux framebuffer: it opens the framebuffer’s file descriptor in order to connect and write to the LED matrix. From there, it will just be writing our input. Input comes in the form of the FrameLine struct, which contains the raw bytes needed for the buffer; the Screen takes the FrameLine information and writes it to the LED matrix. We will be converting Unicode to bytes directly when we create our holiday pictures; in other situations, we will use wrappers provided by the crate that convert text to raw bytes without us having to create our own font catalog.

Frames

The 8x8 LED display shows a color on each cell of the matrix in a 16-bit RGB565 color representation; this basically gives you the color palette you had in your old Atari Lynx (yes, very old school reference), but obviously not as tight a pattern. We are going to send the framebuffer an 8x8 set of RGB colors. This translates not to an actual two-dimensional array but to a flat u8 array that splits each LED’s 16-bit color into two bytes. Thus, you will have a 128-element array of u8 types ([u8; 128]), which is 8 x 8 x 2: two bytes for each color, one pair per LED in the matrix. For our “images” like the pumpkin and Christmas tree and any other static display, we will set up the constants in a multi-line format, so visualizing them is easier.
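To make that layout concrete, here is a minimal smoke test of the Screen and FrameLine API described above, using the framebuffer path we rely on later in the chapter; it fills the whole matrix with red for a moment and then blanks it. (This snippet is mine, not from the book’s repository.)
use sensehat_screen::{FrameLine, Screen};
use std::thread;
use std::time::Duration;

fn main() {
    let mut screen = Screen::open("/dev/fb1").expect("framebuffer not found");
    // RGB565 red is 0xF800, written low byte first: [0x00, 0xF8] per LED.
    let red: Vec<u8> = [0x00, 0xF8].iter().cloned().cycle().take(128).collect();
    screen.write_frame(&FrameLine::from_slice(&red));
    thread::sleep(Duration::from_secs(2));
    // Blank the matrix again with an all-zero frame.
    screen.write_frame(&FrameLine::from_slice(&[0u8; 128]));
}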

We are going to feed the framebuffer an 8x8 array of colors by converting the hexadecimal representation of each color into bytes. The individual pixels are a 16-bit RGB color representation. One challenge is coming up with the colors to use; most online color pickers use RGB888, the standard for website CSS, while the LED matrix uses RGB565 instead. There are a few sites with RGB565 color pickers that make it easy to come up with the colors.
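If you want to compute the values yourself, the conversion is simple bit masking; here is a small sketch (this helper is mine, not from any crate) that converts a 24-bit RGB888 color into the two-byte RGB565 pair the framebuffer expects:
// Keep the top 5 bits of red, 6 of green, and 5 of blue, then emit the
// 16-bit value low byte first, matching the reversed pairing in Listing 9-4.
fn rgb888_to_rgb565_bytes(r: u8, g: u8, b: u8) -> [u8; 2] {
    let rgb565: u16 = ((r as u16 & 0xF8) << 8)
        | ((g as u16 & 0xFC) << 3)
        | ((b as u16) >> 3);
    rgb565.to_le_bytes()
}

fn main() {
    // Pure green (0x00FF00 in RGB888) becomes 0x07E0 in RGB565,
    // stored as [0xE0, 0x07] -- the pair used for the tree below.
    assert_eq!(rgb888_to_rgb565_bytes(0x00, 0xFF, 0x00), [0xE0, 0x07]);
}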
In Listing 9-4, I have a 128-byte u8 representation of a Christmas tree that we will display during the month of December preceding Boxing Day.
// Christmas Tree
pub const CHRISTMAS_TREE: [u8; 128] = [
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xE0, 0x07, 0xE0, 0x07, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0x00, 0x00,
    0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07,
    0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07, 0xE0, 0x07,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x61, 0x80, 0x61, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x61, 0x80, 0x61, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
];
Listing 9-4

A constant for a Christmas tree, file is in led/mod.rs

This will produce a tree with a light at the top and green throughout. You will notice in the second line we have 0xE0, 0x07, 0xE0, 0x07; this represents two LEDs of the RGB565 color 0x07E0, which is green. You may do a double take on the input and notice the byte pairing is reversed from the color value. That is not a typo; the bytes are written low byte first, so keep that in mind when adding colors. In Figure 9-4, you can see the output this will create.
../images/481443_1_En_9_Chapter/481443_1_En_9_Fig4_HTML.jpg
Figure 9-4

A Christmas tree being displayed on the Sense HAT

This is a black-and-white book, so the color variants may not show up here, but they will once you run the code on the board. In addition, I’ve also added some yellow boxes we will use for a concentric box display (for a processing image) and a pumpkin that we can use for the month of October. The code is in the repository; I have not included it here, as I doubt any of you are going to copy it line by line.

But now that we have an understanding of it, let’s get back to the code. As I said, text is easy, and images aren’t. We are going to add a FrameProcessor struct that will serve as an easy cheat for displaying a few items:
  • Our set of yellow concentric squares

  • A question mark

  • An off frame which will be translated as a buffer of [0x00; 128] for our blank display

The easiest way to create a FrameLine from a set of RGB colors is to create the 128-byte slice. From there, we can call FrameLine::from_slice, passing in the slice, to create the FrameLine struct. Let’s look at this code in Listing 9-5.
struct FrameProcessor {
    off_frame: FrameLine,
    yellow_squares: [FrameLine; 4],
    question_mark: FrameLine,
}
impl FrameProcessor {
    fn new() -> FrameProcessor {
        let ys = [ ①
            FrameLine::from_slice(&super::YELLOW_SMALL),
            FrameLine::from_slice(&super::YELLOW_MED),
            FrameLine::from_slice(&super::YELLOW_LARGE),
            FrameLine::from_slice(&super::YELLOW_XL),
        ];
        // Question Mark
        let white_50_pct = PixelColor::WHITE.dim(0.5); ②
        let q_mark = FONT_COLLECTION.get('?').unwrap();
        FrameProcessor {
            off_frame: FrameLine::from_slice(&super::OFF), ③
            yellow_squares: ys,
            question_mark: font_to_frame(&q_mark.byte_array(), white_50_pct), ④
        }
    }
}
Listing 9-5

The code from the FrameProcessor ; file is in led/screen.rs

  • ➀ Creates an array containing all the individual yellow squares.

  • ➁ For use with the question mark, we want the display of it to be only 50% brightness.

  • ➂ Initializes our new frame starting with the OFF frame from the slice we defined in led/mod.rs.

  • ➃ The question mark uses font_to_frame from the sensehat_screen to convert the font to a FrameLine.

You will notice that this struct is not public; it’s intended to be used only by the LedControls struct that we are creating next, which will use these frames when writing to the LED matrix.

LED Controls
Now that we have the processing for a few of our image use cases and understand how we display with the framebuffer, let’s turn our attention to the code that displays to the LED. We are going to create a LedControls struct that wraps all our calls to the Screen and FrameProcessor and drives all the interactions from the outside modules to the LED display. In Listing 9-6, we create that struct.
use sensehat_screen::{font_to_frame, PixelColor, Screen, FrameLine, FONT_COLLECTION, Scroll}; ①
use std::thread;
use std::time::Duration;
const LED_DEV_PATH: &str = "/dev/fb1"; ②
pub struct LedControls {
    screen: Screen, ③
    frame: FrameProcessor
}
// Clone is needed because our authenticator's delegate trait is declared
// as `pub trait FlowDelegate: Clone`, so its constituent parts have to be
// Clone. This is needed for our Authenticator.
impl Clone for LedControls { ④
    fn clone(&self) -> Self {
        LedControls {
            screen: Screen::open(LED_DEV_PATH).unwrap(),
            frame: FrameProcessor::new()
        }
    }
}
impl LedControls {
    pub fn new() -> LedControls { ⑤
        LedControls {
            screen: Screen::open(LED_DEV_PATH).unwrap(),
            frame: FrameProcessor::new()
        }
    }
}
Listing 9-6

Creating the struct for LedControls; file is in led/screen.rs

  • ➀ Use all the structs from the crate that are needed by our application.

  • ➁ The LED Path; this is the default path to talk to the file descriptor; this should be the same on your boards as well.

  • ➂ Our struct has two properties that we need to instantiate before using.

  • ➃ The Clone will be used later when having to pass through to our authentication modules.

  • ➄ The implementation and creating of our LedControls.

Note: creating a fresh Screen and FrameProcessor on each clone is probably not the best idea, since two different parts of the application could write to the screen at the same time and corrupt the display. However, on startup, the authentication is a blocking process that won’t allow any further functions until you log in, so this shouldn’t occur.

Two things of note here: in bullet 5, if there is an error accessing the file descriptor, this will cause a panic. I haven’t wrapped it because, if that happens, the rest of our app can’t do much anyway, since we will never be able to display the authentication or anything else to the screen. In addition, you will notice in bullet 4 that we are implementing Clone as opposed to deriving it. The reason is that Screen does not implement Clone, so we would not be able to use the derive macro. For our code, we just open another connection to the file descriptor.

Next, we are going to add the individual functions that will produce output to the screen. I won’t be adding the code to test those here, as we are going to wait till we integrate it with the rest of the system, but feel free to run through testing each as we go along on your own Pi. We will be able to use the FrameLine instances we created earlier to pass to the screen to write them out. We use the call screen.write_frame passing in the FrameLine to it, which will write those frames to the LED. Let’s take a look at each method we are implementing; you will note they all follow the same general pattern.

Blank Screen

First up in Listing 9-7 is blanking the screen. This will use the off_frame from the FrameProcessor we just created.
    pub fn blank(&mut self) {
        self.screen.write_frame(&self.frame.off_frame);
    }
}
Listing 9-7

Blanks the screen; file is in led/screen.rs

Question Mark

In addition, we can combine multiple write_frame calls with delays in order to change what is displayed. In Listing 9-8, we display a question mark, wait 3 seconds, and then blank the screen, since we no longer want the question mark at that point.
    pub fn question(&mut self) {
        self.screen.write_frame(&self.frame.question_mark);
        thread::sleep(Duration::from_secs(3));
        self.screen.write_frame(&self.frame.off_frame);
    }
}
Listing 9-8

Displays a question mark for 3 seconds; file is in led/screen.rs

Displaying an Image

Here we make use of the Christmas tree and pumpkin images we created earlier, displaying them to the screen for a given time and then blanking them out after. In Listing 9-9, since the images are stored as [u8; 128] arrays, we have to convert them to a FrameLine before processing. We could have put these two displays in our FrameProcessor, but I didn’t want to tie this method to only those two images.
    pub fn display_symbol(&mut self, frame: &[u8; 128], length: u64) {
        let frame_line = FrameLine::from_slice(frame);
        self.screen.write_frame(&frame_line);
        thread::sleep(Duration::from_secs(length));
        self.screen.write_frame(&self.frame.off_frame);
    }
}
Listing 9-9

Displays an array image for a given set of time before blanking out the screen; file is in led/screen.rs

Processing Screen

The processing screen function in Listing 9-10 uses the yellow concentric squares we created earlier, cycling through them a couple of times to make it appear like there is a “busy” state on the board. Ideally, you could make this fancier with interrupts or checks on whether it needs to keep going, but this is something simple one can build on later.
    pub fn processing(&mut self) {
        let sleep_time = 500;
        // Borrow the squares; moving them out of self would not compile.
        let yellow_squares = &self.frame.yellow_squares;
        for _ in 0..2 {
            for ys in yellow_squares {
                self.screen.write_frame(ys);
                thread::sleep(Duration::from_millis(sleep_time));
            }
        }
    }
}
Listing 9-10

Processing display; file is in led/screen.rs

Display Text

Finally, we get to actually displaying text that is passed in. Here we need to pick the color, pick the font, and then convert that text into a FrameLine. In Listing 9-11, we display the text one letter at a time, holding each letter for 800 ms, before blanking the screen.
    pub fn display(&mut self, word: &str) {
        // get the screen text
        // uses a macro to get the font string
        let screen_text = FONT_COLLECTION.sanitize_str(word).unwrap(); ①
        let white_50_pct = PixelColor::WHITE.dim(0.5);
        // Display the items
        for unicode in screen_text.chars() {
            if let Some(symbol) = FONT_COLLECTION.get(unicode) { ②
                let frame = font_to_frame(&symbol.byte_array(), white_50_pct); ③
                self.screen.write_frame(&frame); ④
            }
            thread::sleep(Duration::from_millis(800));
        }
        // now turn the display back off
        self.screen.write_frame(&self.frame.off_frame);
    }
}
Listing 9-11

Display a text string; file is in led/screen.rs

  • ➀ Will take the text passed in and convert it to a vector of FontUnicode.

  • ➁ Looks up the font symbol for each Unicode character in the collection.

  • ➂ Converts that symbol’s byte array to a FrameLine that can be written to the LED.

  • ➃ Writes the frame to the LED matrix.

Scroll Text

The preceding listing displays the text one letter at a time with a delay between each letter; a somewhat fancier method is to scroll the text across the LED display, which the code in Listing 9-12 does.
    pub fn scroll_text(&mut self, word: &str) {
        let sanitized = FONT_COLLECTION.sanitize_str(word).unwrap();
        // Render the `FontString` as a vector of pixel frames, with
        // a stroke color of Blue and a BLACK background.
        let pixel_frames = sanitized.pixel_frames(PixelColor::BLUE, PixelColor::BLACK); ①
        // Create a `Scroll` from the pixel frame vector.
        // this will create some arrows to scroll over
        let scroll = Scroll::new(&pixel_frames); ②
        // Consume the `FrameSequence` returned by the `left_to_right` method.
        scroll.left_to_right().for_each(|frame| { ③
            self.screen.write_frame(&frame.frame_line());
            thread::sleep(::std::time::Duration::from_millis(250));
        });
    }
}
Listing 9-12

Function to scroll text across the screen; file is in led/screen.rs

  • ➀ Create a vector of PixelFrame for each inner font of the text passed in with the colors for the font color and background image (black will be blank).

  • ➁ Create the Scroll structure that will store the pixel frames for display.

  • ➂ Finally display the text, scrolling in from left to right with a 250 ms delay between each.

You can scroll in any direction: left to right, right to left, top to bottom, or bottom to top. Each has an appropriately named method, as sketched below.
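For instance, a vertical scroll only changes which method we call on Scroll; here is a hypothetical companion method (the name scroll_text_down is mine, not the crate’s) that would sit in the same impl block as Listing 9-12:
    pub fn scroll_text_down(&mut self, word: &str) {
        let sanitized = FONT_COLLECTION.sanitize_str(word).unwrap();
        let pixel_frames = sanitized.pixel_frames(PixelColor::BLUE, PixelColor::BLACK);
        let scroll = Scroll::new(&pixel_frames);
        // Identical to scroll_text, except for the direction method.
        scroll.top_to_bottom().for_each(|frame| {
            self.screen.write_frame(&frame.frame_line());
            thread::sleep(::std::time::Duration::from_millis(250));
        });
    }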

All of these functions provide the tools we need to interact with the screen in the rest of the application. I would suggest trying a few just to get the hang of it.

Temperature Display

For now, let’s move on to our sensor inputs. There are actually quite a few sensors on the board, and the crate we are using for this section has access to all the sensors. The gyroscope and accelerometer though didn’t really fit into a use case for the book, but you can investigate this crate later to learn how to gain access and use them. For this book, we are going to focus on getting a temperature reading. There are two temperature sensors on the board: one collocated with the pressure sensor and the other with the humidity sensor. You can use either sensor or you can use both and take the average between the two. We are just going to use the humidity sensor (I actually had issues with one of my SenseHats on the pressure sensor).

Before we actually show the full set of code, let’s build up to getting the temperature. First off, we are going to import our crates, and once again this requires some minor modifications. The branch I created adds more feature flags in order to turn off sensors we aren’t going to use, without the code panicking if it has an issue accessing a sensor. If you recall from earlier, there was an issue accessing the pressure sensor; if it was in a UU state, the code would panic. I decided this isn’t really a good thing if you aren’t going to use that sensor anyway, and feature flags seemed the least intrusive way to fix it. The import of crates in Listing 9-13 will then not try to use or initialize the pressure sensor.
[dependencies]
sensehat = { version = "1.1.1", default-features = false, features = ["humidity"], git = "https://github.com/nusairat/sensehat-rs.git", branch = "chore/fix-retrieve-error"}
Listing 9-13

Adding the sensehat crate to our Pi application, in Cargo.toml

You will notice one of the feature flags is for humidity; I also added one for pressure that is available but not being used. Now, earlier we talked about how we need to talk to the sensors on the i2c bus at specific addresses. This crate uses the i2cdev crate to do all the communication for us, get all the information needed from those sensors, and wrap it in a nice usable struct. If you get curious, there is an hts221.rs module in the crate that has all the code and information on how to communicate through the i2cdev crate and how to read the bytes the device is transmitting. The i2cdev crate is easy to interact with if you know how you are trying to communicate. For example, if you wanted to interact with the humidity chip, we know from before that we are communicating on /dev/i2c-1; since this is attached to the GPIO board, we also know from the documentation that it is at address 0x5f. In order to instantiate the LinuxI2CDevice to communicate specifically with that chip, all you’d have to do is this:

i2cdev::linux::LinuxI2CDevice::new("/dev/i2c-1", 0x5f)
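To go a step further, here is a hedged sketch (mine, not from the book’s repository) of reading a register directly with i2cdev; the WHO_AM_I register (0x0F, which per the HTS221 datasheet should return 0xBC) is a common sanity check on ST parts:
use i2cdev::core::I2CDevice;
use i2cdev::linux::LinuxI2CDevice;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 0x5f is the HTS221 humidity sensor's address from Table 9-1.
    let mut dev = LinuxI2CDevice::new("/dev/i2c-1", 0x5f)?;
    // WHO_AM_I (0x0F) identifies the chip; expect 0xbc.
    let who_am_i = dev.smbus_read_byte_data(0x0F)?;
    println!("WHO_AM_I: {:#x}", who_am_i);
    Ok(())
}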

And if you searched the sensehat-rs crate we are using, you’d find a line similar to the one-liner above initializing access to the board. If you run into a new i2c device that has no existing crate support, one cheat is to find a corresponding Python library. Python has a very rich community creating Pi applications, and you can use such a library as a basis to know what kind of data needs to be read from and written to the i2c bus. But let’s take a look at what we want: access to the temperature reading. This is pretty easy; in Listing 9-14, we initialize the SenseHat and retrieve the temperature reading in Celsius.
let hat = SenseHat::new().unwrap();
let temp = hat.get_temperature_from_humidity().unwrap().as_celsius();
println!("Temp : {:?} C", temp);
Listing 9-14

Determining the temperature from the humidity sensor

This will print out the temperature right now. You can go ahead and try it locally.

If you actually did it, you will notice the temperature seems a bit hotter than you expected. The reason is that those sensors are RIGHT next to the board and other chips that heat the air around the HAT’s sensors. The reading is picking up the heat dissipated by our Pi, giving us a spurious value. If you only care about registering drastic changes in temperature, then this is probably fine, and you can continue. However, if you want a more accurate temperature, there are a few things we can do.

The first thing we can do is get a ribbon cable and move the Sense HAT farther away from the board; this will produce a more accurate reading because it won’t pick up heat from the Pi’s CPU. However, this looks bad, and I don’t particularly like that solution.

The other solution is going to require us to do two things:

  1. Get the temperature of the CPU, which we can use to know how much the sensor’s reading is being offset.

  2. Apply a factor to it to try to determine the calibrated difference.

Essentially, we are going to have this equation to determine the real temperature:

temp_calibrated = temp - ((cpu_temp - temp)/FACTOR)

This takes the difference of the CPU temperature minus the sensor’s temperature and divides it by a factor, giving us the amount to subtract. This factor is the tricky part; it is a calculated value based on the Pi’s reading and an actual thermometer’s reading. If you have the time and a real thermometer in the house, I’d take six or so readings during the day, apply them to that equation, and average them to get the most accurate factor for your area. If instead you want a more general factor, the Weather Underground Pi project has done this already; we can use the number they came up with, 5.466, and generalize it to everyone. It won’t be as accurate, since location and other factors are involved; this factor made the reading more accurate for me but still not 100% accurate. But you at least get the idea of what we are trying to accomplish. If you want, run this and create your own factor to use.2
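If you do want to derive your own factor, rearranging the preceding equation gives factor = (cpu_temp - temp) / (temp - actual_temp). Here is a sketch of averaging a day’s worth of readings; the numbers below are made up, and you would substitute your own:
fn calibration_factor(sensor_c: f64, cpu_c: f64, thermometer_c: f64) -> f64 {
    (cpu_c - sensor_c) / (sensor_c - thermometer_c)
}

fn main() {
    // Hypothetical (sensor, CPU, real thermometer) readings in Celsius.
    let readings = [(28.5, 55.9, 23.2), (29.1, 57.3, 23.9), (27.8, 54.2, 22.6)];
    let avg = readings
        .iter()
        .map(|&(s, c, a)| calibration_factor(s, c, a))
        .sum::<f64>()
        / readings.len() as f64;
    println!("your factor: {:.3}", avg);
}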

The one question you may be asking yourself after reading that is how do I get the CPU temperature? It’s actually easier than you may think; the temperature is written, in thousandths of a degree Celsius, to the file /sys/class/thermal/thermal_zone0/temp. Now we have all the details we need. Let’s begin our code; we are going to do something similar to our LED and create an Atmospheric wrapper struct to get a formatted version of the temperature back. This struct will wrap the connection to the SenseHat. In Listing 9-15, we create our Atmospheric struct with a public method, get_temperature(), that returns the temperature in Fahrenheit, along with a Celsius variant we will need later.
use sensehat::SenseHat;
use std::fs;
use log::{debug, info};
const THERMAL_TEMP: &str = "/sys/class/thermal/thermal_zone0/temp"; ①
const WUNDERGROUND_ADJUSTMENT_FACTOR: f64 = 5.466;
pub struct Atmospheric {
    hat: SenseHat<'static> ②
}
impl Atmospheric {
    pub fn new() -> Atmospheric {
        Atmospheric {
            hat: SenseHat::new().unwrap() ③
        }
    }
    pub fn get_temperature(&mut self) -> String {
        // Get the temperature from the humidity
        // we could also do pressure
        let temp = self.hat.get_temperature_from_humidity().unwrap().as_celsius(); ④
        let thermal_tmp = fs::read_to_string(THERMAL_TEMP.to_string()).unwrap(); ⑤
        let thermal_tmp_str = thermal_tmp.as_str().trim();
        // CPU temp needs to be divided by a 1000 to get the actual Celsius temperature,
        // It supplies it like : 55991
        let cpu_temp: f64 = thermal_tmp_str.parse().unwrap(); ⑥
        let calculated_temp = temp - (((cpu_temp * 0.001) - temp) / WUNDERGROUND_ADJUSTMENT_FACTOR) - 6.0; ⑦
        let calc_temp_f = calculated_temp * 1.8 + 32.0; ⑧
        debug!("Calculated Temp: {:?} C", calculated_temp);
        info!("Calculated Temp: {:?} F", calc_temp_f);
        format!("{:.1} F", calc_temp_f) ⑨
    }
    pub fn get_temperature_in_c(&mut self) -> f32 {
        // Get the temperature from the humidity
        // we could also do pressure
        let temp = self.hat.get_temperature_from_humidity().unwrap().as_celsius();
        let thermal_tmp = fs::read_to_string(THERMAL_TEMP.to_string()).unwrap();
        let thermal_tmp_str = thermal_tmp.as_str().trim();
        // acquire CPU temp
        let cpu_temp: f64 = thermal_tmp_str.parse::<f64>().unwrap() * 0.001;
        let calculated_temp = temp - ((cpu_temp - temp) / WUNDERGROUND_ADJUSTMENT_FACTOR );
        // F32 is the type needed by hap current_temperature
        calculated_temp as f32
    }
}
Listing 9-15

Creating our Atmospheric interactions ; file is in sensors/atmospheric.rs

  • ➀ Set as a constant the location of the file containing the CPU temperature.

  • ➁ Our only property is the SenseHat struct from the crate.

  • ➂ Instantiate the struct; please note, if you’ve enabled a feature for a sensor that is marked as UU in our i2cdetect output, this will fail.

  • ➃ Retrieve the temperature from the humidity sensor in celsius.

  • ➄ Retrieve the thermal temperature from the file.

  • ➅ Convert the string temperature to a float.

  • ➆ Apply our equation we went over earlier using the temperatures we just retrieved.

  • ➇ Convert the temperature to fahrenheit because while I have two science degrees, I live in the United States.

  • ➈ Format the value to one decimal place, since the computed value has many.

If your sensors all work and you want an even more accurate reading, you can take an average of the humidity and pressure sensors’ temperatures as well, as sketched below. One thing to note is that the std::fs::read_to_string read is relatively fast and cheap, especially given the file is only one line, so we don’t need to worry about the cost of repeated reads; we are also only going to pull the temperature sporadically. We will be using this code later as part of our daily schedule and with our joystick interactions.
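A sketch of that averaging, assuming the pressure feature flag is enabled and the LPS25H did not show up as UU in i2cdetect (get_temperature_from_pressure is the crate’s pressure-side counterpart to the humidity call we used above):
use sensehat::SenseHat;

// Hypothetical helper averaging the two on-board temperature sensors.
fn average_temperature(hat: &mut SenseHat) -> f64 {
    let humidity_c = hat.get_temperature_from_humidity().unwrap().as_celsius();
    let pressure_c = hat.get_temperature_from_pressure().unwrap().as_celsius();
    (humidity_c + pressure_c) / 2.0
}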

Joystick Control

You may not have noticed it, but there is a little button on the top of the Sense HAT, closest to the Ethernet port; this is our joystick. The joystick can be moved left, right, up, or down, and there is a center press as well. The sensehat-stick crate (https://github.com/saibatizoku/sensehat-stick-rs) allows easy interaction with the joystick, letting us detect the direction and the action performed. We will only be doing some basic things with this crate in this chapter, but in later chapters, we will expand on this module.

For now, let’s start by adding the crate to our dependencies. In Listing 9-16, we add the stick.
[dependencies]
sensehat-stick = { version = "0.1", default-features = false, features = ["linux-evdev"], git = "https://github.com/nusairat/sensehat-stick-rs.git" }
Listing 9-16

Adding sensehat-stick-rs to our Pi application in our Cargo.toml

Once again, I had to update the crate for this; there was an issue that, while the crate allowed you to see all the actions, it didn’t let you do equality comparisons on them. Interacting with the joystick just requires an infinite loop processing the events as they come in. In Listing 9-17, we have a simple example where we get the events and print out the results.
use sensehat_stick::{JoyStick, JoyStickEvent, Action, Direction};
let stick = JoyStick::open().unwrap();
loop {
    for ev in &stick.events().unwrap() {
        info!("Stick -- {:?}", ev);
    }
}
Listing 9-17

Simple example of interacting with the joystick

For each event, there is an action and a direction. The directions are Enter, Up, Down, Left, and Right; the actions are Release, Press, and Hold. It’s fairly self-explanatory what each of these means. Go ahead and run the code in your application, placing it as the last thing you do in main, and you will see the variety of output it creates as you move the joystick around. We will be creating a joystick module later that will help with our interactions.

Creating Interactions

At this point, you should be able to interact with the various sensors on the Raspberry Pi, and hopefully you’ve run through a couple of quick tests. This was the easy part, because running everything one time as a single call in Rust is straightforward. Now we want to take everything we’ve done and combine it into a few interactions.

As I mentioned when we started this section, we will be using channels via the tokio crate. The channels allow us to create multiple producers and a single consumer. The question, of course, is what we are producing and what we are consuming. For our application, the producers will produce a single command at a time. Commands will be enums that we can easily extend in the future. Right now, our command enums are as follows in Listing 9-18.
#[cfg(feature = "ch09")]
#[derive(Debug)]
pub enum Action {
    ShowTemperature,
    Print(Display)
}
#[derive(Debug)]
pub enum Display {
    Halloween,
    Christmas,
    QuestionMark,
    Text(String)
}
Listing 9-18

The commands we will be using for our first iteration of the pattern; file is in manager.rs

These commands will handle the two main use cases of displaying the temperature and printing out text or images to the screen. In the end, our consumer will receive the commands from the channel and perform actions on the LedControls and Atmospheric structs that we created earlier.

To start off, we are going to create two main modules producing the commands: the daily and joystick modules.
  • Daily – A module that runs at intervals to display the temperature at 8 a.m. and will display either a Christmas tree or a pumpkin at noon if it’s the Christmas or Halloween season.

  • Joystick – A module that will perform actions when we click in different directions. For this chapter, when you depress the center, it will display the temperature.

When all of this is put together, we will have a Pi board that can respond to commands from the user in real time while also performing routine background operations, all against the same modules. This gives us, in essence, multi-threaded, multi-module access to singular components without violating any borrow checking and without constantly creating new connections to the sensors, which could potentially cause deadlocks.

Tokio Async Run

We’ve used tokio in previous chapters mostly with the Rumqtt crate, but let’s dive into a bit more detail. The asynchronous processing has changed quite a bit with Rust in 2019, and tokio packages have been updated accordingly. If you used tokio in version 0.1, you had to do quite a bit of instantiating your runners directly and had to handle your process through promises and futures. Now with the async feature in the 1.39 version of Rust, we will be using polling instead of the promise/future route. Tokio 0.2 takes full use of the new async/await methods.

Async/await allows a developer to declare a function as asynchronous and then await the return of its data and the completion of its processing. We will be using this to run our scheduler.
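As a minimal, self-contained illustration of the shape of this (tokio 0.2 API; the function here is a stand-in I made up for real work like a network call):
use std::time::Duration;
use tokio::time::delay_for;

#[tokio::main]
async fn main() {
    let greeting = fetch_greeting().await; // suspends until the future completes
    println!("{}", greeting);
}

// Marked async, so calling it returns a future that must be awaited.
async fn fetch_greeting() -> String {
    delay_for(Duration::from_millis(100)).await; // stand-in for I/O
    "hello from an async fn".to_string()
}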

First up, let’s set up our dependencies in Listing 9-19 to include the latest tokio and futures crate. This crate is heavily contributed and added to so don’t let the 0.3 version scare you, they have a solid road map to 1.0 and heavily respond to questions on Discord.
[dependencies]
tokio = { version = "0.2.4", features =["full"] }
tokio-timer = "0.2.12"
futures = "0.3"
Listing 9-19

The tokio and futures crate dependencies, code in file Cargo.toml

Now, at the end of our main method, we are going to call an asynchronous method that will launch all of our background threads. In here, we create a channel that allows us to send values between tasks. This channel is similar in operation to channels you may have used in languages like Go. The channel is initialized with a buffer size, which we keep low since most of the communication is driven by someone interacting with the Raspberry Pi and thus should never get too high. The channel returns two objects: a transmitter and a receiver; the transmitters send data into the channel, and the receiver receives it. In Listing 9-20, we create the function that will be run in our main; afterward, we will implement each of the functions it calls.
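Before we look at that function, here is a minimal, self-contained sketch of the channel mechanics on their own (tokio 0.2 API; the String payload and the task bodies are illustrative only, not part of our application):
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Bounded channel: sends wait (asynchronously) once 100 messages queue up.
    let (mut tx, mut rx) = mpsc::channel::<String>(100);
    let mut tx2 = tx.clone(); // one clone per producer

    tokio::spawn(async move {
        tx.send("from producer one".to_string()).await.unwrap();
    });
    tokio::spawn(async move {
        tx2.send("from producer two".to_string()).await.unwrap();
    });

    // recv() yields None once every sender has been dropped.
    while let Some(msg) = rx.recv().await {
        println!("received: {}", msg);
    }
}
With that shape in mind, Listing 9-20 wires the real modules together.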
#[tokio::main] ①
async fn run(matches: &ArgMatches, uuid: &Uuid) -> Result<(), Box<dyn std::error::Error>> {
    use tokio::sync::mpsc;
    info!("Setup and start our channel runners ...");
    // defines the buffer to send in
    let (tx, rx) = mpsc::channel(100); ②
    let joy_tx: Tx = tx.clone(); ③
    let daily_tx: Tx = tx.clone();
    // Start our timer matcher
    // we want to do this after the authentication so we don't have any interruption from the
    // login; this will also run Asynchronously
    daily::run(daily_tx); ④
    // Setup and run the Joystick now; ⑤
    joystick::run(joy_tx);
    // Ready our receivers
    let led_controls = Arc::new(Mutex::new(LedControls::new())); ⑥
    let atmospheric = Arc::new(Mutex::new(Atmospheric::new()));
    manager::run(rx, &led_controls, &atmospheric).await; ⑦
    debug!("Complete");
    Ok(())
}
Listing 9-20

Implementation of the tokio async code, code in file main.rs

  • ➀ Uses the macro shortcut, which creates a Builder::new with the default threaded_scheduler to start up our async processing.

  • ➁ Creates the channel with a buffer size of 100.

  • ➂ Since we cannot pass the transmitter to multiple modules, we need to clone it for each module we want to pass it to.

  • ➃ Runs our daily background scripts transmitting commands when it hits a daily input.

  • ➄ Awaits our joystick input, sending commands back based on the input.

  • ➅ We wrap the LedControls and Atmospheric since they will be run asynchronously in the manager.

  • ➆ Calls our manager that will await forever waiting for transmissions.

In that function, there are three calls we have not yet defined; let’s go ahead and define them.

Daily Runs

Our first stop is to set up the daily module, which will run every hour on the hour, firing off an event checker, as well as firing when we first boot the application. In this module, we spawn a thread that loops infinitely and fires off our events inside the loop. Now, we wouldn’t want an infinite loop that constantly checks, since that would waste resources. Instead, we make use of the tokio::time module to control an interval and duration, which allows us to only fire the event checker on the hour. We first create the interval, giving it the time we want it to start at and the duration; from there, we can let the interval tick, pausing until the time has passed. This gives us, in Listing 9-21, the ability to run code once an hour.
const INTERVAL_IN_SECONDS: u64 = 60 * 60;
 pub fn run(mut tx: Tx) {
    use std::ops::Add;
    let local: DateTime<Local> = Local::now(); ①
    let min = local.minute();
    // Determine the time till the top of the hour
    let time_from_hour = 60 - min; ②
    debug!("Min from hour : {:?}", time_from_hour);
    // Add the difference so time_at_hour is the time of the next hour.
    let time_at_hour = Instant::now()
        .add(Duration::from_secs((60 * time_from_hour).into())); ③
    // Compute the interval
    let mut interval = interval_at(time_at_hour, Duration::from_secs(INTERVAL_IN_SECONDS)); ④
    tokio::spawn(async move { ⑤
        // run on initial start-up then timers after
        run_initial(&mut tx).await; ⑥
        loop {
            interval.tick().await; ⑦
            info!("Fire the Timer Checker; ");
            display_special(&mut tx); ⑧
        }
    });
 }
async fn send(tx: &mut Tx, action: Action) { ⑨
    if let Err(_) = tx.send(action).await {
        info!("receiver dropped");
        return;
    }
}
Listing 9-21

The main daily runner that will loop and run our special printouts, code in file daily.rs

  • ➀ Get the local time; this is going to be used as a basis to know how long we are from the hour.

  • ➁ Determine the amount of minutes till the top of the hour since we want to fire this at the top of the hour.

  • ➂ Add that difference so now that time_at_hour will be the time of the next hour (i.e., if it’s 3:37, now this variable will be 4:00).

  • ➃ Create our interval; the first parameter is the start, and the second is the interval; in this case, we check it every 60 minutes.

  • ➄ Spawn a new thread for our asynchronous call.

  • ➅ Run our initial call to print either a Christmas tree or pumpkin.

  • ➆ This is the start of the infinite loop; this will await the first tick which occurs at the top of the hour.

  • ➇ On the hour, it now runs the display.

  • ➈ The send method used by other calling functions to send our Action to the receiver.

A few things to note here: tx.send can only be called in an async function, which also means its parent has to be an async function, and so forth. This is why you see layer upon layer of async until you get to the tokio::spawn; from that point up, the async addition to the function is no longer necessary. This send method will also exist in the other modules, but we aren’t going to print it each time in the book.

We should also handle errors from the receiver better, but this code is already complicated enough as it is; it is something for the reader to think about when using this in a real-world application.

Next let’s look at that run_inital function that gets ran when the Pi app first starts up; in Listing 9-22, we have that function which will check if it’s Christmas or Halloween.
async fn run_initial(tx: &mut Tx) {
    let local: DateTime<Local> = Local::now();
    if is_christmas(&local) {
        send(tx, Action::Print(Display::Christmas)).await;
    }
    else if is_halloween(&local) {
        send(tx, Action::Print(Display::Halloween)).await;
    }
}
Listing 9-22

Checks if it’s Christmas or Halloween, code in file daily.rs

And then in Listing 9-23, we have display_special, which runs hourly; it will send an action to show the temperature at 8 a.m., or, at noon during the Christmas or Halloween season, an action to display the tree or pumpkin, respectively.
 async fn display_special(tx: &mut Tx) {
    let local: DateTime<Local> = Local::now();
    // now switch based on the variable to display
    // we will only call this on the hour so we don't need to check the minute
    // also could be a delay so better to not be that precise
    if local.hour() == 8 {
        //display_weather(tx);
        send(tx, Action::ShowTemperature).await;
    }
    else if local.hour() == 12 {
        if is_christmas(&local) {
            send(tx, Action::Print(Display::Christmas)).await;
        }
        else if is_halloween(&local) {
            send(tx, Action::Print(Display::Halloween)).await;
        }
    }
 }
Listing 9-23

Our daily checker that gets ran hourly, code in file daily.rs

Finally, we should take a look at the functions that check whether it’s Halloween (October 31) or the Christmas season (any day in December through the 25th) in Listing 9-24.
 fn is_halloween(local: &DateTime<Local>) -> bool {
     local.month() == 10 && local.day() == 31
 }
 fn is_christmas(local: &DateTime<Local>) -> bool {
     // Any day in December up to and including the 25th
    local.month() == 12 && local.day() <= 25
}
Listing 9-24

Checks if it’s October or if it’s December before the 26th, code in file daily.rs

This section showed us how to loop inside a spawned task as well as how to create an interval and duration.
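Stripped of the application logic, the scheduling pattern looks like this (tokio 0.2 API; the one-minute start offset here is arbitrary):
use std::time::Duration;
use tokio::time::{interval_at, Instant};

#[tokio::main]
async fn main() {
    // Start one minute from now, then tick once per hour, just as
    // daily::run does with its top-of-the-hour start time.
    let start = Instant::now() + Duration::from_secs(60);
    let mut interval = interval_at(start, Duration::from_secs(60 * 60));
    loop {
        interval.tick().await; // suspends until the next tick
        println!("tick");
    }
}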

Joystick

We discussed the joystick code before but didn’t write any code specifically for the application; let’s loop back and write some. This code is a combination of the tokio async processing you saw earlier and the joystick code we did before. We will create a new module, joystick, with run(__) as its entry point. Once again, we need to create a spawned thread and loop through it, except this time, in Listing 9-25, we are checking for event input and responding to each event accordingly.
pub fn run(tx: Tx) {
    let stick = JoyStick::open().unwrap();
    run_on_loop(stick, tx);
}
fn run_on_loop( mut stick: JoyStick,
                mut tx: Tx) {
    use tokio::task;
    info!("Run Async Calls on the joystick");
    // Use Spawn Blocking since Stick Events is a blocking call, otherwise we risk blocking
    // the current thread
    task::spawn_blocking(move || { ①
        loop {
            // TODO: Add some logic to debounce the hold; otherwise, holding
            // the button down may trigger the display several times
            for ev in &stick.events().unwrap() {
                info!("Stick -- {:?}", ev);
                // Create a response based on events
                // can be blank since the processing is inside
                if check_temp_event(&ev) { ②
                    info!("Check Temperature Event");
                    send(&mut tx, DisplayAction::ShowTemperature)
                }
                // TODO we will add more complexity later to this
                else {
                    // let's just display a question mark
                    warn!("Not Supported Event");
                }
            }
        }
    });
}
Listing 9-25

Joystick responses, code in file joystick.rs

  • ➀ Iterates through any events received.

  • ➁ Checks for a temperature event; we will build this out more in future chapters.

Lastly, in Listing 9-26, we implement check_temp_event, which checks whether the user pressed and held the center button, the trigger for displaying the temperature to the screen. (The DisplayAction in Listing 9-25 is our manager’s Action enum imported under an alias, so it doesn’t clash with the joystick crate’s own Action type.)
fn check_temp_event(ev: &JoyStickEvent) -> bool {
    // True when the center button is held down.
    ev.action == Action::Hold && ev.direction == Direction::Enter
}
Listing 9-26

Checks if the user wanted the temperature, code in file joystick.rs

We now have all the producers we are creating for this chapter; potentially three sets of producers are sending data for the receiver to use.

Receiver

As you can see in all of this code, each time we are sending the enums we defined earlier. In some cases, we pass in dynamic values like the text; in others, they are singular commands, like ShowTemperature, but each one goes through the transmitter. Now we will create the receiver for those commands. In Listing 9-27, we have our receiver that awaits an event via rx.recv().await; this runs continuously, waiting for the next event. Incidentally, this forever await is also why we don’t have to add a loop {} in main.rs: the await waits indefinitely for the next command to appear.
pub type Tx = mpsc::Sender<Action>; ①
pub type Rx = mpsc::Receiver<Action>;
#[cfg(feature = "ch09")]
pub async fn run(   mut rx: Rx,
                    led_controls: &Arc<Mutex<LedControls>>,
                    atmospheric: &Arc<Mutex<Atmospheric>>) {
    // Receives the information
    while let Some(action) = rx.recv().await { ②
        info!("Received :: {:?}", action);
        // now let's parse out what should happen
        match action { ③
            Action::ShowTemperature => {
                display_weather(&atmospheric, &led_controls);
            },
            Action::Print(display) => { ④
                match display {
                    Display::Halloween => {
                        display_halloween(&led_controls);
                    },
                    Display::Christmas => {
                        display_christmas(&led_controls);
                    },
                    Display::Text(text) => {
                        display_text(text, &led_controls);
                    },
                    Display::QuestionMark => {
                        question_mark(&led_controls);
                    }
                }
            },
        }
    }
}
Listing 9-27

Our receiver await processing, code in file manager.rs

  • ➀ Defines the transmitter and receiver types over Action; these are used as type shortcuts in the other modules to know the type being sent. The payload could be any struct or enum, but it must be the same type on both ends of a given channel.

  • ➁ Awaits the receiver for transmitted data.

  • ➂ Matches our Actions.

  • ➃ Matches our Displays.

Finally, in Listing 9-28, everything we have been working on in this section comes together. The sensor structs you created are now called by the various commands we passed through. For each command we add, we will have to create a corresponding function that processes and handles it. Your logic should mostly still occur in the producers; these functions merely handle the interactions the board provides.
fn question_mark(led_controls: &Arc<Mutex<LedControls>>) {
    let mut led = led_controls.lock().unwrap();
    led.question();
}
// Display Christmas tree for 30 seconds
fn display_christmas(led_controls: &Arc<Mutex<LedControls>>) {
    let mut led = led_controls.lock().unwrap();
    led.display_symbol(&CHRISTMAS_TREE, 30);
}
// Display pumpkin for 30 seconds
fn display_halloween(led_controls: &Arc<Mutex<LedControls>>) {
    let mut led = led_controls.lock().unwrap();
    led.display_symbol(&HALLOWEEN, 30);
}
fn display_weather(atmospheric: &Arc<Mutex<Atmospheric>>, led_controls: &Arc<Mutex<LedControls>>) {
    let mut atmo = atmospheric.lock().unwrap();
    let temp: String = atmo.get_temperature();
    let mut led = led_controls.lock().unwrap();
    led.display(&temp);
}
// Display any text
fn display_text(text: String, led_controls: &Arc<Mutex<LedControls>>) {
    let mut led = led_controls.lock().unwrap();
    led.scroll_text(&text);
}
Listing 9-28

Processes each of the messages, code in file manager.rs

This section gives us a starting point for expanding the board’s functionality as well as the background processing that will be necessary when we add other modules to the Pi. Pis are powerful computers, so don’t be afraid to create a multi-threaded application so long as you keep all the memory and borrowing safeguards in place.

Logging In

We can now interact with the board’s devices, but we also need to interact with all those endpoints we created in the first half that require a user. In order to interact with those services, we are going to need an authenticated user. The authenticated user will allow us to send a request token to the server to verify who the user is and that they have access to the box.

In Chapter 6, we went over the device authentication flow. In that chapter, we showed, via curl commands and web UI interactions, how to perform a device flow authentication with Auth0. In this chapter, we are going to translate those interactions into code and integrate them into our Raspberry Pi application.

Yup OAuth 2

There are quite a few different authentication crates for Rust out there, and I looked through several of them. The yup-oauth2 crate is one of the more popular and has a solid, diverse set of functionality. My main reason for picking it, however, is that none of the other crates supported device flows at the level that yup-oauth2 does. Remember, a device authentication flow calls the server to get a code, returns that to the user, and then polls the server until the user has been authenticated. This was not code I wanted to write myself.

The only downside to yup-oauth2 is that it is very much geared to the Google OAuth 2 device flow, which probably makes sense in terms of percentage of usage; however, that meant it did not work for our needs out of the box. There was one critical thing that needed to be customized: the name of the device code field that is sent. This field is not configurable, and it needs to be, because Auth0 expects it to be device_code while Google expects it to be code. The field name is hard-coded throughout yup-oauth2, so the only way to make this work was to branch the crate. Once again, we will have to use a modified version; you can find it at https://github.com/nusairat/yup-oauth2/. The maintainers are making quite a few changes; in fact, from the time I started writing this chapter to finishing the book, the code changed drastically, and they are constantly improving the crate.

Let’s talk about the flow we are going to code to our device:
  • On startup of our application, the application will check for a valid JSON Web Token stored in the filesystem; if it exists and is valid, you will just proceed as normal and be considered logged in (see the sketch after this list).

  • If the token does not exist, then call out to Auth0 to get a device code to use for authentication.

  • Display the device code on the LED matrix so that the end user knows which device code they need to use.

  • In the background, periodically check to see if the user has been authenticated.

  • Once authenticated, proceed to the next steps.

Luckily, most of this comes out of the box and will only require us to configure the application. We will put the authentication functionality in its own library, which allows greater reuse when writing other apps that need authentication as well.

Since the authentication has to communicate directly with the LedControls and is now in its own library, this presents another problem, but also a new learning experience: how do we have the LedControls we created in the previous section interact with the library? The solution is to use a trait that defines how to display the output, which is really a better design anyway, since different apps may want to render the output differently.

Let’s get this started. Since this code will not be part of the main Pi application, but instead will be a library referenced by it, we begin by creating a library. We still use cargo new like we normally do, but now we pass the --lib flag, which generates a lib.rs instead of a main.rs. In Listing 9-29, we generate this library.

I’d put this at the same level as your other applications; that will be useful later when we have to reference it.
➜ cargo new rasp-auth --lib
     Created library `rasp-auth` package
➜ ls -al rasp-auth/src
total 8
drwxr-xr-x  3 joseph  staff   96 Dec 15 16:28 .
drwxr-xr-x  6 joseph  staff  192 Dec 15 16:28 ..
-rw-r--r--  1 joseph  staff   95 Dec 15 16:28 lib.rs
Listing 9-29

Creating a rasp-auth library package

We have created the library; now let’s update the Cargo.toml file to the contents in Listing 9-30.
[package]
name = "authentication"
version = "0.1.0"
authors = ["Joseph Nusairat <[email protected]>"]
edition = "2018"
[dependencies]
yup-oauth2 = {git = "https://github.com/nusairat/yup-oauth2.git", branch = "chore/fix-merge-errors"}
tokio = { version = "0.2", features = ["fs", "macros", "io-std", "time"] }
chrono = "0.4"
log = "0.4"
Listing 9-30

The Cargo.toml contents for our authentication library

This will include all the dependencies we need for this section; you will notice that for yup-oauth2 I am referencing the git repository of my fork, which has the necessary changes to support Auth0.

For our library, all the code we write next will be contained in the lib.rs file. The code is not that long, and it all performs a singular operation, so it did not make sense to add multiple modules.

Authentication Library

This library is going to have to do essentially three things:
  1. Be a wrapper to run the authentication system so that we can minimize our code in the Pi app.

  2. Create a flow specifically to be used for Auth0 and a device flow procedure.

  3. Implement the use of a generic wrapper for display.
Before we start coding, let’s discuss a bit about how yup-oauth2 works. The crate requires you to populate a few structs before you run the authorization:
  • ApplicationSecret – This contains the token and authorization URIs as well as any SSL certs, client IDs, and secrets.

  • FlowDelegate – One of the traits we will be customizing. There is a more specific trait for our use, DeviceFlowDelegate. The delegate is used to help the OAuth system to determine what to do at each phase of the authentication flow. It controls the flow for the following events:

  • When user code is presented

  • Request code is expired

  • Pending

  • Denied

  • DeviceFlowAuthenticator – Builds our authenticator; it takes in the DeviceFlowDelegate and also sets where we plan to persist the token.

Each of these builds on the previous one until we have our authorization ready to be executed. Persisting the token as JSON gives us an easy way to access it across restarts and software failures without forcing the user to constantly re-authorize. As a bonus, the crate checks for an existing token, and if it’s valid, it will not ask the user to go through the authorization flow again. Once set up, we execute the future in tokio, awaiting the final token. In Listing 9-31, we have the authenticate method that implements this.
    pub async fn authenticate(&self) -> bool
        where VD: VisualDisplay + Send + Sync { ①
        // Trait needed for the futures use
        info!("Authenticate");
        // Create our application secret
        let application_secret = ApplicationSecret { ②
            client_id: self.client_id.clone(),
            client_secret: self.client_secret.clone(),
            token_uri: format!("https://{}/oauth/token", self.url),
            auth_uri: format!("https://{}/authorize", self.url),
            redirect_uris: vec![],
            project_id: Some(PROJECT_ID.to_string()),   // projectId
            client_email: None,   // clientEmail
            auth_provider_x509_cert_url: None,   // X509 cert auth provider
            client_x509_cert_url: None,   // X509 cert provider
        };
        // Create the flow delegate
        let flow_delegate = Auth0FlowDelegate { ③
            output: self.output.clone()
        };
        let auth = DeviceFlowAuthenticator::builder(application_secret) ④
            .flow_delegate(Box::new(flow_delegate))
            .device_code_url(format!("https://{}/oauth/device/code", self.url))
            .persist_tokens_to_disk(JSON_SECRET_LOCATION)
            .grant_type("urn:ietf:params:oauth:grant-type:device_code")
            .build()
            .await
            .expect("authenticator");
        // Set our scopes of data we want to obtain
        let scopes = &["offline_access", "openid", "profile", "email"]; ⑤
        match auth.token(scopes).await { ⑥
            Err(e) => warn!("error: {:?}", e),
            Ok(t) => info!("token: {:?}", t),
        }
        // Unblocked now, let's blank out before we return
        let mut output_ctrls = self.output.lock().unwrap(); ⑦
        output_ctrls.clear();
        true
    }
Listing 9-31

The authentication method

  • ➀ Used to define the VisualDisplay we are passing through (we will get to this more in a bit).

  • ➁ Defines the ApplicationSecret as well as setting our URIs that are Auth0 specific.

  • ➂ The FlowDelegate; here we are using a custom struct so that we can display the pin to the Sense HAT.

  • ➃ Creates the authenticator which takes the flow delegate as well as the JSON location to persist to disk on a successful authentication and to be reused on refresh. This also takes in our grant_type that is specific to Auth0.

  • ➄ The scopes we need to add to get all the tokens we need in the response; you may recall these are the same scopes we used in Chapter 6.

  • ➅ Runs authentication of a user for the given scopes and with the configurations we created in step 4.

  • ➆ Clears out our LED display, which may still have had the device code displayed.

This method will run the basic authentication; as you can see, it has quite a few moving parts that we need to go over:
  • VisualDisplay – The trait that defines how we display output to the user.

  • Access – The struct on which this function is implemented.

VisualDisplay

As stated, we need to make the display generic, specifically because our application needs to display to an LED output; this also makes the authentication library more reusable for different front-end devices that want to display the user data. In the future, you could reuse this module and change from our LED display to an LCD display or an Android display, without having to change the authentication module. In Listing 9-32, we define the three functions for the trait needed to display.
use yup_oauth2::{self, ApplicationSecret, DeviceFlowAuthenticator}; ①
use yup_oauth2::authenticator_delegate::{DeviceAuthResponse, DeviceFlowDelegate};
use log::{info, warn};
// Used to pin that data to a point in memory, makes sure it's a stable memory location
use std::pin::Pin;
use std::future::Future;
// Always store to this location
const JSON_SECRET_LOCATION: &str = "tokenstorage.json"; ②
// Probably should be a command line parameter
const PROJECT_ID: &str = "rustfortheiot"; ③
pub trait VisualDisplay { ④
    fn clear(&mut self);
    fn display_text(&mut self, text: &str);
    fn display_processing(&mut self);
}
Listing 9-32

The VisualDisplay trait

  • ➀ Imports needed for this crate.

  • ➁ JSON secret storage location; you can share this location between the apps running to be able to send authentication requests.

  • ➂ The name of our project id that we defined in Auth0.

  • ➃ Visual Display trait.

We define three methods on that trait:
  • clear – To clear out the display, blanking the screen so that whatever output we have doesn’t stick around.

  • display_text – Displays any output text that we want to display. This can be the user code or an error response.

  • display_processing – Used to display an output saying the input is processing right now.

These cover all of our use cases for displaying status to the user via the LED display; using a trait also allows us to add to it as needed.

Entry Point Struct

Next, we are going to go over the struct we create to instantiate our access to this application. This will be the Access struct; it will contain information specific to your individual authentication application. In Listing 9-33, we have that initial structure.
pub struct Access<VD> {
    client_id: String,
    client_secret: String,
    url: String,
    output: Arc<Mutex<VD>>
}
Listing 9-33

The Access struct that is our public input into the application

The first three fields are directly related to your individual Auth0 account, and you will need to plug in your own values when we instantiate the Access. The fourth, the output, is what drives our output to the display device. Since we are performing multi-threaded access, we wrap it in an Arc<Mutex<T>>.

Now, in Listing 9-34, we are going to define the implementation for this struct. You’ve actually already seen one of the methods we will put into the impl: the authenticate function we defined earlier. But now, let’s define the where clause for the impl as well as a new method to instantiate it (I generally prefer providing a new function whenever a struct is created outside of its immediate module). The where clause is necessary so that the compiler knows the VisualDisplay implements certain traits that are needed by subsequent calls in these functions.
impl<VD> Access<VD>
    where
        VD: VisualDisplay + Send + Clone + 'static ①
{
    pub fn new(client_id: String, client_secret: String, url: String, output: Arc<Mutex<VD>>) -> Access<VD> { ②
        Access { client_id, client_secret, url, output }
    }
Listing 9-34

Defining the impl for the Access struct

  • ➀ Defining the conditions the VisualDisplay will have to implement to be used in our application.

  • ➁ The new function to create an Access struct.

The Send, Clone, and 'static bounds are needed because the VisualDisplay is going to be used in various phases of the auth life cycle: the future requires the Send trait, and the flow delegate requires the Clone trait. This is a pretty standard procedure in Rust and allows us to enforce the traits that need to be on the value being passed. In addition, the authenticate method we defined in Listing 9-31 will be part of this implementation.

At this point, you have almost everything needed for authentication; one final piece of code remains, and that is to create our own flow delegate.

Auth0 FlowDelegate

The final set of code is our implementation of the FlowDelegate; there are really two reasons we need to create our own flow delegate:
  1. We need to overwrite the default settings to use Auth0-specific properties instead of Google-specific properties on the JSON authorization request.

  2. We want to customize the output of the device code to the LED display instead of just the console log, which is the default.
In order to output to the LED display, we are going to have to pass in our VisualDisplay in order for the code to have access. In Listing 9-35, we set up the struct that we instantiated in the authentication method.
use std::sync::{Arc, Mutex};
// Flow Delegate requires a Clone
#[derive(Clone)]
pub struct Auth0FlowDelegate<VD> {
    output: Arc<Mutex<VD>>
}
Listing 9-35

Defining the Auth0FlowDelegate

The struct is now set up; we now need to implement the FlowDelegate trait which will make this application work. In Listing 9-36, we start the implementation of the FlowDelegate.
impl<VD> DeviceFlowDelegate for Auth0FlowDelegate<VD> ①
    where
        VD: VisualDisplay + Send + Sync ②
{
    /// Display to the user the instructions to use the user code
    fn present_user_code<'a>( ③
        &'a self,
        resp: &'a DeviceAuthResponse,
    ) -> Pin<Box<dyn Future<Output = ()> + Send + 'a>> {
        Box::pin(present_user_code(&self.output, resp))
    }
}
Listing 9-36

The implementation for Auth0FlowDelegate

  • ➀ Defines that we are implementing the DeviceFlowDelegate trait for Auth0FlowDelegate.

  • ➁ The Send and Sync bounds are needed because the returned future is passed between threads; that is why we require them on our VisualDisplay.

  • ➂ Sends a call to display our user code to the LED display.

The last thing we need to do is output the device code. Since our Pi only has an LED display, we will output the device code to the LED matrix so that the end user knows what to do. We will output it in the format “> [DEVICE CODE]”; ideally, you would tell the user in an instruction manual that they have to authenticate when they see the code scrolling across the screen. In Listing 9-37, we implement present_user_code to perform that functionality.
async fn present_user_code<VD>(output: &Arc<Mutex<VD>>, resp: &DeviceAuthResponse)
    where
        VD: VisualDisplay {
    use chrono::Local;
    info!("Please enter {} at {} and grant access to this application", resp.user_code, resp.verification_uri); ①
    info!("You have time until {}.", resp.expires_at.with_timezone(&Local));
    // Push to the LED display
    let mut output_unwrap = output.lock().unwrap(); ②
    let text = format!("> {}  ", resp.user_code);
    // Bit of a fake since it will stop processing after this function
    output_unwrap.display_text(text.as_str()); ③
    output_unwrap.display_processing(); ④
}
Listing 9-37

The present_user_code function on the Auth0FlowDelegate implementation

  • ➀ Print the user code and verification URL to the logger; this makes debugging easier.

  • ➁ Get the output VisualDisplay that we passed in to the authorization. Here we lock and unwrap, which is needed to get at the object inside the Arc<Mutex<T>>.

  • ➂ Output the device code to our LED display. This will give the user a visual representation of the device code they need to log in.

  • ➃ Change the UI to display a repeating “processing” animation.

In a perfect world, you’d probably repeat the code or add other visual cues, or a way to repeat it if necessary; I didn’t want to overly complicate the code, so I will leave that part to the reader. Our library is now complete; we can switch back to integrating it into the Raspberry Pi application.

Raspberry Pi App Integration

The final step is to integrate with the Raspberry Pi application itself. Luckily, with the way we designed our library file, this is a pretty easy task. The first thing you need to do is set up a few more argument matchers to store our client id, secret, and auth URI. I am not going to implement them here (we’ve done it plenty of times), but the names for what we are creating are in Table 9-3.
Table 9-3

Arguments used for the Authentication

Name               | Short | Description
auth_client_id     | -i    | Stores the client id for the Auth0 device flow we set up earlier.
auth_client_secret | -t    | Stores the client secret for Auth0.
auth               | -a    | The URL for our Auth0 account; for my application, I set the default to rustfortheiot.auth0.com.

Ideally, you can check in all the property values except the client_secret, which you should set dynamically on start; better yet, you’d probably want to keep it in something like a vault repository. A sketch of one such fallback follows.
impl authentication::VisualDisplay for LedControls {
    fn clear(&mut self) {
        self.blank();
    }
    fn display_text(&mut self, text: &str) {
        self.scroll_text(text);
    }
    fn display_processing(&mut self) {
        self.processing();
    }
}
Listing 9-38

Implement the VisualDisplay trait on LedControls for our authentication; code is in main.rs

You will notice we are able to simply apply the VisualDisplay trait to LedControls; this adds the extra functionality and allows us to pass the LedControls straight into the authentication module. In Listing 9-39, we have the final set of code that calls the authentication library with those parameters, passing in the LedControls.
#[tokio::main]
async fn run_authentication(matches: &ArgMatches) {
    use authentication::Access;
    info!("Run Authentication ...");
    // Initialize the LED controls
    let led_controls = Arc::new(Mutex::new(LedControls::new()));
    let client_id = matches.value_of(args::auth_client_id::NAME).unwrap().to_string();
    let client_secret = matches.value_of(args::auth_client_secret::NAME).unwrap().to_string();
    let url = matches.value_of(args::auth0::NAME).unwrap().to_string();
    let access = Access::new(client_id, client_secret, url, led_controls);
    // Authenticate
    if !access.authenticate().await {
        error!("Not Logged In:");
    }
}
Listing 9-39

The run_authentication function that is in our main.rs of our Pi application

Summary

In this chapter, we greatly expanded the ability of our Pi device with the added Sense HAT hardware. We now have a device that can communicate securely with the backend as well as perform various local interactions. As we continue, we will make use of the authentication module to transmit and receive data.
