© Stewart Watkiss 2020
S. WatkissLearn Electronics with Raspberry Pihttps://doi.org/10.1007/978-1-4842-6348-8_8

8. Using the Raspberry Pi Camera

Stewart Watkiss1 
(1)
Redditch, UK
 

In this chapter, we will look at the Raspberry Pi camera and how it can be controlled using electronic sensors. We will use the camera to take photographs automatically based on a PIR sensor, and then use it to create stop frame animation.

Cameras Available for the Raspberry Pi

We will start by setting up the Raspberry Pi camera, connecting it to the camera connector directly on the Raspberry Pi. This camera connector has been included on all versions of the Raspberry Pi, except for the early versions of the Pi Zero.

At the time of writing, there are three different official cameras: the Camera Module V2, the Pi NoIR Module V2, and the Raspberry Pi High Quality Camera. The first two modules have an 8-megapixel sensor with a fixed-focus lens. The standard module is useful for most general use; the Pi NoIR camera is the same but without the infrared filter, making it suitable for night photography using infrared lighting. The High Quality Camera has a 12.3-megapixel sensor and needs a separate C- or CS-mount lens to be attached; it provides better-quality photos but needs the user to manually adjust the aperture and focus.

All the official cameras use the camera connector, which provides direct access between the camera and the processor; this is more efficient than a webcam, which connects using the USB protocol.

To connect the camera, lift the tab on the connector next to the HDMI connector (HDMI 1 on the Raspberry Pi 4). Insert the ribbon cable into the connector with the blue side facing away from the HDMI connector. Push the tab back down which should hold the camera cable firmly into place. Figure 8-1 shows a Raspberry Pi with the camera module connected.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig1_HTML.jpg
Figure 8-1

Raspberry Pi with the camera module connected

The camera module needs to be enabled through the Raspberry Pi configuration tool. From the menu, choose Preferences and then Raspberry Pi Configuration. The camera is enabled through the Interfaces tab. This is shown in Figure 8-2. After enabling the camera, you will need to reboot before you can use it.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig2_HTML.jpg
Figure 8-2

Configuration screen to enable the Raspberry Pi camera

Once connected and enabled, you can test if the camera works using the raspistill command:
raspistill -o photo1.jpg

This will take a photograph and store it as photo1.jpg. If you have a screen directly connected to the HDMI port, then you will see a preview prior to the photograph being taken; otherwise, you will just notice a delay before the photo is taken.

Assuming this command works, you can now proceed to controlling the camera through Python. First is a simple program that takes a photograph and saves it to the local computer. This is shown in Listing 8-1 and is called picamera-test.py in the source code.
import picamera
camera = picamera.PiCamera()
camera.capture('/home/pi/photo1.jpg')
camera.close()
Listing 8-1

Simple program to test the Raspberry Pi camera

This is a simple program which should be easy enough to follow. The first line imports the picamera module. In earlier examples, the from keyword was used, which meant that the imported items could be used without referring to the module name. In this case, the standard import keyword has been used. This means that when you create your object from the picamera module, you need to prefix the instruction with the module name, which is picamera.

The code then creates an instance of the picamera object called camera. You can then call the camera methods using this object. The first is called capture. As you can probably guess, this captures a photograph and saves it in a file called photo1.jpg. Finally, the close method is called to clean up the resources that were allocated.

This code has used the same filename as the previous command, so it will overwrite the file created previously. If taking multiple photos, then you will need a way of giving each file a new name. There are two simple ways we could do this. The first is to add a unique number which is incremented for each new file; this involves keeping track of the number. The alternative is to add the date and time to each file, which makes it easy to create the filename. Both options have their pros and cons; I think that using the date is a good way of doing this. This will use the time module and the strftime method, which formats the time into a readable date format. The date format used is the ISO 8601 date format, which puts the most significant part of the date first, giving the date in the form year-month-dayThour:minutes:seconds. The advantages of this date format are that ordering the files by filename will put them into chronological order and that it is a date format recognized anywhere around the world. The method time.gmtime() returns the current time (in UTC) as a time structure, which strftime then formats into a string stored in timestring. This is then included in the filename.
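To see the sortability claim in action, the short snippet below (illustrative only, using a few fixed times one day apart rather than the current time) builds filenames in the same ISO 8601 style and confirms that alphabetical order matches chronological order:

```python
import time

# Three times one day apart, in seconds since the UNIX epoch
times = [0, 86400, 2 * 86400]
names = []
for t in times:
    timestring = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime(t))
    names.append('photo_' + timestring + '.jpg')

print(names[0])                # photo_1970-01-01T00:00:00.jpg
print(sorted(names) == names)  # True - alphabetical order is chronological
```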

The updated code is shown in Listing 8-2 and saved as camera-unique.py.
import picamera
import time
camera = picamera.PiCamera()
timestring = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime())
camera.capture('/home/pi/photo_'+timestring+'.jpg')
camera.close()
Listing 8-2

Saving camera photos as a unique name

There is a potential problem with this solution. The Raspberry Pi does not include a real-time clock. If the Raspberry Pi has a network connection (wired or wireless), then it will update the time of the Raspberry Pi using network time servers. If the Raspberry Pi does not have a network connection (such as an outdoor sensor to monitor wildlife), then the date and time may not be correct. If that is the case, the files can be renamed when they are transferred to another computer for later viewing.

Using the PIR Motion Sensor to Trigger the Camera

Chapter 5 showed how a PIR motion sensor could be connected to the Raspberry Pi to detect when someone passes nearby. This can be used in conjunction with the camera to take a photo of the person as they walk near so that you know who has been entering a certain area. It can also be used for detecting and taking photographs of wildlife.

To use the next project, the PIR should be connected to the GPIO as shown in Figure 5-2. The next program will combine the code from the earlier PIR sensor example with the camera code. This will wait for motion to be detected and then capture a photo of whoever or whatever triggered the sensor. The files will have the date and time included in the filename.

The complete source code for this is in Listing 8-3 which you can name camera-pir.py.
from gpiozero import MotionSensor
import picamera
import time
# PIR sensor on GPIO pin 4
PIR_SENSOR_PIN = 4
# Minimum time between captures
DELAY = 5
# Create pir and camera objects
pir = MotionSensor(PIR_SENSOR_PIN)
camera = picamera.PiCamera()
while True:
    pir.wait_for_motion()
    timestring = time.strftime("%Y-%m-%dT%H:%M:%S", time.gmtime())
    print ("Taking photo " +timestring)
    camera.capture('/home/pi/photo_'+timestring+'.jpg')
    time.sleep(DELAY)
Listing 8-3

PIR triggered camera

This code is primarily a merge of the PIR and camera programs listed previously. The main change is that the camera code is now inside the while loop. The camera.close entry has also been removed, because the program continues capturing photos indefinitely. Ideally, close would still be called when the program is terminated; however, this code runs continuously (unless it is stopped from Mu or Ctrl-c is pressed), so the close has been left out.

The print statement shows the time that the photograph is taken. This is useful during testing but can be removed once the program is proved to be working correctly.

The code is saved as camera-pir.py which can then be run from the command line. You may want to look at this code running automatically when the Raspberry Pi is started, which was explained in Chapter 7.

Stop Motion Videos

A popular use of the Raspberry Pi and the camera module is in creating stop motion videos, where you create a video story by taking a photograph for each frame in the video. For a professional video, you might take around 24 photographs for each second of video; for a home video, around ten frames per second is a good figure. Even so, a lot of photographs need to be taken, so to make this easier, this section shows how you can add a simple push button to take the photos and then how they can be combined into a video.
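The frame counts above translate directly into the number of button presses needed. A trivial calculation (using the frame rates suggested here; adjust for your own project) shows why even a short film takes a while to shoot:

```python
def photos_needed(seconds, frames_per_second=10):
    """Number of still photographs for a video of the given length."""
    return seconds * frames_per_second

print(photos_needed(30))      # 300 photos for a 30-second home video
print(photos_needed(30, 24))  # 720 photos at a professional frame rate
```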

Crimp Connections

The buttons used so far have been mounted on a breadboard, but for this project it is easier to have a button on a lead so that it can be placed in a more convenient position. The push button that I chose has tabs that can either be soldered to or connected using 2.8mm female spade connectors. A wire can be crimped to the spade connector using a crimp tool, which removes the need for soldering. Crimp connectors are available insulated (often used in car electrics) or uninsulated; the uninsulated ones are better for use with the thin wires I have used. A photograph of the crimp tool and a suitable connector is shown in Figure 8-3.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig3_HTML.jpg
Figure 8-3

Crimp tool with female spade connector

I have placed the crimp connectors on male-to-female 12-inch (30cm) long jumper leads. This allows the female end to be connected directly to the GPIO ports of the Raspberry Pi and gives a reasonable amount of flexibility for positioning the button. This is shown in Figure 8-4.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig4_HTML.png
Figure 8-4

Push-button switch with crimped jumper wires

The button can be connected to GPIO port 10 (physical pin 19) and Ground (physical pin 6), the same as the push-button switch used in Chapter 3. The code is based on the picamera code used previously, now triggered by the button and using sequential numbering for the files. This is shown in Listing 8-4, called camera-stopmotion.py.
#!/usr/bin/python3
from gpiozero import Button
import picamera
import os
# Push button on GPIO pin 10
BUTTON_PIN = 10
image_dir = '/home/pi/film/'
# Create button and camera objects
button = Button(BUTTON_PIN)
camera = picamera.PiCamera(resolution=(720,576))
camera.hflip=True
camera.vflip=True
image_number = 1
while True:
    filename = "{}frame{:03d}.jpg".format(image_dir, image_number)
    # Loop to ensure that filename is unique
    while os.path.isfile(filename):
        image_number += 1
        filename = "{}frame{:03d}.jpg".format(image_dir, image_number)
    camera.start_preview()
    button.wait_for_press()
    print ("Taking photo {}".format(image_number))
    camera.capture(filename)
    image_number += 1
Listing 8-4

Stop motion camera program code

The code uses sequential numbering of the files, which makes it easier to combine them into a video. To prevent overwriting an existing file, there is an additional while loop to check that the file doesn’t already exist. If the file exists, then the image number is incremented until there is no matching file. This uses os.path.isfile, which checks whether a file with that name already exists.

You will also see that the filename is made up using a more complex string. This uses the string format method. The curly braces {} in the first part of the string are replaced with the arguments passed to the format method. The entry {:03d} ensures that the number is always at least three digits long, padded with leading zeros as required.
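To see what the format specifier produces, here are a few hypothetical image numbers run through the same format string as the listing:

```python
image_dir = '/home/pi/film/'

# The same format string as camera-stopmotion.py, for a few image numbers
for image_number in (1, 42, 1234):
    print("{}frame{:03d}.jpg".format(image_dir, image_number))

# /home/pi/film/frame001.jpg
# /home/pi/film/frame042.jpg
# /home/pi/film/frame1234.jpg  (numbers over 999 are not truncated)
```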

The directory image_dir will be used to store the files. This needs to be created before running the program using
mkdir /home/pi/film

There are a couple of changes made to the way that the images are taken. The first is to reduce the resolution of the camera to 720 x 576 pixels. This creates smaller files which are easier to merge into a video. The resolution can be left at the default, but that will create larger files which take longer to process. The other change is required because the camera mount I used holds the camera upside down (with the cable entering from the top). The hflip and vflip attributes have been set to True to turn the image the correct way around. This is only required if the camera is mounted upside down with the cable coming from above.

The start_preview method is also used to show a preview of the image on the screen before the button is pressed. This is so that you can see what the camera is looking at prior to taking the photograph. One restriction for the preview is that it will only show on a screen physically attached to the Raspberry Pi (e.g., through the HDMI port or if using a Raspberry Pi screen connected to the display adapter). If you want to preview the images through VNC, then you need to enable direct capture mode on the VNC server on the Raspberry Pi.

If you run the program, then it will show a preview and wait for the button to be pressed before capturing the image.

Creating the Film

Now that we have the hardware ready, it’s time to focus on creating the story. Professional stop frame animation normally uses expensive flexible models. A good example of this is the Wallace and Gromit films, which use models made partly out of clay. If you are good with modeling, then feel free to make your own clay models, but a good way of creating a simple animation on a small budget is to use existing toys such as Lego models. I have used a combination of Lego City and some Lego Friends; although these are not quite to the same scale, they are close enough if the Lego Friends models are kept in the background. You can use any other kinds of models or toys, such as action figures, dolls, puppets, or plasticine monsters.

I also made some backdrops using photos of places that I’ve visited and different colored papers and cards for the base. You could also create models and landscapes using craft materials.

Editing the Video

You should now have a series of files starting with frame001.jpg, up to however many photos you have taken. These still images can be combined into a video using a script on the Raspberry Pi or by transferring them to another computer first. I will show you how these can be combined into a video on the Raspberry Pi, which is useful if you want to automate the creation of a video, but most likely you will want to transfer the files to a PC or laptop, which provides more flexibility.

This book is about the hardware and software used to capture the photographs rather than being a guide to video editing, but I will provide some of the basics to get you started and some suggestions for special effects.

Creating the Video on a Raspberry Pi

First, we will look at how we can combine the photos into a video using the Raspberry Pi.

The files can be converted to a video using the ffmpeg command-line program. Change to the directory holding the images and then run the following command.
ffmpeg -framerate 10 -pattern_type glob -i "frame*.jpg" -c:v libx264 video.mp4

As long as the images were padded with the zeros, this will create an MP4 video using the frames in numerical order. This is at a frame rate of ten frames per second, so you will need a lot of frames to create a reasonable length video.
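The zero padding is what keeps the frames in the right order, because filename globs are expanded in lexical (dictionary) order. A quick Python illustration of the difference:

```python
# Filename globs expand in lexical order, which is why the zero padding
# matters: without it, frame10.jpg sorts before frame2.jpg.
unpadded = sorted(['frame1.jpg', 'frame2.jpg', 'frame10.jpg'])
padded = sorted(['frame001.jpg', 'frame002.jpg', 'frame010.jpg'])

print(unpadded)  # ['frame1.jpg', 'frame10.jpg', 'frame2.jpg'] - out of order
print(padded)    # ['frame001.jpg', 'frame002.jpg', 'frame010.jpg'] - correct
```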

You will also need a player to play back your video. If you don’t have one already installed, then I recommend VLC which is available from the software manager or through
sudo apt install vlc

Editing the Video Using OpenShot

The command-line tools such as ffmpeg are OK for automatically combining images into a video, but they don’t offer the same flexibility as a graphical non-linear editor. Fortunately, there is a free editor called OpenShot which can be used either on the Raspberry Pi or on a PC. If running on a Raspberry Pi, then I recommend using a Raspberry Pi 4, preferably with 4GB or more memory. If your Raspberry Pi isn’t powerful enough, then you may prefer to transfer the files to a PC and edit them there.

To install on the Raspberry Pi (or Debian-/Ubuntu-based Linux), you can install using
sudo apt install openshot

For OS X or Windows, you can download the program from the following link:

www.openshot.org/download/

When I installed OpenShot on the Raspberry Pi, it created a launch icon, but for some reason, that is not displayed. That can be fixed by deleting and re-adding the icon through the menu editor, or it can be launched using openshot-qt on the command line.

If you have already converted your photos to a video, then after launching OpenShot, the video can be imported using the Import Files option on the File menu or by dragging the file into the Project Files area. The video can then be dragged from the Project Files area onto one of the tracks in the timeline area at the bottom of the screen. This is shown in Figure 8-5.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig5_HTML.jpg
Figure 8-5

OpenShot with a simple video file

You can combine this video with other video files or photos and then export it in a suitable format.

If you only have the still photos and haven’t yet converted them to a video file, then you can import them directly into OpenShot. This is achieved using Import Files on the File menu. Select all the image files and then click Yes when asked if you would like to import the files as an Image Sequence. This will add the photos but also add a video named frame%03d.jpg. You may need to change the frame rate through the file properties, and you can rename the file to something easier to remember. The %03d part of the filename is similar to using :03d in the Python string format. Unfortunately, there are different ways of representing formatted values in different programming languages, or even within the same language.
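In fact, Python itself supports both notations. The following shows three equivalent ways of producing the same zero-padded filename:

```python
n = 7
# printf-style formatting, the notation OpenShot displays
print("frame%03d.jpg" % n)
# str.format, as used in camera-stopmotion.py
print("frame{:03d}.jpg".format(n))
# f-string, a third equivalent form in modern Python
print(f"frame{n:03d}.jpg")
# all three print: frame007.jpg
```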

The OpenShot video editor is a fully featured editor which allows you to add other photographs and video, music, or voice over. You will need to use an external microphone (such as a USB microphone) if you want to record voice directly on the Raspberry Pi.

Pan and Tilt Camera

A useful addition for the Raspberry Pi is the ability to change the direction that the camera is pointing. This can be achieved using servo motors, which were covered in Chapter 6. There is a pan and tilt unit, using two servo motors, that can be connected to the Raspberry Pi. The one used here is made by Pimoroni and is available through several Raspberry Pi suppliers. There are alternatives available, and it is possible to create your own through 3D printing, but this section concentrates on the Pimoroni model.

The pan-tilt module is available with or without a HAT for connecting to the Raspberry Pi. The HAT provides a way to communicate with the servos through I2C, as well as an output which can be used to connect to regular LEDs or WS2812 LEDs. This example uses the HAT, which controls the servos through an I2C interface. An alternative is to connect the servos directly to the Raspberry Pi, as previously covered in Chapter 6; however, I have found that the HAT positions the servos more reliably than driving them directly from the Raspberry Pi using software PWM. The pan-tilt module and HAT are shown in Figure 8-6, which also includes an optional NeoPixel lighting strip.
../images/417997_2_En_8_Chapter/417997_2_En_8_Fig6_HTML.jpg
Figure 8-6

Pan-tilt HAT with Raspberry Pi camera module and NeoPixel light

The pan-tilt module consists of two SG90 servo motors: one connects to the base and provides the pan, and the other connects to the clamp for the camera module and provides the tilt capability.

There is a Python module that is available for controlling the pan-tilt HAT. If not already installed, this can be installed using
sudo apt install python3-pantilthat
When installed, this can be imported into a Python program. The commands are easy to use. An example that moves the camera around is shown in Listing 8-5.
#!/usr/bin/python3
import pantilthat
from time import sleep
while True:
    # pan from one side to another
    pantilthat.pan(-90)
    sleep (5)
    pantilthat.pan(90)
    sleep (5)
    # pan to the middle
    pantilthat.pan(0)
    sleep (5)
    # tilt up
    pantilthat.tilt(-90)
    sleep (5)
    # tilt down
    pantilthat.tilt(90)
    sleep (5)
    # tilt to center
    pantilthat.tilt(0)
    sleep (5)
Listing 8-5

Test program for the pan-tilt HAT

This can be saved as pan-tilt-test.py and when run will move the camera from side to side and top to bottom.
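As the listing shows, the pan and tilt methods take angles between -90 and 90 degrees. If your own code calculates angles (for example, when tracking something), a small helper can keep requests within that range. This helper is my own sketch, not part of the pantilthat module:

```python
def clamp_angle(angle, minimum=-90, maximum=90):
    """Limit a requested angle to the range the pan-tilt servos can reach."""
    return max(minimum, min(maximum, angle))

# Typical use would be: pantilthat.pan(clamp_angle(requested_angle))
print(clamp_angle(120))   # 90
print(clamp_angle(-200))  # -90
print(clamp_angle(45))    # 45
```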

Using Motion to Stream Video

There are a few different ways that video from the camera can be streamed to a web browser. The one used here is called motion, which is free software available at https://motion-project.github.io/.

The operating system needs to be told about the video driver. Load the camera V4L2 (Video4Linux 2) driver:
sudo modprobe bcm2835-v4l2
Install motion using
sudo apt install motion
Enable the daemon by editing the file /etc/default/motion and changing the entry to
start_motion_daemon=yes
Then update the configuration file /etc/motion/motion.conf and change the appropriate settings. The following are recommended as a minimum:
daemon on
stream_localhost off     # allow remote viewing
rotate 180
webcontrol_port 8082

The reason for changing the webcontrol port is so that it doesn’t conflict with port 8080, which will be used for the web page that the user connects to. There are lots of other settings in the file; most can be left at their default values, but you may want to change the width and height for a higher resolution.
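For example, to stream at a higher resolution, entries such as the following could be added to motion.conf (illustrative values; check which modes your camera supports):

```
width 1280
height 720
framerate 15
```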

After configuring motion, it can be started using
sudo systemctl start motion
Set motion to start automatically at reboot using
sudo systemctl enable motion

You can test that motion is working correctly by visiting http://127.0.0.1:8081/ from a web browser on the Raspberry Pi. You can also connect from another computer on the same network by using the IP address of the Raspberry Pi in place of 127.0.0.1.

Adding a Web Interface

Finally, you can add a web interface to allow the camera to be controlled and viewed from a computer on the network. This will use the same technique as used in Chapter 7 to provide a web interface for the model train.

If you haven’t already installed Python Bottle, it can be installed using
sudo apt install python3-bottle
You will also need to create directories /home/pi/camera and /home/pi/camera/public (or if using a different directory, you will need to update the source code accordingly). Then download the jQuery file using
cd ~/camera/public
wget http://code.jquery.com/jquery-3.5.1.min.js
There are two files that need to be added, the source file pantiltcamera.py which is in Listing 8-6 and index.html which is in Listing 8-7.
#!/usr/bin/python3
from gpiozero import Servo
from time import sleep
import sys
import pantilthat
from bottle import Bottle, route, request, response, template, static_file
app = Bottle()
STEP_SIZE = 5
# Change IPADDRESS if access is required from another computer
IPADDRESS = '0.0.0.0'
# Where the files are stored
DOCUMENT_ROOT = '/home/pi/camera'
#Setup lights as NeoPixels
pantilthat.light_mode(pantilthat.WS2812)
pantilthat.light_type(pantilthat.GRBW)
# public files
# *** WARNING ANYTHING STORED IN THE PUBLIC FOLDER
# WILL BE AVAILABLE TO DOWNLOAD
@app.route ('/public/<filename>')
def server_public (filename):
    return static_file (filename, root=DOCUMENT_ROOT+"/public")
@app.route ('/')
def server_home ():
    return static_file ('index.html', root=DOCUMENT_ROOT+"/public")
@app.route ('/move')
def move_motor():
    getvar_dict = request.query.decode()
    pantilt = request.query.pantilt
    direction = int(request.query.direction)
    if pantilt == "pan":
        pan_value = pantilthat.get_pan()
        if direction == -1:
            if pan_value - STEP_SIZE >= -90:
                pantilthat.pan (pan_value - STEP_SIZE)
                return ("Pan right")
            else:
                return ("Pan right limit reached")
        elif direction == 1:
            if pan_value + STEP_SIZE <= 90:
                pantilthat.pan (pan_value + STEP_SIZE)
                return ("Pan left")
            else:
                return ("Pan left limit reached")
        else:
            return ("Invalid direction")
    elif pantilt == "tilt":
        tilt_value = pantilthat.get_tilt()
        if direction == -1:
            if tilt_value - STEP_SIZE >= -90:
                pantilthat.tilt (tilt_value - STEP_SIZE)
                return ("Tilt up")
            else:
                return ("Tilt up limit reached")
        elif direction == 1:
            if tilt_value + STEP_SIZE <= 90:
                pantilthat.tilt (tilt_value + STEP_SIZE)
                return ("Tilt down")
            else:
                return ("Tilt down limit reached")
        else:
            return ("Invalid direction")
    else:
        return ("Invalid command")
@app.route ('/light')
def set_light():
    getvar_dict = request.query.decode()
    set = request.query.set
    if (set == "on"):
        pantilthat.set_all(0,0,0,255)
        pantilthat.show()
        return ("Light On")
    else:
        pantilthat.clear()
        pantilthat.show()
        return ("Light Off")
app.run(host=IPADDRESS)
Listing 8-6

Web application for pan-tilt camera

<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Raspberry Pi Camera</title>
<!-- Add jQuery -->
<script type="text/javascript" src="/public/jquery-3.5.1.min.js"></script>
</head>
<body>
<h1>Raspberry Pi Camera</h1>
<iframe src="http://192.168.0.153:8081" style="width:400px;height:300px;"></iframe>
<div id="status">...</div>
<p>
<button onclick="moveCamera('tilt', -1)">Up</button>
<br />
<button onclick="moveCamera('pan', 1)">Left</button><button onclick="moveCamera('pan', -1)">Right</button>
<br />
<button onclick="moveCamera('tilt', 1)">Down</button>
</p>
<p>
<button onclick="setLight('on')">Light On</button><button onclick="setLight('off')">Light Off</button>
</p>
<script>
// call back function from ajax code
function updateStatus (data) {
    // Update screen with new status
    $('#status').html(data);
}
function moveCamera (pantilt, direction) {
    $.get('/move', 'pantilt='+pantilt+'&direction='+direction, updateStatus);
}
function setLight (set_status) {
    $.get('/light', 'set='+set_status, updateStatus);
}
</script>
</body>
</html>
Listing 8-7

Index.html file for the web camera

These files are based on the pan-tilt test program and the web part from the IoT model train code in Chapter 7.
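The movement logic inside move_motor can be summarized in isolation (this is a sketch of the same bounds checking, for illustration rather than a drop-in replacement): each request moves the relevant servo by STEP_SIZE degrees and refuses to step past the ±90 degree limits.

```python
STEP_SIZE = 5

def next_angle(current, direction, step=STEP_SIZE):
    """Return the new servo angle, or None if it would pass a limit.

    direction is -1 or 1, matching the buttons in index.html.
    """
    proposed = current + direction * step
    if -90 <= proposed <= 90:
        return proposed
    return None

print(next_angle(0, 1))     # 5
print(next_angle(88, 1))    # None - would pass the 90 degree limit
print(next_angle(-90, -1))  # None - already at the limit
```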

As well as controlling the motors, this code and HTML file include support for an RGBW light module. This is optional; if no light is fitted, those functions will simply have no effect.

More Video Editing

This chapter has given some examples of things that can be done using electronics combined with the Raspberry Pi cameras. It looked at using a PIR infrared motion sensor to trigger the Raspberry Pi camera. It then covered stop motion animation, taking photographs using a switch as a trigger and then merging the still photos into a video using the command line and OpenShot.

It also showed how to create a CCTV-style web interface, providing pan and tilt capability using the pan-tilt HAT.

Once you have made your first video, it can be fun to try different techniques and see how they look. It can take up a lot of time, as hand-creating individual frames is quite time-consuming, but the results can be very rewarding.

You could also look at improving the web interface for the pan-tilt camera such as being able to move to specific scenes. You may also want to add a capture button to save a static picture.

The next chapter will look at creating a robot with the Raspberry Pi.
