Chapter 16. Interfaces and Controls

Creating controls is one of the most interesting and challenging tasks that an interactive designer can take on because the very nature of tools implies an application that is defined by its use in a given task. A game or a toy or even a piece of art can all be defined by their playfulness, their aesthetic qualities, or their novelty. A tool will be judged by how it aids the completion of a task, which is a much more severe test of the quality of an application or system. As any industrial designer will tell you, creating a successful tool is one of the most challenging and rewarding tasks you can undertake. You need a good understanding of a task, its context, and its challenges, and that requires a different way of thinking about the task. When you see a task, you can analyze the different aspects of it: the hard parts and the easy parts, or the subtasks that take a long time and those that don’t. To understand how that task is understood by those who perform it and to understand the needs that they have for a tool in that task is a different matter altogether.

However, this chapter is not just about introducing some of the things you have to think about when creating tools; it’s about creating systems with which a user will interact. This can certainly be a tool, or it can be any object that a user will use for a long period of time with a specific goal in mind. That includes controllers, instruments, and systems. The kind of thinking required to successfully create a tool is invaluable when creating interactive objects. It requires a level of thinking about the human mind and body and how to adapt an object to them and for them. This process will teach you one of the fundamental aspects of all great design.

In this chapter, you’ll examine a few ready-built modules that let you use familiar interfaces in new and exciting ways. These are complex input interfaces; specifically, they’re complex in how the user can input information and complex in how the user perceives their input.

Examining Tools, Affordances, and Aesthetics

One of the interesting successes in device design is the popularization of interface design. In the same way that the design and aesthetics of the shape of an automobile or a fine shirt can be appreciated and understood, the aesthetics of the interface have become popularized. Even nontechnical audiences now regularly examine interfaces in sophisticated and critical ways, thinking not only in terms of the way that the interface helps them to function but also about the interface as a design object separate from its use. To you, the designer, that means that novel interfaces are interesting to users not only for what they enable the user to do but also for the way that the interface itself functions. The emotional relationship between users and the interfaces of their machines has become more than a simple matter of inputting data to accomplish a task. It can now be playful, fascinating, multivariate, and engaging.

This is not to say that the playfulness, aesthetic appearance, or other nonfunctional aspects of an interface are the most important. Very often, those who use a tool most understand and value its affordances most. Something that we use every day and that we have to use repetitively for important tasks should not surprise us or take us away from performing our task. Those who either do not use a tool often or do not use it for its intended purpose understand its aesthetic affordances most. A beautiful violin that has an imperfectly crafted neck is still beautiful when it is not being played, but the person who plays music with it determines the value of the object as the user of the tool. Interfaces that are used for specific tasks have many of the same characteristics: they can have many secondary characteristics, but their core user base will judge them by how they perform their primary task.

This discrepancy between the different ways of valuing an interface sometimes leads to an incorrect estimation of its value because the commercial value of something is often determined by its popular appeal. The use value of an interface is different and should never be underestimated. The aesthetics are important, and are in fact central, but the definition is in the functional affordances. That is what defines the tool or control and separates it from an object that is merely novel.

Think of an extremely simple object like a flute. It makes a specific set of tones when it is blown into in a very particular way. Now think of how complex what we do with that tone is and how many different factors can alter that tone and the way that it functions in how we experience it. From a very simple but somewhat difficult interface, we get a very wide range of potential uses, meanings, and apparent applications. Notice that I say “apparent applications.” These aren’t actual applications; a flute is designed only to facilitate making a certain set of tones. One interesting thing that happens with well-designed and simple objects and interfaces that do interesting things is that they allow for virtuosity. They let users become very good at using them in the way that some people are very good at video games or operating a backhoe or using a piece of musical software. Because a user can become a virtuoso with a tool, it becomes absolutely imperative that the tool be predictable even at the expense of changes or new features or aesthetic considerations. If a user can’t predict how a tool or a control will behave, then they can’t practice, because practice is repetition.

For artists, the notions of tools and controls are interesting elements to play with because when a viewer “uses” an art piece, it means that not only will they be interacting with it but that they will be trying to perform some action with it as well. That’s not a common experience for viewers in a traditional museum, but artists like Amit Pitaru, Toshio Iwai, and Marianne Weems have all used this to great effect in installations and performances. For interaction designers, the ability to think clearly about task performance and the way to design for tasks is very important whether designing web interfaces, industrial objects, gadgets, games, or machinery. In part because this is such a fundamental aspect of design and of being a designer, there are a number of wonderful books that address designing interactions around tools. Again, I’ll reference Don Norman and two of his books: The Design of Everyday Things and The Design of Future Things (both by Basic Books). In the book Where the Action Is (MIT Press), Paul Dourish examines different ideas of rationality and also of tools and action in Western philosophy. He examines how thinking about tools is relevant to some of the most important questions in physical computing: What is embodiment? How does that change how we think about interaction design? How do we help users act and perform? Another interesting idea in describing and understanding how interactions work is a notion called activity theory, which originated in the Soviet Union but has become popular across a range of disciplines, including design. The crux of thinking in this practice draws heavily on psychology and tends to concentrate on interactions as being between a “subject” and an “object” rather than a “user” and a “system.” For the design of controls and tools in particular, a book like Context and Consciousness (MIT Press), edited by Bonnie A. Nardi, might be of interest.

The design of tools and creation of controls is probably the most complex and fundamental task in interaction design, and it’s also the one that provides the greatest potential rewards. Whether you’re considering more artistically oriented projects or some more practical ones, thinking clearly about an interaction in terms of it enabling a task that a user might want to perform and seeing how a good control can couple with a system are important steps in designing an interaction. First, though, a short technical reintroduction to accelerometers is needed to show an important detail.

Reexamining Tilt

For technical reasons that will become apparent later in this chapter, it’s worth reexamining the accelerometer that Chapter 8 introduced. It is also a good chance to examine a fairly simple control, the LIS3LV02DL (shown in Figure 16-1), that can provide input for a tool and that has a great number of potential applications in controls and tools.

The LIS3LV02DL and connecting it to the Arduino board
Figure 16-1. The LIS3LV02DL and connecting it to the Arduino board

The LIS3LV02DL is a more accurate sensor than the other accelerometers introduced in this book. However, it requires that you use the Wire library and I2C, which makes it a little more, but not a lot more, difficult to set up. Before we start looking at code, I should mention this code is inspired by the work of Julian Bleecker.

The Wire library allows you to communicate with I2C/TWI devices. On the Arduino, SDA (data line) is on analog input pin 4, and SCL (clock line) is on analog input pin 5. Since this section of Arduino code is a little longer, the code is going to be broken up into little sections with explanations nested in between. The first bit defines a few constants that are going to be used throughout the application and that correlate to the high and low values of each of the different dimensions in which the accelerometer can detect changes:

#include <Wire.h>
#define i2cID 0x1D
#define outXhigh 0x29
#define outYhigh 0x2B
#define outZhigh 0x2D
#define outXlow 0x28
#define outYlow 0x2A
#define outZlow 0x2C

Next, the setup() method starts the Wire library so that we can receive reports back from the accelerometer and the Serial library so we can print what’s happening on the serial monitor:

void setup()

{
    Wire.begin(); // join i2c bus (address optional for master)
    Serial.begin(9600);

    Serial.println("Wire.begin");

The first message to the accelerometer sets the ID of the device that we’re going to be listening for information on. All devices have an ID that they use when communicating over I2C so that the computer or processor can tell which device is sending a signal and which device a signal should be sent to. In the case of our accelerometer, it’s 0x1D or 29:

    Wire.beginTransmission(0x1D);

The next message tells the accelerometer the destination of the message, that is, which register we want to send the message to. The register is a location in memory of the accelerometer, and the register that we’re sending the message to here, 0x20, is the one that determines what mode the device should be in: regular mode or testing. The next message sets the device mode. If you’re interested in more about the messages for the accelerometer, check the user manual for the accelerometer:

    Wire.send(0x20); // Tell the device the next message will set up the control
    Wire.send(0x87); // Power up the device, enable all axes, don't self-test
    Wire.endTransmission(); // all done!

}// end of setup

void loop() {

Next, we create variables to hold all the different values that we’re going to be getting back from the accelerometer:

byte z_val_l, z_val_h, y_val_l, y_val_h, x_val_l, x_val_h;

The x_val is being stored as a word, which is the same as an unsigned int:

word x_val;

There’s an important point to understand about the values that we’ll be getting back from the accelerometer: each reading arrives as two separate bytes, so to get a whole value, we have to put the two together, the low part of the int, called x_val_l, and the high part of the int, called x_val_h, in the following example. We’ll be using this technique for the x, y, and z values, so we’ll see it again.

Here, we tell the accelerometer which value we want:

sendWireMessage(outXlow); // we want the low x value

if(Wire.available()) {
    x_val_l = Wire.receive();
}
sendWireMessage(outXhigh); // we want the high x value

if(Wire.available()) {
    x_val_h = Wire.receive();
}

With this code, the x value is assembled by bit-shifting the high and low bytes into place. You could do this like so:

x_val = x_val_h;
x_val <<= 8;
x_val += x_val_l;

Or, you can use the word() method. The word() method has the following signature:

word(h, l);

h is the high-order (leftmost) byte of the word, and l is the low-order (rightmost) byte of the word.

This simplifies the code greatly, as you can see here:

x_val = word(x_val_h, x_val_l);

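As a quick sanity check with hypothetical byte values (these aren’t real readings, just an illustration), both approaches produce the same result:

byte hypothetical_h = 0x02;  // hypothetical high byte from the accelerometer
byte hypothetical_l = 0x1A;  // hypothetical low byte
word assembled = word(hypothetical_h, hypothetical_l); // (0x02 << 8) + 0x1A = 0x021A, or 538 in decimal
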
Figure 16-2 shows what is going on.

Creating an integer from two different bytes
Figure 16-2. Creating an integer from two different bytes

The rest of the code follows the same pattern for the y and z values:

int y_val;
sendWireMessage(outYlow);
while(Wire.available()) {
    y_val_l = Wire.receive();
}

sendWireMessage(outYhigh);

if(Wire.available()) {
    y_val_h = Wire.receive();
}

y_val = word(y_val_h, y_val_l);

// ------ read the z-axis
int z_val;
sendWireMessage(outZlow);
while(Wire.available()) {
    z_val_l = Wire.receive();
}

sendWireMessage(outZhigh);
if(Wire.available()) {
    z_val_h = Wire.receive();
}

z_val = word(z_val_h, z_val_l);
delay(250);
}

The sendWireMessage() method is where sending the data over the I2C connection is taken care of. While this code could be repeated over and over again, it’s far more efficient to write this functionality once, in one place, and use it again and again:

void sendWireMessage(byte message) {
    Wire.beginTransmission(i2cID);
    Wire.send(message);
    Wire.endTransmission();
    Wire.requestFrom(i2cID, 1);
}

You may ask yourself whether we really just went through bytes, bits, and all that just to look at another accelerometer. In a word, no. You now have a more solid footing in working with binary information, and that’s going to be very important when reading and storing GPS data.

Exploring InputShield

The InputShield shown in Figure 16-3 is one of the two open source hardware products from Liquidware that you’ll be seeing in this chapter. The InputShield is a small shield that fits on top of the Arduino controller and provides a small joystick, two input buttons, and a vibration motor in much the same configuration as the classic Nintendo Game Boy. You could think of this as being a gaming interface, but its familiarity and its simplicity make it just as useful in physical computing, in the control of robotic instruments, or even as a rudimentary musical instrument.

The Liquidware InputShield
Figure 16-3. The Liquidware InputShield

One of the interesting transformations that has been going on in the Arduino community is the move toward more and more sophisticated devices and configurations. When the Arduino, or its ancestor the Wiring board, was first introduced, the difference between an embedded device and a laptop computer was much more pronounced than it is today. As users in the real world have become more familiar with small computers, the notion of what an embedded device can be and what an embedded computing platform can do has changed greatly. Liquidware is one group that is pushing this the furthest, creating peripheral devices for the Arduino that let you build devices as complex as some PDAs while still using the Arduino environment. There are other companies like Libelium and Adafruit that are pursuing similar projects as well.

Reading data from the InputShield is quite easy. Depending on the mode, you simply need to read the pins mapped to the joystick and the buttons. Table 16-1 shows how these are mapped.

Table 16-1. Arduino pin to mode map

Arduino pin    Mode A                        Mode B
4              Button B                      None
5              Button A                      None
6              Joystick button               None
7              Vibration enable              None
8              None                          Button B
9              None                          Button A
10             None                          Joystick button
11             None                          Vibration enable
Analog 0       Joystick lateral movement     None
Analog 1       Joystick vertical movement    None
Analog 2       None                          Joystick lateral movement
Analog 3       None                          Joystick vertical movement

The different modes let you use the remaining pins on the Arduino for different purposes, such as controlling a servo motor, reading GPS data, sending data to an LCD display, and so on.

A small vibration motor is attached to the bottom of the shield. The vibration motor will vibrate when pin 7 (mode A) or pin 11 (mode B) is grounded. This lets you send the user physical feedback quickly and easily. The buttons output digital signals, and the joystick output uses varying voltage based on the rotation angle in both lateral and vertical directions. For instance, in mode A, the following code checks and prints both lateral and vertical values of the joystick:

void setup() {
    Serial.begin(9600);   // start the serial connection so the values can be printed
}

void loop() {
    unsigned int joyLatValue;
    unsigned int joyVertValue;
    joyLatValue = analogRead(0);    // mode A: joystick lateral movement
    joyVertValue = analogRead(1);   // mode A: joystick vertical movement
    Serial.println(joyLatValue);
    Serial.println(joyVertValue);
}
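
The buttons can be read with digitalRead(), and the vibration motor is triggered by grounding its enable pin. Here is a minimal mode A sketch that vibrates while button A is held down; the pin numbers come from Table 16-1, but the assumptions that the buttons read LOW when pressed and that pulling the vibration pin LOW turns the motor on should be checked against the InputShield documentation:

int buttonAPin = 5;   // Button A in mode A
int vibePin = 7;      // vibration enable in mode A

void setup() {
    pinMode(buttonAPin, INPUT);
    pinMode(vibePin, OUTPUT);
    digitalWrite(vibePin, HIGH);   // motor off
}

void loop() {
    if (digitalRead(buttonAPin) == LOW) {   // assumed: LOW means pressed
        digitalWrite(vibePin, LOW);         // ground the enable pin: motor vibrates
    } else {
        digitalWrite(vibePin, HIGH);        // motor off
    }
    delay(20);
}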

Now you can connect the InputShield to an Arduino and use the serial port to create a video game control. If you’re interested in using a regular video game controller over a USB port, you can look into a library like proCONTROLL, which allows you to communicate with joysticks and video game controllers in a Processing application. What might be more interesting is the easy control of robotics and physical computing that the Arduino affords you. Using the InputShield to control multiple servo motors, for instance, is an excellent way to get fine-grained and natural control over some simple robotics. To save a bit of space, the discussion on controlling multiple servos will wait until later in this chapter, but rest assured that the code there can very easily be used with the InputShield as well.

Understanding Touch

The cultural moment of the touchscreen has most certainly arrived. By the time you read this book, it’s quite likely that many consumer laptops will be equipped with sophisticated touchscreens. The wild popularity of the iPhone, and indeed of its interface, has taken the touchscreen from a sophisticated tool seen in interface research labs and expensive trade demos to the primary interface for devices used throughout contemporary life. It is soon to become as ubiquitous, as commonplace, and as well understood as the keyboard, the mouse, or even the dial. There is, however, quite a difference between these common input devices and the touchscreen. The first difference is the multimodal data: we can evaluate the location of the touch, the pressure on the screen, the relative distance of other touches in a multitouch environment, and, over time, the gestures consisting of one or many points.

There are also a great number of simple and common tasks that are difficult to do on a touchscreen. Typing, for instance, is a challenge that many different approaches and devices, from swipe-based typing systems to projected keyboards, have attempted to simplify and standardize, and numerous design philosophies are still tackling the problem. Another problem is the language of touchscreen gestures. Since one of the strengths of the touchscreen is the ability to use a gesture expressively, using two fingers to zoom in, for instance, the standardization of these gestures and the way that they are used becomes important to consider. A user will expect certain gestural behaviors to work across all systems and applications; ensuring consistency is an important part of the designer’s job.

Exploring Open Source Touch Hardware

The idea of open source software is a fairly simple one: When you allow an end user to see and modify the source code, your project is open source. In other words, the source can be read and reused with some reservations. Open source hardware is a little different. In one sense, hardware can’t be truly free and open source because, somewhere, someone along the line will need to pay for materials and hardware. However, the schematics and the source code that runs on the hardware can be open source and available. This is the philosophy of two interesting touchscreen projects that are currently providing schematics and hardware, selling kits, and sharing source code openly. The first project is Nort_/D, a touchscreen project built on and around openFrameworks. The other is the Liquidware TouchScreen built on the Arduino platform.

Nort_/D

Nort_/D is a design collective that creates hardware and software for multitouch screens. The collective provides both instructions for creating touchscreens and also kits that include screens, cameras, and supports. Its aim is to make multitouch readily available in an open source fashion. Over the past few years, Nort_/D has created a few different multitouch systems: the CUBIT multitouch system, the TouchKit, and Tactility, which uses LCD screens. The TouchKit includes a Nort_/D Multitouch Screen, Nort_/D’s Multitouch Software (computer vision, calibration, API), and a camera (calibrated for use with the TouchKit).

The ofxTouchApp class is the main superclass for ofxTouch applications. It is a substitute for the typical openFrameworks ofBaseApp from which all applications usually inherit, and it adds multitouch-specific functionality to ofBaseApp. All the event handlers in the ofxTouchApp class are passed a reference to an ofxTouchFinger object that notifies the application of the position, size, and order of the finger touching the surface.

The vector<ofxTouchFinger> fingers vector stores all the blobs (areas detected in the video that are moving together) that are detected and tracked on the touchscreen. Each is stored as an ofxTouchFinger, a simple class that ofxTouchApp uses to store information about the location and movement of a finger on the touchscreen.

The class provides these event handlers to notify your application that a finger event has happened and to pass a reference to the ofxTouchFinger instance representing the blob that triggered the event. The names are quite self-explanatory:

  • fingerDragged( const ofxTouchFinger& )

  • fingerPressed( const ofxTouchFinger& )

  • fingerReleased( const ofxTouchFinger& )

The ofxTouchApp class also has some convenience methods that make it easier for you to test your application with stored video files:

setSimulationMode( bool value )

Sets whether the application is going to be live (using a touchscreen) or a simulation using the mouse instead of finger touches.

setVideoPlayerMode( bool value )

Passes whether you want the application to be running from a camera or from a test video file that will be used for setup and demo purposes.

setVideoPlayerFile( string path )

Passes a file that you would like the ofxTouchApp class to use in place of live finger detection. This is excellent for testing or configuring your touchscreen when you first set it up.

The other key component of the ofxTouch add-on is ofxTouchFinger, which is used to store information about finger events. These are the two methods useful for keeping track of the fingers:

int id()

Every finger object has an ID assigned to it.

int initialOrder()

This is the order in which this finger touched the screen relative to the others.

To determine information about the finger, each ofxTouchFinger also has four integer properties (x, y, width, and height) and a float property (radius).

Your application’s .h file will look something like this:

class testApp : public ofxTouchApp {

Note that instead of extending the ofBaseApp class, the application extends the ofxTouchApp class:

    public:
        bool bDragging;
        void setup();
        void update();
        void draw();
        void keyPressed  (int key);
        void keyReleased  (int key);
        void mouseMoved(int x, int y );
        void mouseDragged(int x, int y, int button);
        void mousePressed(int x, int y, int button);
        void mouseReleased();
        void fingerDragged( const ofxTouchFinger& finger );
        void fingerPressed( const ofxTouchFinger& finger );
        void fingerReleased( const ofxTouchFinger& finger );

};

In the main.cpp file of an ofxTouchApp, instead of seeing the standard call to ofRunApp(), you’ll see a call to ofxTouchAppRelay() wrapped inside the ofRunApp() call:

ofRunApp(new ofxTouchAppRelay(new testApp));

If you take a look at the ofxTouchApp.h file, you’ll see the following properties. If you read Chapter 14, the first two objects should be familiar to you.

ofxTouchContourFinder contourFinder

This is an instance of the ofxOpenCV contourFinder that you read about in Chapter 14.

ofxTouchBlobTracker blobTracker

This is an instance of the blob tracker that you looked at in Chapter 14.

ofxTouchGraphicsWarp graphicsWarp

This is an instance of the ofxTouchGraphicsWarp class that is used to calibrate the camera and screen. This is a more advanced setting that allows you to configure the application for specific camera types and screen configurations. There isn’t space to cover it in full, but in short, combined with the ofxTouchVisionWarp class, which sets the properties of your particular screen and camera, it lets you set up the camera and projector in different configurations. Instead of physically aligning the camera and projector perfectly so that the projector covers the entire TouchKit, you can use ofxTouchGraphicsWarp to adjust the graphics warp so that the projector and the TouchKit screen match.

Moving on, a very simple application might look like the following:

void fingerApp::setup() {
    setSetupMode( true );
    cwidth = 800;
    cheight = 800;

    //setSimulationMode( true );  //uncomment this to use mouse simulation
    setVideoPlayerMode( true );   //comment this out to run live from cam
    setVideoPlayerFile("touchkit-fingers.mov");
    bDragging = false;
    ofEnableAlphaBlending();
}


void fingerApp::update(){
    // nothing here
}


void fingerApp::draw() {

Since the fingers vector stores all the detected touch fingers, you can use this code to display the fingers:

    for( int i=0; i<fingers.size(); i++ ) {
        ofSetColor( 255, 255, 255);
        ofCircle( fingers[i].x, fingers[i].y, 4*fingers[i].radius );
        ofSetColor( 255, 0, 0 );
        ofCircle( fingers[i].x, fingers[i].y, 3*fingers[i].radius );
        ofSetColor( 255, 255, 255);
        ofCircle( fingers[i].x, fingers[i].y, 2*fingers[i].radius );
        ofSetColor( 255, 0, 0);
        ofCircle( fingers[i].x, fingers[i].y, 1*fingers[i].radius );
        ofSetColor( 255, 255, 255 );
        ofCircle( fingers[i].x, fingers[i].y, 0.5*fingers[i].radius );

    }
}

This will create a simple bull’s-eye at the location of each detected finger touch. There are some other excellent examples for download from the Nort_/D site at http://touchkit.nortd.com. For information on their other projects, look at nortd.com.

Liquidware TouchShield

Another much smaller touchscreen device that you might be interested in exploring is the TouchShield developed by Liquidware (Figure 16-4). It plugs into the Arduino and lets you use a small touchscreen either to send information to a listening laptop or desktop computer or to create a small device that relies entirely on the Arduino for its processing power. The TouchShield greatly simplifies working with resistive touch sensing and drawing.

The Liquidware TouchShield
Figure 16-4. The Liquidware TouchShield

To connect the TouchShield to an Arduino Diecimila or Duemilanove, you simply mount it on top of the board. The TouchShield does quite different things than the standard Arduino components. In addition to handling touch data, the TouchShield also draws to its screen. The Liquidware team took a logical approach to providing drawing methods and a programming interface for the developer: they emulated the Processing language and application structure.

The very simplest TouchShield application looks much like a Processing application. For instance, to draw a rectangle to the screen of the TouchShield, your application would look like this:

COLOR red = {255,0,0};
COLOR blue = {0,0,255};

void setup() {
    rect(20,20,80,80, red, blue);
}
void loop(){}

To upload your code to the TouchShield, you’ll need to use a slightly modified version of the Arduino IDE that the Liquidware team has put together and that is available for download on the Liquidware site. The only modifications that have been made are to allow for the uploading of code to the TouchShield. When you have an Arduino application that communicates with the TouchShield, you are in essence running two applications at the same time: one on the TouchShield and one on the Arduino. You could think of this metaphorically as being like the relationship between code running on your central processing unit communicating with code running on your graphics processing unit.

While the code for the TouchShield is quite similar to Processing, it is not actually Processing, and so it has a few slightly different methods:

beginCanvas()

This begins the drawing, allowing the TouchShield to accept drawing functions from the Arduino. It starts the serial connection that the TouchShield uses to read data from an Arduino at 9600 baud (the equivalent of calling Serial.begin(9600) in the Arduino) and sets up the TouchShield to receive drawing commands from the Arduino. These can be things like the following:

|RECT20208080

Note the pipe (|) at the beginning of the command. You would send this like so:

Serial.write("|RECT20208080");

This will draw a rectangle at 20, 20, with a width and height both of 80 pixels. Almost all the drawing commands for the TouchShield can be called in this way via the Serial port. These commands are documented on the Liquidware site in greater detail.
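
For example, once the TouchShield has called beginCanvas(), a sketch running on the Arduino itself could draw over the serial connection like this (the 3-second startup delay is an arbitrary precaution, and the command string is the same example shown above; check the Liquidware documentation for the full command set):

void setup() {
    Serial.begin(9600);               // match the rate used by beginCanvas()
    delay(3000);                      // give the TouchShield time to start up
    Serial.write("|RECT20208080");    // rectangle at (20, 20), 80 x 80 pixels
}

void loop() { }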

delay()

Like the Arduino version of this method, this delays for the number of milliseconds passed to it.

random()

This generates a random number (a long).

To read the input locations and information about the user touching the screen, the TouchShield library defines the following methods and variables:

gettouch()

This checks the touchscreen for any registered touches. The OLED screen on the TouchShield is resistive, so a user will need to be pushing on it a little bit harder than they would need to with a capacitive screen. Once the gettouch() method is called, the mouseX and mouseY variables will have values assigned to them for that loop. The best way to do this is to call gettouch() in the loop() method of your application and then do something with the newly updated position values:

void loop(){
    gettouch();
    ellipse(mouseX, mouseY, 20,20);
}
mouseX

This stores the X location of the touch when the gettouch() method is called.

mouseY

This stores the Y location of the touch when the gettouch() method is called.

The TouchShield also defines quite a few different methods to draw on the canvas:

ellipse()

Draws an ellipse with the first two parameters specifying the location of the ellipse and the second two determining the width and height, like so: ellipse(x, y, width, height).

line()

Draws a line from point to point, like so: line(x1, y1, x2, y2).

point()

Draws a single pixel.

rect()

Draws a rectangle with a specified beginning point, width, and height, like so: rect(x, y, width, height).

triangle()

Draws a triangle with three specified points, like so: triangle(x1, y1, x2, y2, x3, y3).

quad()

Draws a polygon with four specified lines from four specified points, like so: quad(x1, y1, x2, y2, x3, y3, x4, y4).

To set the color properties and line properties when you’re drawing, you can use the following methods (note that all the colors are passed as three values, red, green, and blue, each between 0 and 255):

background()

Colors the whole screen with the color value passed to it. This also overwrites all the graphics on the screen.

fill()

Sets the color used to fill the inside of shapes. This method takes a color value.

noFill()

Allows a shape to be drawn with no fill color.

stroke()

Sets the color of a shape’s outline or of text. This method takes a color value.

noStroke()

Allows a shape to be drawn without an outline around it, just as in Processing.

strokeWeight()

Changes the outline thickness of an ellipse or rectangle in pixels.

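Putting a few of these drawing and color methods together, a small TouchShield sketch might look like the following (the coordinates and colors are arbitrary values chosen for illustration):

void setup() {
    background(0);              // clear the screen to black
    fill(255, 0, 0);            // red fill
    stroke(255, 255, 255);      // white outline
    strokeWeight(2);
    rect(20, 20, 60, 40);       // filled, outlined rectangle
    noFill();
    ellipse(80, 100, 40, 40);   // outline-only circle
    line(0, 0, 120, 120);
}

void loop() { }
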
One of the great strengths of open source software is that it allows developers to alter some of the internals of the program or add new features to support different functionality. Since the TouchShield is in a sense like an Arduino controller, in that it has its own processor that the Arduino IDE must compile and load code onto, you’ll need a version of the Arduino IDE that knows how to create and upload code for the TouchShield. Liquidware created a version of the IDE that adds this new board into the available configurations and didn’t change anything else, so you can use the TouchShield as well as all your other Arduino boards with this altered Arduino IDE. Those altered IDEs are available on the Liquidware site, http://liquidware.com. To work with the TouchShield, you’ll need to download and install the appropriate one for your operating system. Once you’ve done that, you’ll see the TouchShield appear in your Arduino IDE (Figure 16-5).

Selecting the TouchShield in the modified Arduino IDE
Figure 16-5. Selecting the TouchShield in the modified Arduino IDE

In the Board menu, you’ll notice a couple of new boards, the TouchShield Slide and the TouchShield Stealth, the two touch-enabled shields that Liquidware currently manufactures. The workflow for uploading code to a TouchShield attached to a Duemilanove would look like this:

  1. Select Tools→Boards→TouchShield Slide.

  2. Write the TouchShield code.

  3. Touch the programming button on the TouchShield.

  4. Compile and upload the application code to the TouchShield from the Arduino IDE by hitting the Upload button.

  5. Select Tools→Boards→Arduino Duemilanove (or the name of your board).

  6. Write Arduino code.

  7. Compile and upload the application code to Arduino controller from the Arduino IDE by hitting the Upload button.

That’s all there is to it. Let’s take a look at a couple of examples.

Drawing to the TouchShield Screen

The following example shows how to draw different shapes and characters to the screen:

#define SHAPE_SQUARE  0
#define SHAPE_CIRCLE  1
#define  SHAPE_X   2
#define  SHAPE_TRI   3

int nextShape = SHAPE_SQUARE;

int pMouseX;
int pMouseY;
char xposStr[4];
char yposStr[4];

void setup() {
    pMouseX = 0;
    pMouseY = 0;
}

void loop()
{

The gettouch() method polls the screen to see whether any touches have been registered. When you want to detect touches on the screen, you’ll want to do this each time loop() is called:

    gettouch();

    if(pMouseX != mouseX || pMouseY != mouseY)
        touched();   // screen touched so draw new shape

    delay(50);

} //end loop

void touched()

{
    nextShape++;
    if(nextShape > 3) { // shape 3 is the last shape
        nextShape = 0;
    }

Just as in a Processing application, this clears the stage by overwriting everything:

    background(0);

Now the location of the touch is stored for comparison to the next time the loop() method is called:

    pMouseX = mouseX;
    pMouseY = mouseY;

The text() method writes characters to the screen; the first parameter is the value to be written, and the next two are the x and y locations on the screen:

    text(mouseX, mouseX, mouseY+20);
    text(mouseY, mouseX+30, mouseY+20);

Depending on the value of the nextShape variable, a different shape is drawn to the stage:

    switch(nextShape)
    {
    case SHAPE_SQUARE:
        fill(0, 255, 0);
        rect(mouseX-20, mouseY-20, 30, 30);
        break;

    case SHAPE_CIRCLE:
        fill(0, 0, 255);
        ellipse(mouseX-10, mouseY-10, 15, 15);
        break;

    case SHAPE_X:
        stroke(255, 255, 255);
        line(mouseX - 10, mouseY, mouseX+10, mouseY);
        line(mouseX, mouseY - 10, mouseX, mouseY+10 );
        break;
    case SHAPE_TRI:
        fill(255, 255, 0);
        triangle(mouseX+10, mouseY-10, mouseX-10, mouseY+10,
              mouseX+20, mouseY+10);
        break;
    }
}

The application will look something like Figure 16-6.

Drawing a circle and a triangle to the TouchShield
Figure 16-6. Drawing a circle and a triangle to the TouchShield

Now that you’ve seen how to do simple drawing operations with the TouchShield, you can use the touch information and send it to the Arduino to control other objects. The following example allows the user to select the color of the circle they are drawing, draw a circle, and send those values to the Arduino controller. From there, the Arduino controller can, for example, use the x and y positions to set the rotation of a servo, relay those values to another listening controller, or, as shown in the next example, position a light.

Controlling Servos Through the TouchShield

In this example, though, those values will be used to position two servo motors that will shine a three-color LED with the color selected by the user. Before putting this code into the Arduino IDE, make sure that you have the TouchShield board selected, or the code won’t compile because the compiler will not have access to the right libraries:

COLOR curr;

Notice that the color is stored as a COLOR type. This is a struct that is defined in the TouchShield libraries and has red, green, and blue integer properties that can be read and written:

int redR[] = { 0, 0, 20, 20 };   // rectangle corner coordinates: x1, y1, x2, y2
int blueR[] = { 0, 20, 20, 40 };
int yellowR[] = { 0, 40, 20, 60 };
int greenR[] = { 0, 60, 20, 80 };
int whiteR[] = { 0, 80, 20, 100 };
int blackR[] = { 0, 100, 20, 120 };

COLOR red = {255,0,0};
COLOR green = {0,255,0};
COLOR blue = {0,0,255};
COLOR yellow = {255,255,0};
COLOR white = {255,255,255};  // used by drawBackground() and check_touch()

boolean changedColor = false;

void setup() {

    drawBackground();
    Serial.begin(9600);
    delay(3000);
    Serial.print('U');
}

The drawBackground() method draws six boxes on the left of the TouchShield that the user can select. These will be used to set the color value of the color LED attached to the two servo motors:

void drawBackground()
{

Draw the first rectangle in the top left of the screen, first setting the fill using the fill() method, which is passed the red, green, and blue components of one of the COLOR values defined earlier:

    fill(red.red, red.green, red.blue);
    rect(redR[0], redR[1], redR[2] - redR[0], redR[3] - redR[1]);  // rect() takes x, y, width, height

Now draw the other rectangles for blue, yellow, green, and white:

    fill(blue.red, blue.green, blue.blue);
    rect(blueR[0], blueR[1], blueR[2] - blueR[0], blueR[3] - blueR[1]);
    fill(yellow.red, yellow.green, yellow.blue);
    rect(yellowR[0], yellowR[1], yellowR[2] - yellowR[0], yellowR[3] - yellowR[1]);
    fill(green.red, green.green, green.blue);
    rect(greenR[0], greenR[1], greenR[2] - greenR[0], greenR[3] - greenR[1]);
    fill(white.red, white.green, white.blue);
    rect(whiteR[0], whiteR[1], whiteR[2] - whiteR[0], whiteR[3] - whiteR[1]);
}

void loop() {

The refresh rate of the TouchShield is quite a bit slower than that of a laptop computer. If you call the background() method in the loop and then do a relatively expensive drawing operation like drawBackground(), you’ll see the screen begin to flicker:

    gettouch();
    check_touch();

    Serial.print(mouseX);
    Serial.print(mouseY);

If the color has changed, then send the values over the hardware Serial port to the Arduino controller:

    if(changedColor) {
        Serial.print(curr.red);
        Serial.print(curr.green);
        Serial.print(curr.blue);
    }
}

void check_touch(){
    changedColor = true;
    // if the touch is to the right of the color boxes (x > 20), it isn't on any of them
    if(mouseX > redR[2]) {
        changedColor = false;
        return;
    }
    if(mouseY < redR[3]) {
        curr = red;
    }
    else if(mouseY > blueR[1] && mouseY < blueR[3]) {
        curr = blue;
    }
    else if(mouseY > yellowR[1] && mouseY < yellowR[3]) {
        curr = yellow;
    }
    else if(mouseY > greenR[1] && mouseY < greenR[3]) {
        curr = green;
    }
    else if(mouseY > whiteR[1] && mouseY < whiteR[3]) {
        curr = white;
    }
    else {
        changedColor = false;
    }
}

Setting Up Communication Between Arduino and TouchShield

In this example, you’ll see how the Arduino board and the TouchShield can communicate. The principal idea in this example is to use two servos together to create a hemispheric range for a light, that is, to enable an LED to cover an area that more or less would look like the top half of a sphere. The way you do that is by combining two servos and using one to provide 180 degrees of motion on the x-axis and the other to provide 180 degrees of motion on the y-axis. There are quite complex ways to do this, but there are simple ones that work almost as well. You can simply attach one servo to the other and then attach the LED to the second servo. This provides you with the range of motion to point the light. Using a “super-bright LED” means that you can attach a small object to a small servo and still create a good amount of light.

Figure 16-7 shows a diagram of how to connect the two servos together and how they would be wired to the Arduino.

Connecting the servos and LED to the Arduino
Figure 16-7. Connecting the servos and LED to the Arduino

Now for the Arduino code, which receives the x and y values that will be used to set the two servos and the color values that will be used to set the LED. This code is for the Arduino, so don’t forget to make sure that you have the correct board selected in your Arduino IDE, or the code may not compile:

#include <Servo.h>

// create servo object to control a servo
Servo xServo;
Servo yServo;
int redPin = 5;
int greenPin = 6;
int bluePin = 11;

// variable to store the servo position
int xpos = 0;
int ypos = 0;

void setup()
{

Here you start up the communication between the Arduino and the two servos by calling the attach() method of the Servo instance:

    xServo.attach(9);
    yServo.attach(10);

    Serial.begin(9600);
}

void loop()
{
    int tmpx, tmpy, tmpred, tmpblue, tmpgreen = 0;

Get all the available bytes that have been sent over the Serial port. I’ve set my hardware up so this data is being sent from the TouchShield, but the data could just as easily be sent from a Processing or oF application with which the user is interacting. The positions and colors will be stored in the five integers:

    if(Serial.available() > 4) {
        tmpx = Serial.read();
        tmpy = Serial.read();
        tmpred = Serial.read();
        tmpblue = Serial.read();
        tmpgreen = Serial.read();

After the five values are received, set the respective red, green, and blue values of the three-color LED:

        analogWrite(redPin, int(tmpred));
        analogWrite(greenPin, int(tmpgreen));
        analogWrite(bluePin, int(tmpblue));

Now, set the location of each servo. Since the locations arrive as values between 0 and 255, they need to be mapped down to the 0–180 range that the servo expects.

       xpos = map(tmpx, 0,255, 0,180);
       ypos = map(tmpy,0,255,0,180);
       xServo.write(xpos);
       yServo.write(ypos);
       delay(15);

    }
}

You could use this same technique to position a camera, a microphone, or many other kinds of devices. The pairing of multiple servos is one of the basics of complex mechanical motion and is a very useful thing to look into. You might check out Robot Building for Beginners by David Cook (Apress). It is a good introduction to mechanics and robotics that is aimed at newcomers.

Another approach to this is to use a pan and tilt servo unit like the one shown in Figure 16-8, made by Trossen Robotics. It’s actually just two servo motors with a stronger mounting, which means you can attach larger lights or cameras to the servo.

The Trossen pan-tilt servo
Figure 16-8. The Trossen pan-tilt servo

The TouchShield is a wonderful tool for creating small touch-enabled applications. One thing to consider when working with the TouchShield is the relatively slow refresh rate of the drawing on the TouchShield. The code used with the TouchShield is based on Processing, but the hardware is not nearly as powerful as a laptop or desktop computer. So, you’ll need to consider what drawing functions to call and when to call them to ensure that you don’t overtax the hardware. Another issue to consider is the power that the TouchShield requires. If you’re going to use the TouchShield as a mobile device, then you might want to consider using something like the Lithium Backpack, which provides a few extra hours of battery power.

Communicating Using OSC

OpenSoundControl (OSC) was originally designed as a protocol for communication between computers, sound synthesizers, and any other devices to be used in a networked setting. It allows any device that can send and receive OSC messages to share music performance data in real time over a network. In this way it’s similar to MIDI but is designed expressly for network communications, whereas MIDI was designed more as a low-level way of communicating between devices that couldn’t be networked. This is largely because MIDI was developed in 1983 before networks were fast enough to consider sending live, real-time audio and video data over them. Since it’s a network protocol, OSC allows musical instruments, controllers, and multimedia devices to communicate on any network, including over the Internet.

You may remember the UDP protocol for network communication that we mentioned earlier. OSC is communication over UDP just as sending and receiving web pages (HTTP) is communication over TCP. Now, there’s an important caveat here: nothing says OSC has to be over UDP; that’s just the way most people use it. You might want to make your OSC application communicate over TCP, and that’s perfectly valid as well. UDP’s advantages of being light, fast, and easy to parse make it extremely valuable for real-time audio communication. OSC enables real-time interactions between almost any kind of device that can be connected to a network and also allows you a lot of flexibility in terms of the data you can send over the wire. This means your applications or devices can communicate with each other at a high level or a low level, however you see fit for your application.

What can you use it for? Since OSC is a data protocol, any application that can send OSC messages can use it to communicate with any other application that can understand OSC messages. You can use OSC to communicate between applications like oF and Processing. You can also use OSC to communicate between Processing or oF and another platform like Max/MSP, PureData, or SuperCollider. Since OSC is really just communication using the UDP protocol, by using an Ethernet shield an Arduino can send and receive OSC messages as well. So, what does an OSC message look like? Usually something like this:

'/pitch', '122'

That looks a little strange, but it’s actually fairly straightforward. The message has an address, in this case /pitch, and a value, in this case 122. A message can have any number of values attached to it, but generally the messages are quite short. They send things like a note, a change in pitch, a mouse location, a camera action, movement detection, and so on. In practice, OSC isn’t all that well suited for sending massive amounts of data at a time; it’s far better at sending lots of small messages quickly.

For working with OSC in oF, you can use the ofxOSC library, created by Damian Stewart and available from http://addons.openframeworks.cc. It is built on top of another OSC library, oscpack, written by Ross Bencina. It consists of three main classes that should have a familiar breakdown:

ofxOscSender

This is the object that dispatches messages. It doesn’t need to accept connections from a client. Instead, it simply dispatches the messages to anything listening on the port that it uses.

ofxOscReceiver

This is the object that receives and parses messages; it listens on a particular port number and then can parse messages by name and get their values.

ofxOscMessage

This represents the message. OSC messages can have int, string, or float values, and the ofxOscMessage class provides methods for adding all of these to a message.

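As a rough sketch of how the sending side fits together (the port number and address here are arbitrary choices for illustration, not values required by the library), an oF application could send the /pitch message shown earlier like this:

#include "ofMain.h"
#include "ofxOSC.h"

class senderApp : public ofBaseApp {
    public:
        ofxOscSender sender;

        void setup() {
            // send to an application listening on port 9000 on this machine
            sender.setup( "localhost", 9000 );
        }

        void mousePressed(int x, int y, int button) {
            ofxOscMessage m;
            m.setAddress( "/pitch" );
            m.addIntArg( 122 );
            sender.sendMessage( m );
        }
};
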
For using OSC in Processing, check out oscP5, an OSC implementation for the Processing environment developed by the prolific Andreas Schlegel. If you have worked with oscP5 in the past, please note that you will have to import netP5.* in addition to importing oscP5.*:

OscP5

This is the main class that enables you to initialize the oscP5 library and set up a server that can send and receive messages. This class handles both sending and receiving messages.

OscProperties

This is used to start oscP5 with more specific settings, and it is passed to the OscP5 object in the constructor when initializing a new OscP5 object.

OSCArgument

This represents the value of the message sent and allows you to read the message back in several different formats: int, boolean, float, String, or char.

OscMessage

Like the ofxOSCMessage class, this contains an address, a type, and multiple values.

Though space constraints don’t allow us to delve further into OSC, it might be worth checking out the netP5 and the oscP5 libraries if you’re looking to share real-time data across several different machines. One immediately practical application for this protocol is communication with the Wii Remote (unofficially nicknamed Wiimote), which you’ll learn how to do in the next section using the ofxOSC library.

Using the Wiimote

By now most of you will have seen the Nintendo Wiimote, the small accelerometer-driven controller for the Wii system. The Wii Remote is the primary controller for Nintendo’s Wii console. A main feature of the Wii Remote is its motion-sensing capability, which allows the user to interact with and manipulate items on the screen via movement and to point at objects through the use of accelerometer and optical sensor technology. There are two different controllers: the Wii Remote itself and the Wii Nunchuck, which attaches to the Wii Remote to allow for two-handed control.

Using the Wii Nunchuck in Arduino

You can communicate with the Wii controllers in two ways: either by plugging the Nunchuck into an Arduino using the Nunchuck adapter developed by Tod Kurt of Todbot or by using the Bluetooth connection on the Wii Remote. You can also connect the Nunchuck directly to the Arduino by cutting off the connector and wiring it up, as shown in Figure 16-9.

Two ways of connecting the Nunchuck to the Arduino: on the left, the Nunchuck adapter and, on the right, the Nunchuck with the connector removed and wires connected directly to the Arduino
Figure 16-9. Two ways of connecting the Nunchuck to the Arduino: on the left, the Nunchuck adapter and, on the right, the Nunchuck with the connector removed and wires connected directly to the Arduino

The following Arduino code uses the values from the Nunchuck to control two servos and also sends the accelerometer data from the Nunchuck to a Processing application:

#include <Wire.h>
#include <Servo.h>

// create servo object to control a servo
Servo xServo;
Servo yServo;

int angleX = 90;    // angle to move the X servo
int angleY = 90;    // angle for the Y servo

int refreshTime = 20;  // the time in millisecs between updates
int loop_cnt = 0;

int joy_x_axis;
int joy_y_axis;
int accel_x_axis;
int accel_y_axis;
int accel_z_axis;

int z_button;
int c_button;

void setup()
{
    Serial.begin(9600);
    xServo.attach(9);
    yServo.attach(10);
    nunchuck_init(); // send the initialization handshake to nunchuck
}

void loop()
{
    checkNunchuck();
    xServo.write(angleX);
    yServo.write(angleY);
    if( nunchuck_zbutton() ){  // send the angles when the z button is pressed
        Serial.print(angleX,DEC);
        Serial.print(":");
        Serial.print(angleY,DEC);
        Serial.println();
    }
    delay(refreshTime);        // this is here to give a known time per loop
}

void checkNunchuck()
{
    if( loop_cnt > 5) {  // loop every 20msec, this is every 100msec

        nunchuck_get_data();
        nunchuck_print_data();

        angleX = map(accel_x_axis,70,185,0,180); // nunchuck range is ~70 - ~185
        angleY = map(accel_y_axis,70,185,0,180); // Arduino function maps this
                                                      // to angle 0-180
        loop_cnt = 0;  // reset
    }
    loop_cnt++;
}

The rest of the code defines the Nunchuck functions to initialize communication between the Arduino and the Nunchuck and to send and receive data from the Nunchuck. These methods all use the Wire library to read the data. Credit for these methods goes to Tod Kurt:

static uint8_t nunchuck_buf[6];   // array to store nunchuck data,

// initialize the I2C system, join the I2C bus,
// and tell the nunchuck we're talking to it
void nunchuck_init()
{
    Wire.begin();                    // join i2c bus as master
    Wire.beginTransmission(0x52);    // transmit to device 0x52
    Wire.send(0x40);        // sends memory address
    Wire.send(0x00);        // sends a zero
    Wire.endTransmission();    // stop transmitting
}

// send a request for data to the nunchuck
// was "send_zero()"
void nunchuck_send_request()
{
    Wire.beginTransmission(0x52);    // transmit to device 0x52
    Wire.send(0x00);        // sends one byte
    Wire.endTransmission();    // stop transmitting
}

// receive data back from the nunchuck,
// returns 1 on successful read. returns 0 on failure
int nunchuck_get_data()
{
    int cnt=0;
    Wire.requestFrom (0x52, 6);    // request data from nunchuck
    while (Wire.available ()) {

Each byte received from the Wire library needs to be decoded using the nunchuck_decode_byte() method, defined later in this application:

        nunchuck_buf[cnt] = nunchuck_decode_byte(Wire.receive());
        cnt++;
    }
    nunchuck_send_request();  // send request for next data payload
    // If we received the 6 bytes, then go process them
    if (cnt >= 5) {
        return 1;   // success
    }
    return 0; //failure
}

This method parses the data received from the Nunchuck into the joystick, accelerometer, and button variables:

void nunchuck_print_data()
{
    static int i=0;
    joy_x_axis = nunchuck_buf[0];
    joy_y_axis = nunchuck_buf[1];
    accel_x_axis = nunchuck_buf[2];
    accel_y_axis = nunchuck_buf[3];
    accel_z_axis = nunchuck_buf[4];

    z_button = 0;
    c_button = 0;

The sixth byte of the data from the Nunchuck, nunchuck_buf[5], contains bits that indicate whether the z or c buttons have been pressed:

// so we have to check each bit of byte outbuf[5]
    if ((nunchuck_buf[5] >> 0) & 1)
        z_button = 1;
    if ((nunchuck_buf[5] >> 1) & 1)
        c_button = 1;

The sixth byte also contains the least significant bits of the accelerometer data for all the axes:

if ((nunchuck_buf[5] >> 2) & 1)
        accel_x_axis += 2;
    if ((nunchuck_buf[5] >> 3) & 1)
        accel_x_axis += 1;

    if ((nunchuck_buf[5] >> 4) & 1)
        accel_y_axis += 2;
    if ((nunchuck_buf[5] >> 5) & 1)
        accel_y_axis += 1;

    if ((nunchuck_buf[5] >> 6) & 1)
        accel_z_axis += 2;
    if ((nunchuck_buf[5] >> 7) & 1)
        accel_z_axis += 1;

    i++;
}

// decode data to the format that most wiimote drivers expect
// only needed if you use one of the regular wiimote drivers
char nunchuck_decode_byte (char x)
{
    x = (x ^ 0x17) + 0x17;
    return x;
}

// returns zbutton state: 1=pressed, 0=notpressed
int nunchuck_zbutton()
{
    return ((nunchuck_buf[5] >> 0) & 1) ? 0 : 1;  // voodoo
}

The next step would be to use the positions of the two servo motors in a Processing application. This could be easily extended to update another set of servo motors in a different location over a network, to drive complex graphics, to control sounds or video, or to implement any number of other approaches. For the sake of brevity, in this example the Processing application simply draws the current positions of the servo motors:

import processing.serial.*;
Serial serial;  // The serial port
int firstAng = 0;
int secAng = 0;
byte[] inbyte = new byte[4];

void setup() {
    size(400, 400);
    String[] arr = Serial.list();
    if(arr.length > 0) {
        serial = new Serial(this, Serial.list()[0], 9600);
    }
}

void draw() {
    background(122);
    int i = 0;
    boolean changed = false;
    if(serial != null) {
        while (serial.available() > 0 && i < 4) {  // don't overrun the 4-byte buffer
            inbyte[i] = byte(serial.read());
            changed = true;
            i++;
        }
        if(changed) {
            firstAng = inbyte[0];
            secAng = inbyte[2];
        }
    }

    ellipse(40, 40, 50, 50);
    // the angles arrive in degrees, so convert them to radians for cos() and sin()
    line(40, 40, 40 + 25*cos(radians(firstAng)), 40 + 25*sin(radians(firstAng)));
    ellipse(120, 40, 50, 50);
    line(120, 40, 120 + 25*cos(radians(secAng)), 40 + 25*sin(radians(secAng)));

}
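
This sketch assumes that the Arduino writes the two servo angles over the serial port. Purely as an illustration of the framing the sketch expects, and not as the chapter's actual Arduino code, a hypothetical sender named sendAnglesToProcessing() could write each angle as a single byte, padded out to the 4-byte frame that is read into inbyte[0] and inbyte[2]:

// hypothetical helper: sends two servo angles (0-180) as single bytes,
// padded to the 4-byte frame the Processing sketch above expects
void sendAnglesToProcessing(int firstAngle, int secondAngle) {
    Serial.write((byte) firstAngle);   // read into inbyte[0]
    Serial.write((byte) 0);            // padding, ignored by the sketch
    Serial.write((byte) secondAngle);  // read into inbyte[2]
    Serial.write((byte) 0);            // padding, ignored by the sketch
}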

The Nunchuck is a very cool device, but the Wii Remote is a far more versatile tool because it doesn't require you to tether your input device to a laptop. The Wii Remote communicates with your laptop or desktop over a Bluetooth connection. Once the remote and your computer are paired, a helper application can send OSC messages to an oF or Processing application, letting you use the accelerometer data from the remote in your own code.

Many times you’ll find yourself piecing together different libraries and applications to create a working solution, and making that solution truly cross-platform is difficult. Windows users should look into an application called GlovePIE, OS X users should look into an application called DarwiinOSC, and Linux users can use an application called WiiOSC.

Once the OSC messages are sent from the Wii controller to the oF application, you can use the ofxOSC add-on to read the messages and use them for drawing, controlling video, and so on:

#ifndef _TEST_APP
#define _TEST_APP

#include "ofMain.h"
#include "ofxOSC.h"

class remoteApp: public ofBaseApp{

public:

    void setup();
    void update();
    void updateOSC();
    void draw();

    void keyPressed(int key);
    void keyReleased(int key);
    void mouseMoved(int x, int y );
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased();

    ofxOscReceiver    receiver;

These are the variables that will store the values received from the Wii Remote:

    float pitch, roll, yaw, accel;
};

#endif

Now, on to the .cpp file, where the ofxOSC receiver reads the OSC messages from the application that handles communication between the Wii Remote and the computer, and the data is then drawn to the screen:

#include "remoteApp.h"
void remoteApp::setup(){

    receiver.setup( 9000 );
    ofSetVerticalSync(true);

}
void remoteApp::update(){

The call to the updateOSC() method ensures that the OSC listener grabs any new data in the OSC buffer:

    updateOSC();
}

void remoteApp::updateOSC() {

    // check for waiting messages
    while( receiver.hasWaitingMessages() ) {

Get the next message:

        ofxOscMessage m;
        receiver.getNextMessage( &m );

Check to make sure that the new message is from the Wii Remote. The getAddress() call here uses the address sent by the OS X application; depending on your operating system, the address may look different:

        if ( strcmp( m.getAddress(), "/wii/1/accel/pry" ) == 0 ) {

            // verify the type
            if( m.getArgType(0) != OFXOSC_TYPE_FLOAT ) break;
            if( m.getArgType(1) != OFXOSC_TYPE_FLOAT ) break;
            if( m.getArgType(2) != OFXOSC_TYPE_FLOAT ) break;
            if( m.getArgType(3) != OFXOSC_TYPE_FLOAT ) break;
            // store the pitch, roll, yaw, and acceleration values
            pitch = m.getArgAsFloat( 0 );
            roll = m.getArgAsFloat( 1 );
            yaw = m.getArgAsFloat( 2 );
            accel = m.getArgAsFloat( 3 );
        }
    }
}

void remoteApp::draw(){

Set the background color of the application from the pitch, roll, and yaw values, and then use those same values to set the fill color for the circles:

    ofBackground(pitch*255, roll*255, yaw*255);
    ofSetColor(yaw*255,pitch*255,roll*255);
    ofFill();

Draw the two circles using the pitch, yaw, and roll values sent from the Wii Remote:

    ofEllipse(pitch*ofGetWidth(), yaw*ofGetHeight(), 20, 20);
    ofEllipse(yaw*ofGetWidth(), roll*ofGetHeight(), 10, 10);

}

It’s important to remember that, compared to the touchscreens discussed earlier, Wii accelerometer data is quite difficult to work with; it’s erratic, and it’s hard to determine an exact cursor position from it the way you would with a mouse or a pair of potentiometers. For this reason, accelerometer-based controls like the Wii Remote are sometimes not appropriate for building applications that require cursor control. However, there is another option: using the Wii Remote as an infrared pointing device. This is often more appropriate than accelerometer data because it is much more precise. Accelerometer data can be used to position a cursor as well, but it requires some rather tricky calculations that, although certainly within the realm of possibility, are outside the scope of this book.
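
If you do want to derive a steadier value from the accelerometer readings, one common first step is a simple low-pass (exponential smoothing) filter. The sketch below only illustrates the idea; the smooth() helper and the smoothedPitch variable are not part of the application above and would need to be added by you:

// a minimal exponential smoothing filter
// 'amount' near 1.0 smooths heavily, near 0.0 barely at all
float smooth(float previous, float incoming, float amount) {
    return amount * previous + (1.0f - amount) * incoming;
}

// possible usage inside updateOSC(), with a float smoothedPitch member:
// smoothedPitch = smooth(smoothedPitch, pitch, 0.9f);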

Tracking Wii Remote Positioning in Processing

There’s another possibility for working with the Wii Remote that provides other sorts of opportunities for interaction: in addition to accelerometers, the Wii Remote can track its position using an infrared sensor (Figure 16-10). To use this, you need four small IR lights placed in front of the Wii Remote where it will be able to detect them. To create a 360-degree experience, you could use more IR lights and position them in multiple locations around the Wii Remote, though this is substantially more complex.

Figure 16-10. Creating an LED array to use with the Nunchuck

The first step in using the Wii Remote as a pointer is to read the IR data sent from the controller. This arrives as 12 floating-point numbers. The oscP5 library calls ir() when new IR data is received and passes it 12 parameters: an x, a y, and a brightness value for each of the four tracked LEDs. If the brightness value is 15 or greater, no point is recognized. To get usable x and y position values, you pass the parameters through a method that might look like this:

float[] ir = new float[12];
float[] x = new float[4];   // x positions of the detected points
float[] y = new float[4];   // y positions of the detected points
int points = 0;             // how many points are currently visible

void ir( float f10, float f11,float f12, float f20,float f21,
    float f22, float f30, float f31, float f32, float f40,
    float f41, float f42 ) { // first all the numbers
    ir[0] = f10;
    ir[1] = f11;
    ir[2] = f12;
    ir[3] = f20;
    ir[4] = f21;
    ir[5] = f22;
    ir[6] = f30;
    ir[7] = f31;
    ir[8] = f32;
    ir[9] = f40;
    ir[10] = f41;
    ir[11] = f42;

    points = 0;

Every third value starting at index 0 is an x position, every third value starting at index 1 is a y position, and every third value starting at index 2 is a brightness value:

    for (int i = 0; i < 12; i += 3) {
        if (ir[i+2] < 15) {
            x[points] = 0.5 - ir[i];   // center the x value
            y[points] = 0.5 - ir[i+1]; // center the y value
            points++;
        }
    }
}

The trickiest part of getting and using the infrared data is correctly parsing out the x and y values. Once you’ve done that, it’s quite easy to use them in an oF or Processing application. Over OSC, the IR data will be passed as 12 floating-point values, and you simply call this to have all the infrared data messages routed to the ir() method:

osc.plug(this,"ir","/wii/irdata");

This simple Processing application draws a rectangle to the screen at the locations of the infrared data:

import oscP5.*;

float[] ir = new float[12];
OscP5 osc;

void setup() {
    size(800,600);

    // open an udp port for listening to incoming osc messages
       //from darwiinremoteOSC
    osc = new OscP5(this,5600);

    osc.plug(this,"ir","/wii/irdata");
    osc.plug(this,"connected","/wii/connected");
}

void ir(
float f10, float f11,float f12,
float f20,float f21, float f22,
float f30, float f31, float f32,
float f40, float f41, float f42
) {
    ir[0] = f10;
    ir[1] = f11;
    ir[2] = f12;
    ir[3] = f20;
    ir[4] = f21;
    ir[5] = f22;
    ir[6] = f30;
    ir[7] = f31;
    ir[8] = f32;
    ir[9] = f40;
    ir[10] = f41;
    ir[11] = f42;
}

void draw() {
    for(int i=0;i<12;i+=3) {

Every third value in the array is the brightness (size) of a point, and a value of 0 or of 15 and above indicates that the IR point is not available:

    if(ir[i+2]<15 && ir[i+2]>0) {
        fill(255, 0, 0);

Now we’ll draw a rectangle at the point where the Wii Remote detected the light:

        rect(ir[i] * width, ir[i+1] * height, 5, 5);
    }
  }
}
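
To turn these readings into a single, mouse-like pointer, one simple approach is to average the x and y values of whichever points are currently visible. The pointerPosition() method below is only a sketch of that idea, not part of the original listing; it reuses the ir array from above:

// averages all currently visible IR points into one pointer position;
// returns null when no point is visible
PVector pointerPosition() {
    float sumX = 0;
    float sumY = 0;
    int visible = 0;
    for (int i = 0; i < 12; i += 3) {
        if (ir[i+2] < 15 && ir[i+2] > 0) {
            sumX += ir[i];
            sumY += ir[i+1];
            visible++;
        }
    }
    if (visible == 0) {
        return null;
    }
    // scale the normalized values to the size of the sketch window
    return new PVector(sumX / visible * width, sumY / visible * height);
}

You could call this from draw() and, when it returns a non-null value, draw a cursor at that position or map it to whatever you are controlling.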

What’s Next

As you’ve seen in this chapter, you can approach creating tools from so many different angles that the unifying concept we’ve explored isn’t so much a technical idea as a design concept: carefully measured feedback and carefully controlled input. There are a number of really great controller technologies and designs out there now that might be interesting to play with or to use for inspiration.

The Monome project is a series of boards with extremely minimal interfaces that can be used to create music, send MIDI and OSC messages, or do almost anything else you can imagine for a device with 64 soft buttons and very little extraneous hardware. The Monome creators provide prebuilt boards in extremely small quantities as well as kits and instructions to build your own. They also have a small but active community of developers, musicians, artists, and hackers working with the boards and creating new applications. Take a look at http://monome.org/ for more information.

The Nintendo DS may seem an odd platform for a designer or artist to hack, but the DS has a few aspects that might make it interesting to you. It has fully rewritable firmware, which means that, like the Arduino, you can completely rewrite and alter the core software running on the controller. It also has a WiFi connection and uses SD storage. There are also libraries and toolkits that allow you to program the DS in C++ or C, as well as several tools to help you make games without heavy programming.

LadyAda and Adafruit Industries have created a tool called the x0xb0x, which is a board that can act as both a synthesizer and a sequencer. It has MIDI in, out, and thru ports; CV and Gate connections (1/8-inch jacks); headphone, mix-in, and line-level out ports (1/4-inch jacks); and finally a USB jack.

Another option is the Wacom drawing tablet. As a user interface, it can be instinctive and very intuitive, and the pressure sensitivity provides another range of input data as well. It can be very controlled and precise but also playful, making it a great tool for artists, as demonstrated by Amit Pitaru's Sonic Wire Sculptor. There are a few different libraries that allow you to use a Wacom tablet with Processing and oF; you can find them in the Libraries section of the Processing website or in the forums on the oF website.

Review

Open source hardware designs are those for which the schematics and the source code that runs on the hardware are open sourced and made freely available.

The Liquidware InputShield allows you to plug interface controls similar to classic video game controls into your Arduino controller.

The Liquidware TouchShield is a small Arduino-compatible shield that can be used to capture touches and send and receive messages over a Serial port with an Arduino-compatible board.

To use the TouchShield Slide or Stealth, you’ll need to use a modified version of the Arduino IDE.

The TouchShield is programmed using a language influenced by the Processing language, making it easier to write code that draws to the TouchShield's display.

OpenSoundControl (OSC) is a protocol for communication among computers, other devices, and applications. OSC messages are very simple and usually consist of a key and a value pair, like:

'/name', 'josh'

To work with OSC in oF, you can use the ofxOSC add-on developed by Damian Stewart. To work with OSC in Processing, you can use the oscP5 library developed by Andreas Schlegel.

Another option for interaction is the Wii controllers. The Nunchuck can be plugged into an Arduino, or you can use the Nunchuk Adapter developed by Tod Kurt. The tilt data and the buttons of the Nunchuck can be read in an Arduino application.

The Wii Remote can be paired with a computer over Bluetooth using one of several libraries and can send accelerometer data to a Processing or oF application over OSC.

By creating a group of four infrared LEDs and shining them toward the Wiimote, you can create an oF or Processing application that reads the positional data of the Wiimote, that is, where it is pointing in relation to the infrared lights. This allows you to create a simple mouse-like pointer from the Wiimote.
