Chapter 10. Using sensors

This chapter covers

  • Understanding Sensor API design
  • Interpreting sensor data
  • Using sensors in the emulator
  • Moving with the motion sensor

You can build cool applications by combining sensors with other features of the phone. Applications may respond to a user shaking the device by randomly selecting an object or clearing the screen. Games can use the device’s movement as an input mechanism, turning the whole phone into a game controller. Another class of applications augments the real world with computer-generated information. Augmented reality apps can show you the location of friends nearby in relation to your current location. Astronomy applications determine the position of your device and identify the stars in the night sky. A tourist application may be able to identify nearby landmarks.

All these applications require sensor input from the physical world. The phone’s accelerometer, compass, and gyrometer sensors capture input from the real world and serve the data to applications through the Windows Phone SDK’s Sensor API. When combined with location data from the phone’s Location Service, stunning augmented-reality applications are possible. We discuss the Location Service in chapter 16.

Dealing with raw data from the sensors can be tricky—for example, when you’re trying to calculate which direction a device is pointed in. The Inclinometer and OrientationSensor classes take input from each of the other sensors, perform several complex calculations, and provide data related to motion and a device’s relative position in the real world.

To gain a good understanding of the raw data returned by the various sensors, you’re going to build a sample application that presents data on the screen. The first part of the chapter covers the Accelerometer, Compass, and Gyrometer classes to demonstrate what data they return, how they’re similar, and how they differ. The latter half of the chapter covers the Inclinometer and OrientationSensor classes and how they’re wrappers around the three other sensors.

Before we dive into the sample application, we introduce the common Sensor API that’s the foundation for the sensors exposed by the Windows Phone SDK.

10.1. Understanding the Sensor APIs

Although the Accelerometer, Compass, Gyrometer, Inclinometer, and OrientationSensor Sensor APIs each return different types of data, they each implement the same pattern for reporting their data. Over the next several pages, you’ll learn techniques that are useful for reading data from any of the sensors. We show you these techniques as you build the foundation of the sample application. The classes and interfaces that compose the Sensor API are found in the Windows.Devices.Sensors namespace.

Note

In the Windows Phone 8 SDK there are two separate Sensor APIs. The APIs found in Microsoft.Devices.Sensors were originally part of the Windows Phone 7 SDK and have been brought forward to Windows Phone 8. The second Sensor API, found in the Windows.Devices.Sensors namespace, comes from the Windows Runtime that Windows Phone 8 shares with the Windows 8 operating system. If you intend to share code between Windows Phone and Windows 8 applications, you should consider using the Windows 8 Sensor API. The Windows Phone 7 Sensor API isn’t covered in this book.

Even though the Accelerometer, Compass, Gyrometer, Inclinometer, and OrientationSensor don’t share a common base class, they all have a number of identical properties, methods, and events. These common members are described in table 10.1.

Table 10.1. Common sensor class members

Member    Type    Description

GetCurrentReading Method Returns a read-only object containing the currently available sensor data.
GetDefault Method A static factory method that returns a sensor instance.
MinimumReportInterval Property A read-only value specifying the smallest value that can be assigned to the ReportInterval property.
ReadingChanged Event An event raised whenever the current reading changes.
ReportInterval Property Specifies how often the sensor reads new data. The data returned by GetCurrentReading will change only once during each interval.

An application obtains the current sensor reading by calling the GetCurrentReading method. Alternatively, an application can subscribe to the ReadingChanged event to receive a sensor reading only when the sensor has new data. The GetCurrentReading method may be called even when the sensor isn’t ready, but the value returned may be null.

Note

If the ID_CAP_SENSORS capability isn’t present in the WMAppManifest.xml file, calls to GetDefault for any sensor will result in an UnauthorizedAccessException.

Each of the sensor classes defines a static method named GetDefault. The GetDefault method allows a developer to determine whether the sensor hardware is installed on a particular device and whether the sensor is available to the application. If the device doesn’t have a particular sensor installed and available, the GetDefault method for the missing sensor returns null.

The Sensor API handles fast application switching on its own. Developers don’t need to unhook the sensors when the application is switched from the foreground. Unlike the camera, sensors automatically resume and don’t provide an explicit restore method. When the application is resumed, the sensors and events are reconnected, and data starts to flow again. Before you learn how to work with the data flowing from the sensors, you need to understand how the sensors report data in three dimensions.

10.1.1. Data in three dimensions

Each of the sensors reports data relative to the x, y, z coordinate system defined by the Windows Phone device. The device’s coordinate system is fixed to the device and moves as the phone moves. The x axis extends out the sides of the device, with positive x pointing to the right side of the device and negative x pointing to the left side of the device. The y axis runs through the top and bottom of the device, with positive y pointing toward the top. The z axis runs from back to front, with positive z pointing out the front of the device. Figure 10.1 shows the x, y, and z axes from three different views of a phone.

Figure 10.1. The x, y, z coordinate system as defined by a Windows Phone

The coordinate system used by the sensors doesn’t necessarily match the coordinate system used by other APIs. One example is the coordinate system used by XAML. In portrait mode XAML, the y axis points in the opposite direction, with positive y pointing out the bottom of the device.

Now that you understand the coordinate system used by the sensors, let’s take a closer look at reading data from the sensors.

10.1.2. Reading data with events

Each of the sensors supports an event-driven interaction model with the ReadingChanged event. The ReadingChanged event sends an event args class instance to an event handler, where the type of event args class varies with each sensor. The Accelerometer sends an AccelerometerReadingChangedEventArgs, the Compass sends a CompassReadingChangedEventArgs, and so on.

The ReadingChanged event handler is called on a background thread. If the event handler updates the user interface, the update logic must be dispatched to the UI thread. The following code snippet shows an example that handles the ReadingChanged event from the Gyrometer sensor:

void sensor_ReadingChanged(object sender,
    GyrometerReadingChangedEventArgs e)
{
    GyrometerReading reading = e.Reading;
    Dispatcher.BeginInvoke(() =>
    {
        // add logic here to update the UI with data from the reading
        ...
    });
}

The Sensors sample application you’ll build in this chapter doesn’t use the ReadingChanged event. Instead, the sample application will poll for data using the GetCurrentReading method.

10.1.3. Polling for data

An application doesn’t need to wait for the sensor to raise an event to ask for data. Each sensor exposes data through the GetCurrentReading method. The GetCurrentReading method can be called whenever the application determines it needs new data. For example, the reading may be initiated from a button click, a timer tick event, or a background worker:

if (compassSensor != null)
{
    CompassReading reading = compassSensor.GetCurrentReading();
    if (reading != null)
    {
        // add logic here to use the data from the reading
        ...
    }
}

You’ll read sensor data from a timer tick event in the sample application. Before we can show you the sensors in action, you need to create a new project and prepare the application to display sensor data.

10.2. Creating the sample application

Open Visual Studio and create a new Windows Phone App called Sensors. The sample application will read values from the Accelerometer, Compass, Gyrometer, Inclinometer, and OrientationSensor. The sample application, shown in figure 10.2, displays a set of colored bars for data from the Accelerometer, Gyrometer, and Inclinometer. Each set of bars displays sensor readings for the x, y, and z coordinates. At the bottom of the screen, the application displays a legend and informational messages about the sensors, as well as readings from the Compass and OrientationSensor.

Figure 10.2. The Sensors sample application displays bars representing the x, y, and z values reported by the Accelerometer, Gyrometer, and Inclinometer.

When a sensor’s value is positive, a bar will be drawn to scale above the effective zero line. A negative sensor value results in a bar drawn below the zero line. Because the range of possible values differs between each sensor, the height of the bar is transformed from the sensor’s value into a pixel height using a scaling factor. We talk more about each sensor’s range of values throughout the chapter.

First, you’ll create a reusable control to display the positive and negative bars.

10.2.1. Creating a reusable Bar control

To simplify the sample application, you’ll build a reusable control that allows you to set a scale factor and a sensor value. When the scale or value properties change, the control should draw the appropriate positive or negative bar and display the value with a label. The Bar control will be implemented using the Windows Phone User Control item template, accessed via the Project > Add New Item menu. Name the new item Bar. The XAML markup for the new control is shown in the following listing.

Listing 10.1. Markup for the Bar control

The grid is divided into two halves with each half containing a Rectangle. The first Rectangle displays positive values, and the other displays negative values. A label is placed in the middle to show the bar’s value. Figure 10.3 demonstrates what the control will look like for a Bar with a scale value set to 2.0 and a current value set to –1.0.

Figure 10.3. A Bar control with a scale value of 2.0 and a current value of –1.0

Pages that host a Bar control need the ability to set different fill colors for the Rectangles. Add a new property named BarFill to the Bar.xaml.cs code behind file:

using System.Windows.Media;
public Brush BarFill
{
    get { return positiveBar.Fill; }
    set
    {
        positiveBar.Fill = value;
        negativeBar.Fill = value;
    }
}

The setter for the BarFill property assigns the specified Brush to both the positiveBar and negativeBar Rectangles.

Note

If you were building a reusable XAML control, the BarFill property and the other properties would be dependency properties. The control would declare template parts and would provide XAML markup as the default template. See Pete Brown’s book Silverlight 5 in Action (Manning Publications, 2012) for more details on building reusable XAML controls.

Next, you create properties to set the scale and value for the bar. Because you don’t know the full range of values, you need the caller to tell the control how to scale the value to the height of the rectangles. Let’s say you need the bar to display a value between 2 and –2, and the Bar control is 200 pixels high. A value of 2 would require the positive bar to be 100 pixels high, whereas a value of –1 would require the negative bar to be 50 pixels high. The following listing details how the bar height is calculated using the Scale and Value properties.

Listing 10.2. Calculating bar height with the Scale and Value properties

Both the Scale and the Value properties are implemented with backing fields and simple getters and setters. Inside the setter of each property, you call the Update method to recalculate the height of the bar rectangles and update the user interface. Inside the Update method you multiply the scale and barValue fields, and the resulting value is the number of pixels high the bar should be drawn. If the calculated height value is greater than 0, the positiveBar’s Height is updated to the new value. If the calculated height value is less than 0, you invert the calculated value before assigning the negativeBar’s height. Finally, you use the ToString method with a formatting string to set the label’s Text property.
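A minimal sketch of these members could look like the following, assuming backing fields named scale and barValue and a TextBlock named valueLabel (the field and label names and the formatting string are assumptions):

double scale;
double barValue;

public double Scale
{
    get { return scale; }
    set { scale = value; Update(); }
}

public double Value
{
    get { return barValue; }
    set { barValue = value; Update(); }
}

void Update()
{
    // Convert the sensor value into a pixel height using the scale factor
    double height = scale * barValue;

    // Reset both rectangles, then grow the one matching the sign of the value
    positiveBar.Height = 0;
    negativeBar.Height = 0;
    if (height > 0)
    {
        positiveBar.Height = height;
    }
    else if (height < 0)
    {
        negativeBar.Height = -height;
    }

    // Show the raw value in the label ("F2" is an assumed format string)
    valueLabel.Text = barValue.ToString("F2");
}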

Now that you have a Bar control, you can create the sample application’s user interface. You need to add an XML namespace to MainPage.xaml so that you can use your new bar control:

xmlns:l="clr-namespace:Sensors"

You’re now ready to use the Bar control in the MainPage’s XAML markup. You need to design the MainPage to have three Bar controls for each sensor, for a total of nine Bar controls.

10.2.2. Designing the main page

In figure 10.2, notice that MainPage.xaml is divided into three rows and several columns. The markup for the ContentPanel of MainPage.xaml is shown in the following listing.

Listing 10.3. Markup for MainPage.xaml

Start by dividing the ContentPanel into 3 rows and 11 columns. The first row contains three TextBlocks serving as the titles for the x, y, and z coordinates. The second row shows three bars for each of the Accelerometer, Gyrometer, and Inclinometer sensors. The Bar controls are 400 pixels high, divided into positive and negative sections of 200 pixels each. Allowing for 3 columns for each sensor and 2 spacer columns, you need a total of 11 columns. The last row contains a legend and messages.

Each Bar control is assigned a BarFill color—red for Accelerometer readings, blue for Gyrometer readings, and dark green for Inclinometer readings. Each Bar control is also assigned a scale value. We describe how the scale factors were calculated in our detailed discussion of each sensor later in the chapter.

10.2.3. Polling sensor data with a timer

In the sample application, the screen is updated with data from each of the different sensors. To simplify the logic, the application won’t use the sensors’ ReadingChanged events and will poll for data instead. A DispatcherTimer will be used to update the user interface about 15 times a second. Add a DispatcherTimer field:

using System.Windows.Threading;
DispatcherTimer timer;

The timer field is initialized inside the MainPage constructor. The timer ticks every 66 milliseconds, or about 15 times a second. The application will poll each of the sensors inside the timer_Tick method:

public MainPage()
{
    InitializeComponent();
    timer = new DispatcherTimer();
    timer.Tick += timer_Tick;
    timer.Interval = TimeSpan.FromMilliseconds(66);
    Start();
}

The timer is started in the Start method. When the timer is started, the application updates the message displayed in the TextBlock named messageBlock to let the user know which sensors have been started:

void Start()
{
    if (!timer.IsEnabled)
    {
        string runningMessage = "Reading: ";

        // Sensors will be initialized here

        timer.Start();
        messageBlock.Text = runningMessage;
    }
}

You’ll add additional code to the Start method as you hook up the Accelerometer, Compass, Gyrometer, and other sensors later in the chapter.

You could run the application now to check that the form is laid out as you expect, but the application doesn’t do anything interesting yet. You’ll remedy that by adding in the Accelerometer sensor.

10.3. Measuring acceleration with the accelerometer

The accelerometer can be used in games and applications that use phone movement as an input mechanism or for controlling game play. The accelerometer tells you when the device is being moved. It can also tell you whether the device is being held flat, at an angle, or straight up and down.

The accelerometer measures the acceleration component of the forces being applied to a device. Note that acceleration due to gravity isn’t reported by the sensor. Unless the device is in free fall, forces are always being applied to a device. The accelerometer reports numbers in terms of the constant g, which is defined as the acceleration due to Earth’s gravity at sea level. The value of g is –9.8 m/s².

When a device is at rest lying on a table, the table is exerting a force on the device that offsets the pull of gravity. The accelerometer measures the acceleration of the force the table applies. When the device is falling, the accelerometer reports zero acceleration.

Now consider when you push the device along the surface of the table. Other forces are now in play, such as the force being applied by your hand and the force due to friction between the device and the table. The accelerometer measures all these forces. When a user shakes a phone, the x, y, and z acceleration values will rapidly change from one extreme to another in a random pattern.

Tip

The Accelerometer class has a special event named Shaken that you can subscribe to if your application needs to know when the user shakes a device.

By examining the x, y, and z values of the accelerometer, and how they change from one reading to the next, you can determine whether the device is in motion and how the device is being held. Before we get into the details about exactly what the accelerometer measures and what the reported values mean, you’ll hook up the sensor to the bars displayed in the user interface of the sample application.

10.3.1. Hooking up the sensor

The sample application you built in the previous section is designed to show how the data returned by the sensors changes as the user moves the phone. To see how the accelerometer data changes, you need to call the GetCurrentReading method of an Accelerometer instance and update the three Bar controls allocated for the accelerometer’s x, y, and z values. Before you can hook up a sensor, you need to declare a member field to reference the Accelerometer instance:

using Windows.Devices.Sensors;
Accelerometer accelSensor;

In the Start method, initialize the field and set the ReportInterval. The ReportInterval is set to match the tick interval of the DispatcherTimer used to trigger user interface updates. Add the following snippet right before the line in the Start method where the timer’s Start method is called:

accelSensor = Accelerometer.GetDefault();
if (accelSensor != null)
{
   accelSensor.ReportInterval = 66;
   runningMessage += "Accelerometer ";
}

You’re adding the string "Accelerometer " to the message displayed in the user interface, informing the user that the accelerometer was started. If the sensor isn’t supported, GetDefault will return null.

The final step is to read the accelerometer data when the timer ticks and the timer_Tick event handler is called. You’re going to isolate the code that reads the accelerometer data into a method named ReadAccelerometerData. The timer tick method calls the ReadAccelerometerData method, shown in the following listing.

Listing 10.4. Reading acceleration

First, check whether an accelerometer exists before getting the current AccelerometerReading value from the sensor. Also check that GetCurrentReading returned a valid value. The reading reports acceleration values in the three directions of the phone’s coordinate system. Update the Bar controls in the user interface with the x, y, and z properties reported by the acceleration vector.
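A minimal sketch of the ReadAccelerometerData method, assuming the three accelerometer Bar controls are named accelX, accelY, and accelZ (the control names are assumptions):

void ReadAccelerometerData()
{
    // Skip the update when the device doesn't have an accelerometer
    if (accelSensor != null)
    {
        AccelerometerReading reading = accelSensor.GetCurrentReading();
        if (reading != null)
        {
            // Push the three acceleration components into the Bar controls
            accelX.Value = reading.AccelerationX;
            accelY.Value = reading.AccelerationY;
            accelZ.Value = reading.AccelerationZ;
        }
    }
}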

When you created the Bar controls for the accelerometer in MainPage.xaml, you set the Scale property to 100. The Bar controls are 400 pixels high, allowing for positive and negative sections of 200 pixels each. The maximum value of the acceleration vector is ±2. Using this information, you can determine that the scale factor for the bar should be 100, or 200/2.

At this point, you should be able to run the application. If you run the application on a physical device, you should see the bars grow and shrink as you move the device about. Tilt it front to back or side to side, lay it down flat, or hold it upside down. You can mimic all these movements in the emulator using the Accelerometer Tool.

10.3.2. Acceleration in the emulator

With the Sensors sample application open in Visual Studio, run the application on the emulator. The emulator’s default position is standing in portrait orientation, and the accelerometer reports an acceleration of –1 along the y axis. Open the Additional Tools window using the Expander button on the emulator’s control bar. The Accelerometer Tool, shown in figure 10.4, is found in the first tab of the Additional Tools window.

Figure 10.4. Controlling acceleration with the emulator’s Accelerometer Tool

The Accelerometer Tool allows you to move the device by dragging the orange dot. The device can also be changed from the Portrait Standing orientation to Portrait Flat, Landscape Standing, and Landscape Flat. The Accelerometer Tool also plays a canned script that mimics shaking the device.

With the Sensors application running in the emulator, you should see the bars grow and shrink as you move the device about with the orange dot. Play the Shake script and watch how the acceleration bars bounce up and down as the data changes. Now that you have a better idea of what numbers are reported by the accelerometer, let’s take a closer look at exactly what the numbers mean.

10.3.3. Interpreting the numbers

The accelerometer sensor in the phone senses the acceleration due to forces applied to the phone but ignores the acceleration due to gravity. When a device is at rest lying on a table, the table is exerting a force on the device that offsets the pull of gravity. The accelerometer measures the acceleration of the force that the table applies to the device. When the device is falling, the accelerometer reports zero acceleration. Figure 10.5 demonstrates the values reported by the accelerometer when a phone is held in various positions.

Figure 10.5. Acceleration from the forces on a device at rest. Each of the arrows represents the acceleration due to the force holding the device in the stated position.

If the number reported by the device is related to the force a surface exerts on the device, why is the number negative instead of positive? Remember that the number reported is in terms of g or gravity, and g equals –9.8 m/s², a negative number relative to the phone’s coordinate system. When the accelerometer reports –1 (a vector of magnitude 1 pointing down), it means a vector with the value 9.8 m/s² pointing up. Table 10.2 lists the approximate x, y, z values reported by the accelerometer when the device is at rest in various positions.

Table 10.2. Accelerometer readings with the device at rest

Position    X    Y    Z

In free fall 0g 0g 0g
Flat on back, lying on a surface 0g 0g –1g or 9.8 m/s²
Flat on face 0g 0g 1g or –9.8 m/s²
Standing portrait, bottom is down 0g –1g or 9.8 m/s² 0g
Standing portrait, top is down 0g 1g or –9.8 m/s² 0g
Standing landscape, left is down –1g or 9.8 m/s² 0g 0g
Standing landscape, right is down 1g or –9.8 m/s² 0g 0g

When you move the device, you apply a force along the direction you want the device to move. The force causes the device to accelerate. Acceleration changes the velocity of the device, and it starts to move. Let’s say your phone is resting flat, face up on the surface of a table, at point A. Now you give your device a modest push so that it slides along the surface to point B. The initial push is a moderate force in the positive x axis. After you release the device, allowing it to slide, your initial force stops, and the force due to friction begins to slow down the device until it stops moving. Figure 10.6 shows the values reported by the accelerometer in this scenario. The numbers are somewhat contrived, because real numbers will vary based on how hard the initial push is and the amount of friction between the phone and the surface it’s resting upon.

Figure 10.6. Acceleration due to the motion of sliding the device across a table

Again, note that the numbers reported by the accelerometer are opposite what you may expect. You push the device in the direction of the positive x axis, but the number reported is a negative value. Remember that the number reported is in terms of g or gravity, and g equals –9.8 m/s².

The figure demonstrates the forces involved in pushing a phone across a table. This is probably not something you do often. The same concepts can be applied when the device is moving in a user’s hand. When motion begins, the user’s hand is applying a force to the device in the direction of motion. When motion ends, the user’s hand is applying force in the direction opposite the motion. When the user is moving the device, there may be a period between start and stop when the device is moving at a constant rate, and the acceleration in the direction of motion is zero.

By detecting changes in acceleration values, an application can determine when the device is being moved. The acceleration data can also tell you whether the device is being held flat, at an angle, or straight up and down. What the accelerometer can’t tell you is which direction the device is pointed. If you need to know the direction the device is pointed, use the compass.

10.4. Finding direction with the compass

The compass is useful when an application needs to know which direction a device is pointed relative to the real world. The compass reports direction relative to the Magnetic North Pole. This information is useful for applications such as the astronomy application we mentioned earlier, where the application updates the display based on the direction the user is facing. The compass is also useful in motion-sensitive applications that need to know when a device is rotated.

The compass senses the strength and direction of the Earth’s magnetic field. If the compass detects that a device is aligned with the Earth’s magnetic field, then it knows that the device is pointed, or headed, north. If a device isn’t aligned with the magnetic field, then the compass measures the device’s heading—the angle between the magnetic field and the direction the device is pointed, as depicted in figure 10.7.

Figure 10.7. The compass measures the device’s heading, which is the angle between magnetic north and the direction the device is pointed.

The compass reports information with the CompassReading structure. The device direction is read with the HeadingMagneticNorth and HeadingTrueNorth properties. Before we look closer at the difference between HeadingMagneticNorth and HeadingTrueNorth, you’ll hook up the sensor to both values in the user interface of the Sensors application.

Note

The Windows.Devices.Sensors.Compass class in the Windows Phone Runtime reports only heading values. The Windows Phone 7 Microsoft.Devices.Sensors.Compass sensor decomposes the magnetic field into x, y, and z vectors and reports the magnitude of the vectors in microteslas (µT).

10.4.1. Hooking up the sensor

This section is going to look a lot like the section where you hooked up the accelerometer. You need to initialize the sensor in the Start method and create a method to read the sensor data. Start by defining a field in the MainPage class:

Compass compassSensor;

Initialize the sensor in the Start method. Before using the sensor, the code must first check whether the Compass is supported. If it isn’t supported, GetDefault will return null, which is the case with the emulator. After initializing the sensor, you set the ReportInterval:

compassSensor = Compass.GetDefault();
if (compassSensor != null)
{
   compassSensor.ReportInterval = 66;
   runningMessage += "Compass ";
}

Create the ReadCompassData method in order to update the user interface with the sensor’s current heading. The following listing contains the implementation of the ReadCompassData method. Don’t forget to call the new ReadCompassData method from the timer_Tick event handler.

Listing 10.5. Reading Compass data

Start by retrieving the current CompassReading value from the sensor. The code updates the message displayed in the TextBlock near the bottom of the screen to show the values of the HeadingMagneticNorth and HeadingTrueNorth properties. Figure 10.8 shows how the message appears in the application.

Figure 10.8. A screen shot of the heading TextBlock showing the HeadingMagneticNorth and HeadingTrueNorth properties. Note that the heading values differ by 7 degrees.
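A minimal sketch of the ReadCompassData method, assuming the heading TextBlock is named headingBlock (the control name and the format string are assumptions):

void ReadCompassData()
{
    if (compassSensor != null)
    {
        CompassReading reading = compassSensor.GetCurrentReading();
        if (reading != null)
        {
            // HeadingTrueNorth is nullable and may be empty when the
            // device's location is unknown
            headingBlock.Text = string.Format(
                "Magnetic: {0:F0}  True: {1:F0}",
                reading.HeadingMagneticNorth,
                reading.HeadingTrueNorth);
        }
    }
}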

Now you’re ready to run the application. You must run the application on a physical device because the Compass isn’t supported on the emulator.

Note

If an application requires that a phone have a compass to work correctly, the ID_REQ_MAGNETOMETER hardware requirement should be specified in the WMAppManifest.xml file. Specifying this requirement will prevent the application from being deployed to devices that don’t have a compass, including the emulators included in the Windows Phone SDK.

When running the application, turn around and face a number of different directions. You should see the numbers grow and shrink between 0 and 360 degrees as you move the device about. Can you figure out where north is by interpreting the numbers reported in the TextBlock control? Does your device report different numbers for HeadingMagneticNorth and HeadingTrueNorth?

10.4.2. Interpreting the numbers

We mentioned that the compass works by sensing the Earth’s magnetic field and measuring the angle between the magnetic field and the direction the device is pointed. We also mentioned that the Earth’s magnetic field is aligned with the Magnetic North Pole. But the Earth’s Magnetic North Pole isn’t in the same location as the Earth’s geographic North Pole. Not only are the two poles not in the same location, but the Magnetic North Pole is constantly moving about. Each year the Magnetic North Pole shifts approximately 25 miles.

As you may have guessed, the HeadingMagneticNorth property measures the device’s heading relative to the Magnetic North Pole. The HeadingTrueNorth property measures the device’s direction relative to the geographic North Pole. The difference between the two heading measurements is called magnetic declination.

Figure 10.9 illustrates magnetic declination for two different locations on the Earth’s surface. For each location a line is drawn to the North Pole and the Magnetic North Pole. If the acute angle from the North Pole to the Magnetic North Pole is clockwise, then declination is positive. Conversely, a negative declination has a counterclockwise angle. The United States government’s National Oceanic and Atmospheric Administration (NOAA) provides online tools to calculate magnetic declination at http://www.ngdc.noaa.gov/geomag-web/.

Figure 10.9. Examples of positive and negative declination for two different locations on the Earth’s surface

The compass can’t sense where the geographic North Pole is located and must calculate HeadingTrueNorth from declination values stored in a database. In order to look up the declination for a device, the compass must know the device’s current location. If the device’s location is unknown, the compass will report the same value for both heading properties.

Tip

An application that depends on HeadingTrueNorth can retrieve the device’s current location by using the Geolocator class. You can ensure that the compass has recent location data by calling the Geolocator’s GetGeopositionAsync method within the application. You’ll read more about using the Geolocator class in chapter 16.
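A minimal sketch of priming the location cache, assuming the application also declares the location capability (the helper method name is hypothetical):

using Windows.Devices.Geolocation;

async void PrimeLocationForCompass()
{
    Geolocator locator = new Geolocator();

    // Requesting a position caches the device's location, which the compass
    // can use to look up the local magnetic declination
    Geoposition position = await locator.GetGeopositionAsync();
}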

If the device has traveled a considerable distance since the last time its location was read or cached, the value reported by HeadingTrueNorth may not be accurate. The value reported by HeadingMagneticNorth may be inaccurate as well for a variety of different reasons. The Earth’s magnetic field is relatively weak. This means that local environmental conditions will impact the magnetic field sensed by the compass. If the device is within a few feet of a normal magnet, for example, the compass will sense the magnetic field generated by the magnet.

The compass is useful when an application needs to know which direction the device is pointed relative to the real world. If the device is turned or rotated, an application can determine how much the device was turned by comparing the current heading with a previous heading. The compass isn’t useful if your application needs to be notified while the device is turning. The gyrometer is ideal for applications that respond when the device is rotated.

10.5. Pivoting with the gyrometer

The gyrometer sensor reports how quickly the device is turning on one or more of its axes. The rotational velocity is reported in degrees per second, and when a device is rotated in a complete circle, it rotates 360 degrees. The values are reported with counterclockwise being the positive direction.

Note

The gyrometer is optional hardware for Windows Phones and isn’t supported on many phones. If an application requires that a phone have a gyrometer to work correctly, the ID_REQ_GYROMETER hardware requirement should be specified in the WMAppManifest.xml file. Specifying this requirement will prevent the application from being deployed to devices that don’t have a gyrometer, including the emulators included in the Windows Phone SDK.

The gyrometer only reports turning motion around an axis, and if the device is held still, the sensor reports values of zero. If the device is moved from point A to point B without any twisting motion, the gyrometer also reports zero.

The gyrometer reports values with the GyrometerReading struct. Rotational velocities are read from the GyrometerReading through the AngularVelocityX, AngularVelocityY, and AngularVelocityZ properties that break absolute movement into rotation about the x, y, and z axes. You’ll now hook up the gyrometer to the user interface in the sample application so you can see the numbers for yourself.

10.5.1. Hooking up the sensor

The Sensor APIs are intentionally similar, and hooking up the gyrometer in your sample application is nearly identical to hooking up the accelerometer and compass. You start by declaring a field to reference the Gyrometer instance:

Gyrometer gyroSensor;

You then construct and initialize the field in the Start method, as shown in the following listing.

Listing 10.6. Initializing the Gyrometer in the Start method

When running in the emulator, the call to Gyrometer’s GetDefault method throws a FileNotFoundException, which is handled by the surrounding try-catch block.
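A minimal sketch of that initialization, assuming a using directive for System.IO, which defines FileNotFoundException:

try
{
    gyroSensor = Gyrometer.GetDefault();
}
catch (FileNotFoundException)
{
    // The emulator throws instead of returning null
    gyroSensor = null;
}

if (gyroSensor != null)
{
    gyroSensor.ReportInterval = 66;
    runningMessage += "Gyrometer ";
}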

As with the Accelerometer and Compass sensors, you create a new method to read the Gyrometer’s current reading and update the user interface. The new method is called ReadGyrometerData and is called from the timer_Tick method. The code for ReadGyrometerData is shown in the following listing.

Listing 10.7. Reading Gyrometer data

Start by retrieving the current GyrometerReading value from the sensor. Then update the user interface with the x, y, and z angular velocity values.
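A minimal sketch of the ReadGyrometerData method, assuming the gyrometer Bar controls are named gyroX, gyroY, and gyroZ (the control names are assumptions):

void ReadGyrometerData()
{
    if (gyroSensor != null)
    {
        GyrometerReading reading = gyroSensor.GetCurrentReading();
        if (reading != null)
        {
            // Angular velocity is reported in degrees per second
            gyroX.Value = reading.AngularVelocityX;
            gyroY.Value = reading.AngularVelocityY;
            gyroZ.Value = reading.AngularVelocityZ;
        }
    }
}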

When you created the Bar controls for the Gyrometer in MainPage.xaml, you set the Scale property to 1.111111. The positive and negative bars are each 200 pixels. You assume the maximum rotation rate is a half spin once per second, or ±180 degrees per second. The scale of the Bar control is calculated at 1.111111 or 200/180.

What can you do to see the gyrometer bars move in the application? Let’s get dizzy. Do you have a spinning office chair? If so, you can hold the device flat in your hand and spin back and forth in your chair. You should see the z-bar move up and down as you spin. Another example is to hold the device in your hand so that it’s standing up in portrait mode. Now tilt the phone back until it’s lying flat in your hand. You should see the x-bar move down and report a negative value. Tilt the phone back up, and the bar should move up and report a positive value.

You’ve seen how each of the hardware sensors is exposed by classes and structures in the Sensors API. The sensors each return individual sets of data that can be used in various ways to build interesting applications. Each of the sensors tells you different bits of information about how the device is held, how it’s moving, and which direction it’s pointed in. Correlating this information across sensors can be tricky and involves a solid understanding of physics, mathematics, and three-dimensional coordinate spaces. Fortunately, the Windows Phone SDK provides the Inclinometer and OrientationSensor classes to perform these calculations for you.

10.6. Wrapping up with motion

Unlike the other sensors we’ve covered so far in this chapter, the Inclinometer and OrientationSensor are not hardware based. These two classes are wrappers around the Accelerometer, Compass, and Gyrometer. Instead of sensing data from hardware, the Inclinometer and OrientationSensor consume data from the other sensors and perform some convenient number crunching.

The Inclinometer class reports the results of its data analysis in the InclinometerReading class. The InclinometerReading class reports Yaw, Pitch, and Roll values. The OrientationSensor class delivers data via the OrientationSensorReading class, which provides both a rotation matrix and a quaternion that can be used for coordinate mapping. We’ll show how to use both the InclinometerReading and OrientationSensorReading classes to map coordinates as you finish off the Sensors sample application.

10.6.1. Hooking up the sensors

As with the Accelerometer, Compass, and Gyrometer, the Inclinometer and OrientationSensor instances are stored in member variables and initialized in the Start method of the Sensors application’s MainPage class. The following listing shows the member declarations and the code added to the Start method.

Listing 10.8. Initializing the Inclinometer and OrientationSensor

The listing starts by declaring new fields to reference the sensor instances. Following the same pattern used for the other sensors in this chapter, instances of the sensors are retrieved by their respective GetDefault methods. Before attempting to use a sensor instance, you check for null to determine whether the device supports the sensor. Set the ReportInterval property to 66 milliseconds to match the timer’s tick interval.
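A minimal sketch of the fields and the Start method additions, assuming the fields are named inclineSensor and orientationSensor (the field names are assumptions):

Inclinometer inclineSensor;
OrientationSensor orientationSensor;

// Inside the Start method, before the timer is started
inclineSensor = Inclinometer.GetDefault();
if (inclineSensor != null)
{
    inclineSensor.ReportInterval = 66;
    runningMessage += "Inclinometer ";
}

orientationSensor = OrientationSensor.GetDefault();
if (orientationSensor != null)
{
    orientationSensor.ReportInterval = 66;
    runningMessage += "OrientationSensor ";
}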

Continuing with the pattern established earlier, two new read data methods are created and called from the timer_Tick method:

void timer_Tick(object sender, EventArgs e)
{
   ...

   ReadInclinometerData();
   ReadOrientationData();
}

The Inclinometer measures how far from a normal position a phone is tilted or rotated, where normal is defined as lying flat, face up, with the top of the device pointed at the North Pole. In this position, the x axis points due east, the y axis points north, and the z axis points straight up. The numbers reported by the Inclinometer are properties of the InclinometerReading class called PitchDegrees, RollDegrees, and YawDegrees.

The next listing shows the implementation of the ReadInclinometerData method. The Sensors application displays inclinometer data in the third set of Bar controls you added to MainPage.xaml in section 10.2.2.

Listing 10.9. Displaying inclinometer data

First, check whether an inclinometer exists before getting the current InclinometerReading value from the sensor, and check that GetCurrentReading returns a valid value. Update the Bar controls in the user interface with the Pitch, Roll, and Yaw properties reported by the sensor.
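A minimal sketch of the ReadInclinometerData method, assuming the inclinometer field is named inclineSensor:

void ReadInclinometerData()
{
    if (inclineSensor != null)
    {
        InclinometerReading reading = inclineSensor.GetCurrentReading();
        if (reading != null)
        {
            // Pitch is rotation about x, roll about y, and yaw about z
            inclineX.Value = reading.PitchDegrees;
            inclineY.Value = reading.RollDegrees;
            inclineZ.Value = reading.YawDegrees;
        }
    }
}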

When you created the inclineX, inclineY, and inclineZ Bar controls for the inclinometer in MainPage.xaml, you set the Scale properties to 1.111111, 2.222222, and 0.555556, respectively. The Bar controls are 400 pixels high, allowing for positive and negative sections of 200 pixels each. The value of the PitchDegrees property varies between –180 and 180 degrees. Dividing 200 by 180 gives a scale of 1.111111. The RollDegrees property varies between –90 and 90 degrees, resulting in a scale of 2.222222. The YawDegrees property varies between 0 and 360. Even though a negative value isn’t possible, 360 is still used in the scale equation—therefore the scale is calculated using 200/360, resulting in 0.555556.

Where the inclinometer is useful for determining how much a device is tilted or rotated from a normal position, the OrientationSensor is useful for transforming a point in normal space to a point in the coordinate space of the phone. The following listing demonstrates how to use the RotationMatrix provided by OrientationSensor to transform the point (0,10,0) into a point in the phone’s coordinate system.

Listing 10.10. Transforming a point with orientation data

The listing begins with a using statement that includes the System.Windows.Media.Media3D namespace, the home of the Matrix3D struct used for point transformation. The sample point (0,10,0) is encoded into a Matrix3D struct and stored in a constant field of the MainPage class. Next, the current OrientationSensorReading is read from the OrientationSensor, and its rotation matrix is converted from a SensorRotationMatrix into a Matrix3D. The target point is transformed with matrix multiplication, and the resulting point is displayed in the user interface.
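A minimal sketch of the ReadOrientationData method, assuming a TextBlock named transformBlock and a straightforward row-for-row copy of the SensorRotationMatrix (the control name and the matrix layout are assumptions):

using System.Windows.Media.Media3D;

// The point (0,10,0) encoded as a Matrix3D so it can be transformed with
// matrix multiplication; the offsets in the last row hold the point
static readonly Matrix3D targetPoint = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 10, 0, 1);

void ReadOrientationData()
{
    if (orientationSensor == null)
        return;

    OrientationSensorReading reading = orientationSensor.GetCurrentReading();
    if (reading == null)
        return;

    // Copy the 3x3 SensorRotationMatrix into the upper-left of a Matrix3D
    SensorRotationMatrix m = reading.RotationMatrix;
    Matrix3D rotation = new Matrix3D(
        m.M11, m.M12, m.M13, 0,
        m.M21, m.M22, m.M23, 0,
        m.M31, m.M32, m.M33, 0,
        0, 0, 0, 1);

    // The transformed point ends up in the offset row of the product
    Matrix3D result = targetPoint * rotation;
    transformBlock.Text = string.Format("(0,10,0) -> ({0:F1}, {1:F1}, {2:F1})",
        result.OffsetX, result.OffsetY, result.OffsetZ);
}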

Deploy the application to your device and run it. Start the sensor and examine the inclinometer values as you move your phone around. Don’t forget to look at the point transformation message line and at how the coordinate point is transformed. Let’s take a closer look at the readings reported by the Inclinometer and OrientationSensor classes and discuss how to interpret the numbers.

10.6.2. Interpreting the numbers

To understand the uses of the Inclinometer and the OrientationSensor, you need to understand the frame of reference, or coordinate system, for both the real world and the device. These two classes assume a real-world coordinate system where y points due north, z points straight up, and x points due east. When the device is lying flat, face up, with the top of the device pointing north, the device’s frame of reference matches the real-world frame of reference. This is shown in figure 10.10.

Figure 10.10. The device frame aligns with the world frame when the device is lying flat and pointed north.

The Yaw, Pitch, and Roll readings are all approximately zero, and the rotation matrix is the identity matrix. An object at point (0,10,0) in the world frame has the same coordinates in the device frame. The device’s y axis is pointing north, and the x axis is pointing east.

When the device is rotated, its frame of reference no longer matches the real-world frame of reference. If the top of the device lying flat is rotated to point east, the device is considered to be rotated 270 degrees, and the inclinometer reading will have a Yaw reading of 270 degrees. The Yaw, or rotation about the z axis, is read as the counterclockwise angle between the two y axes, as shown in figure 10.11.

Figure 10.11. The device rotated 270 degrees around the z axis

Now the device’s y axis is pointing east, and the x axis is pointing south. Again, consider an object at the coordinate (0,10,0) in the world frame. This same object will have the coordinates (–10,0,0) in the device frame.

With the top of the device still pointed east, raise the top of the device until it’s in the standing portrait orientation, with the back of the device facing east, as shown in figure 10.12. In this case you’ve rotated the device frame about the x axis and changed the Pitch of the device. The inclinometer reading will still have a Yaw reading of 270 degrees but will now also have a Pitch reading of 90 degrees. The Pitch, or rotation about the x axis, is read as a counterclockwise angle.

Figure 10.12. The device rotated 270 degrees around the z axis and 90 degrees around the x axis

Now the device’s y axis is pointing up toward the sky, aligned with the world frame’s z axis. The device’s z axis is pointing to the west. The device’s x axis is still pointing south. Again, consider an object at the coordinate (0,10,0) in the world frame. This same object will still have the coordinates (–10,0,0) in the device frame because changing the pitch didn’t change the direction of the device’s x axis.

When working with the InclinometerReading, you must remember that the Yaw, Pitch, and Roll values are order dependent. To translate a point in one frame of reference to a point in another frame of reference, you must apply Yaw first, followed by Pitch, and then by Roll.
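One way to picture this order dependence is with plain rotation matrices. The following sketch isn’t SDK code, just an illustration using 3×3 arrays with counterclockwise angles in degrees; called with a yaw of 270 and a pitch of 90, it maps the world point (0,10,0) to roughly (–10,0,0) in the device frame, matching the examples above:

static double[,] RotZ(double deg)   // yaw
{
    double r = deg * Math.PI / 180, c = Math.Cos(r), s = Math.Sin(r);
    return new double[,] { { c, -s, 0 }, { s, c, 0 }, { 0, 0, 1 } };
}

static double[,] RotX(double deg)   // pitch
{
    double r = deg * Math.PI / 180, c = Math.Cos(r), s = Math.Sin(r);
    return new double[,] { { 1, 0, 0 }, { 0, c, -s }, { 0, s, c } };
}

static double[,] RotY(double deg)   // roll
{
    double r = deg * Math.PI / 180, c = Math.Cos(r), s = Math.Sin(r);
    return new double[,] { { c, 0, s }, { 0, 1, 0 }, { -s, 0, c } };
}

static double[,] Multiply(double[,] a, double[,] b)
{
    var p = new double[3, 3];
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            for (int k = 0; k < 3; k++)
                p[i, j] += a[i, k] * b[k, j];
    return p;
}

// Build the device orientation by applying yaw, then pitch, then roll,
// and use its transpose to map a world-frame point into the device frame
static double[] WorldToDevice(double yaw, double pitch, double roll,
                              double x, double y, double z)
{
    double[,] r = Multiply(Multiply(RotZ(yaw), RotX(pitch)), RotY(roll));
    return new[]
    {
        r[0, 0] * x + r[1, 0] * y + r[2, 0] * z,
        r[0, 1] * x + r[1, 1] * y + r[2, 1] * z,
        r[0, 2] * x + r[1, 2] * y + r[2, 2] * z
    };
}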

Though we’ve referred to the Inclinometer and OrientationSensor classes as sensors, they’re more services than sensors. They make use of a few different sources of data to provide a convenient service for detecting motion and position.

10.7. Summary

In this chapter we’ve covered three different hardware sensors and two classes that wrap the other sensors. The Accelerometer reports acceleration due to the forces acting on a device. The Compass reports the heading of the device relative to north. The Gyrometer reports the rotational velocity of the device. There aren’t any sensors that report linear velocity or rotational acceleration. And there’s no sensor that reports exactly how far a phone has moved.

The Inclinometer and OrientationSensor classes use data from the Accelerometer, Compass, and Gyrometer to perform a few complex calculations and provide the information necessary to convert device coordinates into real-world coordinates.

Application developers should consider mixing one or more sensors with the location service to build applications that mesh the real world with the digital world. Novel augmented-reality applications can be built to show the user the location of nearby landmarks or the position of constellations in the night sky.

In the next chapter we’ll explore the networking features of the Windows Phone SDK. You’ll learn how to determine network connection state and how to connect to web services. You’ll also learn how to send notifications to a phone from a web service.
