Chapter 24

Using Sensors

Android devices often come with hardware sensors built in, and Android provides a framework for working with those sensors. Sensors can be fun. Measuring the outside world and responding to it in software is pretty cool. It is the kind of programming experience you just don’t get on a regular computer that sits on a desk or in a server room. The possibilities for new applications that use sensors are huge, and we hope you are inspired to realize them.

In this chapter, we’ll explore the Android sensor framework. We’ll explain what sensors are and how we get sensor data, and then discuss some specifics of the kinds of data we can get from sensors and what we can do with it. While Android has defined several sensor types already, there are no doubt more sensors in Android’s future, and we expect that future sensors will get incorporated into the sensor framework.

What Is a Sensor?

In Android, a sensor is a source of data events from the physical world. This is typically a piece of hardware that has been wired into the device, but Android also provides some logical sensors that combine data from multiple physical sensors. Applications in turn use the sensor data to inform the user about the physical world, to control game play, to do augmented reality, or to provide useful tools for working in the real world. Sensors operate in one direction only; they’re read-only. That makes using them fairly straightforward. You set up a listener to receive sensor data, and then you process the data as it comes in. GPS hardware is like the sensors we cover in this chapter. In Chapter 19, we set up listeners for GPS location updates, and we processed those location updates as they came in. But although GPS is similar to a sensor, it is not part of the sensor framework that is provided by Android.

Some of the sensor types that can appear in an Android device include

  • Light sensor
  • Proximity sensor
  • Temperature sensor
  • Pressure sensor
  • Gyroscope sensor
  • Accelerometer
  • Magnetic field sensor
  • Gravity sensor
  • Linear acceleration sensor
  • Rotation vector sensor
  • Relative humidity sensor

Detecting Sensors

Don’t assume, however, that all Android devices have all of these sensors. In fact, many devices have just some of them. The Android emulator, for example, has only an accelerometer. So how do you know which sensors are available on a device? There are two ways, one direct and one indirect.

The first way is that you ask the SensorManager for a list of the available sensors. It will respond with a list of sensor objects that you can then set up listeners for and get data from. We’ll show you how a bit later in this chapter. This method assumes that the user has already installed your application onto a device, but what if the device doesn’t have a sensor that your application needs?

That’s where the second method comes in. Within the AndroidManifest.xml file, you can specify the features a device must have in order to properly support your application. If your application needs a proximity sensor, you specify that in your manifest file with a line such as the following:

<uses-feature android:name="android.hardware.sensor.proximity" />

The Google Play Store will only install your app on a device that has a proximity sensor, so you know it’s there when your application runs. The same cannot be said for all other Android app stores; some do not check that your app is installed only onto devices that support the sensors you specify.

What Can We Know About a Sensor?

While using the uses-feature tags in the manifest file lets you know that a sensor your application requires exists on a device, it doesn’t tell you everything you may want to know about the actual sensor. Let’s build a simple application that queries the device for sensor information. Listing 24-1 shows the Java code of our MainActivity.

Note

You can download this chapter’s projects. We will give you the URL at the end of the chapter. This will allow you to import these projects into your IDE directly.

Within our onCreate() method, we start by getting a reference to the SensorManager. There can be only one of these, so we retrieve it as a system service. We then call its getSensorList() method to get a list of sensors. For each sensor, we write out information about it. The output will look something like Figure 24-1.
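Listing 24-1 isn’t reproduced here, but a minimal sketch of such an activity might look like the following (the TextView-based layout and identifiers are our own illustration, not necessarily the book’s exact code):

```java
import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // SensorManager is a system service; there is only one instance
        SensorManager mgr =
                (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        StringBuilder sb = new StringBuilder();
        for (Sensor sensor : mgr.getSensorList(Sensor.TYPE_ALL)) {
            sb.append(sensor.getName())
              .append("\n  type: ").append(sensor.getType())
              .append("\n  max range: ").append(sensor.getMaximumRange())
              .append("\n  resolution: ").append(sensor.getResolution())
              .append("\n  power: ").append(sensor.getPower()).append(" mA\n");
        }
        TextView tv = new TextView(this);
        tv.setText(sb.toString());
        setContentView(tv);
    }
}
```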


Figure 24-1. Output from our sensor list app

There are a few things to know about this sensor information. The type value tells you the basic type of the sensor without getting specific. A light sensor is a light sensor, but you could get variations in light sensors from one device to another. For example, the resolution of a light sensor on one device could be different from that on another device. When you specify that your app needs a light sensor in a <uses-feature> tag, you don’t know in advance exactly what type of light sensor you’re going to get. If it matters to your application, you’ll need to query the device to find out and adjust your code accordingly.

The values you get for resolution and maximum range will be in the appropriate units for that sensor. The power measurement is in milliamperes (mA) and represents the electrical current that the sensor draws from the device’s battery; smaller is better.

Now that we know what sensors we have available to us, how do we go about getting data from them? As we explained earlier, we set up a listener in order to get sensor data sent to us. Let’s explore that now.

Getting Sensor Events

Sensors provide data to our application once we register a listener to receive the data. When our listener is not listening, the sensor can be turned off, conserving battery life, so make sure you only listen when you really need to. Setting up a sensor listener is easy to do. Let’s say that we want to measure the light levels from the light sensor. Listing 24-2 shows the Java code for a sample app that does this.

In this sample app, we again get a reference to the SensorManager, but instead of getting a list of sensors, we query specifically for the light sensor. We then set up a listener in the onResume() method of our activity, and we unregister the listener in the onPause() method. We don’t want to be worrying about the light levels when our application is not in the foreground.
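A sketch of the pattern described above might look like this (class and variable names are illustrative, not necessarily the book’s Listing 24-2):

```java
import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.widget.TextView;

public class LightMonitorActivity extends Activity
        implements SensorEventListener {
    private SensorManager mgr;
    private Sensor lightSensor;
    private TextView text;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        text = new TextView(this);
        setContentView(text);
        mgr = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
        lightSensor = mgr.getDefaultSensor(Sensor.TYPE_LIGHT);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // listen only while in the foreground, to conserve battery
        mgr.registerListener(this, lightSensor, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    protected void onPause() {
        mgr.unregisterListener(this);
        super.onPause();
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // for a light sensor, values[0] is the light level in SI lux
        text.setText("Light: " + event.values[0] + " lux\n" + text.getText());
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // accuracy is one of the SensorManager.SENSOR_STATUS_* constants
    }
}
```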

For the registerListener() method, we pass in a value representing how often we want to be notified of sensor value changes. This parameter could be

  • SENSOR_DELAY_NORMAL (represents 200,000 microsecond delay)
  • SENSOR_DELAY_UI (represents 60,000 microsecond delay)
  • SENSOR_DELAY_GAME (represents 20,000 microsecond delay)
  • SENSOR_DELAY_FASTEST (represents as fast as possible)

You can also specify an exact delay in microseconds using one of the other registerListener() methods, as long as it’s larger than 3 microseconds; however, anything less than 20,000 is not likely to be honored. It is important to select an appropriate value for this parameter. Some sensors are very sensitive and will generate a lot of events in a short amount of time. If you choose SENSOR_DELAY_FASTEST, you might even overrun your application’s ability to keep up. Depending on what your application does with each sensor event, you could be creating and destroying so many objects in memory that garbage collection causes noticeable slowdowns and hiccups on the device. On the other hand, certain sensors pretty much demand to be read as often as possible; this is true of the rotation vector sensor in particular. Also, don’t rely on this parameter to generate events with precise timing; the events could come a little faster or slower.

Because our activity implements the SensorEventListener interface, we have two callbacks for sensor events: onAccuracyChanged() and onSensorChanged(). The first method will let us know if the accuracy changes on our sensor (or sensors, since it could be called for more than one). The value of the accuracy parameter will be SENSOR_STATUS_UNRELIABLE, SENSOR_STATUS_ACCURACY_LOW, SENSOR_STATUS_ACCURACY_MEDIUM, or SENSOR_STATUS_ACCURACY_HIGH. Unreliable accuracy does not mean that the device is broken; it normally means that the sensor needs to be calibrated. The second callback method tells us when the light level has changed, and we get a SensorEvent object to tell us the details of the new value or values from the sensor.

A SensorEvent object has several members, one of them being an array of float values. For a light sensor event, only the first float value has meaning, which is the SI lux value of the light that was detected by the sensor. For our sample app, we build up a message string by inserting the new messages on top of the older messages, and then we display the batch of messages in a TextView. Our newest sensor values will always be displayed at the top of the screen.

When you run this application (on a real device, of course, since the emulator does not have a light sensor), you may notice that nothing is displayed at first. Just change the light that is shining on the upper-left corner of your device. This is most likely where your light sensor is. If you look very carefully, you might see the dot behind the screen that is the light sensor. If you cover this dot with your finger, the light level will probably change to a very small value (although it may not reach zero). The messages should display on the screen, telling you about the changing light levels.

Note

You might also notice that when the light sensor is covered, your buttons light up (if you have a device with lighted buttons). This is because Android has detected the darkness and lights up the buttons to make the device easier to use “in the dark.”

Issues with Getting Sensor Data

The Android sensor framework has problems that you need to be aware of. This is the part that’s not fun. In some cases, we have ways of working around the problem; in others we don’t, or it’s very difficult.

No Direct Access to Sensor Values

You may have noticed that there is no direct way to query a sensor’s current value. The only way to get data from a sensor is through a listener. There are two kinds of sensors: streaming and non-streaming. Streaming sensors, such as the accelerometer, send values on a regular basis. The getMinDelay() method returns a nonzero value for streaming sensors, telling you the minimum number of microseconds between sensor readings. For non-streaming sensors the return value is zero, so even once you’ve set up the listener, there is no guarantee that you’ll get a new datum within a set period of time. At least the callback is asynchronous, so you won’t block the UI thread waiting for a piece of data from a sensor. However, your application has to accommodate the fact that sensor data may not be available at the exact moment that you want it. Revisiting Figure 24-1, you’ll notice that the light sensor is non-streaming. Therefore, your app will get an event only if the light level changes. For the other sensors shown, the delay between events will be a minimum of 20 milliseconds, but could be more.
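A quick check like the following distinguishes the two kinds (a sketch of our own; the class name is illustrative):

```java
import android.hardware.Sensor;

public final class SensorKind {
    // A streaming sensor reports a nonzero minimum delay (in microseconds);
    // a non-streaming sensor reports zero and fires only on value changes.
    public static boolean isStreaming(Sensor sensor) {
        return sensor.getMinDelay() > 0;
    }
}
```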

It is possible to directly access sensors using native code and the JNI feature of Android. You’ll need to know the low-level native API calls for the sensor driver you’re interested in, plus be able to set up the interface back to Android. So it can be done, but it’s not easy.

Sensor Values Not Sent Fast Enough

Even at SENSOR_DELAY_FASTEST, you probably won’t get new values more often than every 20 ms (it depends on the device and the sensor). If you need more rapid sensor data than you can get with a rate setting of SENSOR_DELAY_FASTEST, it is possible to use native code and JNI to get to the sensor data faster, but similar to the previous situation, it is not easy.

Sensors Turn Off with the Screen

There have been problems in Android 2.x with sensor updates that get turned off when the screen is turned off. Apparently someone thought it was a good idea to not send sensor updates if the screen is off, even if your application (most likely using a service) has a wake lock. Basically, your listener gets unregistered when the screen turns off.

There are several workarounds to this problem. For more information on this issue and possible resolutions and workarounds, please refer to Android Issue 11028:

http://code.google.com/p/android/issues/detail?id=11028

Now that you know how to get data from sensors, what can you do with the data? As we said earlier, depending on which sensor you’re getting data from, the values returned in the values array mean different things. The next section will explore each of the sensor types and what their values mean.

Interpreting Sensor Data

Now that you understand how to get data from a sensor, you’ll want to do something meaningful with the data. The data you get, however, will depend on which sensor you’re getting the data from. Some sensors are simpler than others. In the sections that follow, we will describe the data that you’ll get from the sensors we currently know about. As new devices come into being, new sensors will undoubtedly be introduced as well. The sensor framework is very likely to remain the same, so the techniques we show here should apply equally well to the new sensors.

Light Sensors

The light sensor is one of the simplest sensors on a device, and one you’ve used in the first sample applications of this chapter. The sensor gives a reading of the light level detected by the light sensor of the device. As the light level changes, the sensor readings change. The units of the data are in SI lux units. To learn more about what this means, please see the “References” section at the end of this chapter for links to more information.

For the values array in the SensorEvent object, a light sensor uses just the first element, values[0]. This value is a float and ranges technically from 0 to the maximum value for the particular sensor. We say technically because the sensor may only send very small values when there’s no light, and never actually send a value of 0.

Remember also that the sensor can tell us the maximum value that it can return and that different sensors can have different maximums. For this reason, it may not be useful to consider the light-related constants in the SensorManager class. For example, SensorManager has a constant called LIGHT_SUNLIGHT_MAX, which is a float value of 120,000; however, when we queried our device earlier, the maximum value returned was 10,240, clearly much less than this constant value. There’s another one called LIGHT_SHADE at 20,000, which is also above the maximum of the device we tested. So keep this in mind when writing code that uses light sensor data.

Proximity Sensors

The proximity sensor either measures the distance that some object is from the device (in centimeters) or represents a flag to say whether an object is close or far. Some proximity sensors will give a value ranging from 0.0 to the maximum in increments, while others return either 0.0 or the maximum value only. If the maximum range of the proximity sensor is equal to the sensor’s resolution, then you know it’s one of those that only returns 0.0, or the maximum. There are devices with a maximum of 1.0 and others where it’s 6.0. Unfortunately, there’s no way to tell before the application is installed and run which proximity sensor you’re going to get. Even if you put a <uses-feature> tag in your AndroidManifest.xml file for the proximity sensor, you could get either kind. Unless you absolutely need to have the more granular proximity sensor, your application should accommodate both types gracefully.
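The maxRange-equals-resolution heuristic above can be expressed directly. This helper is our own illustration; in practice the two values would come from Sensor.getMaximumRange() and Sensor.getResolution():

```java
public final class ProximityKind {
    // Heuristic from the text: a binary near/far proximity sensor
    // reports a maximum range equal to its resolution.
    public static boolean isBinary(float maxRange, float resolution) {
        return maxRange == resolution;
    }

    // With a binary sensor, any reading below the maximum means "near".
    public static boolean isNear(float value, float maxRange) {
        return value < maxRange;
    }
}
```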

Here’s an interesting fact about proximity sensors: the proximity sensor is sometimes the same hardware as the light sensor. Android still treats them as logically separate sensors, though, so if you need data from both you will need to set up a listener for each one. Here’s another interesting fact: the proximity sensor is often used in the phone application to detect the presence of a person’s head next to the device. If the head is that close to the touchscreen, the touchscreen is disabled so no keys will be accidentally pressed by the ear or cheek while the person is talking on the phone.

The source code projects for this chapter include a simple proximity sensor monitor application, which is basically the light sensor monitor application modified to use the proximity sensor instead of the light sensor. We won’t include the code in this chapter, but feel free to experiment with it on your own.

Temperature Sensors

The old, deprecated temperature sensor (TYPE_TEMPERATURE) provided a temperature reading as a single value in values[0]. This sensor usually read an internal temperature, such as at the battery. Its replacement, TYPE_AMBIENT_TEMPERATURE, reports the temperature outside the device in degrees Celsius.

The placement of the temperature sensor is device-dependent, and it is possible that the temperature readings could be impacted by the heat generated by the device itself. The projects for this chapter include one for the temperature sensor called TemperatureSensor. It takes care of calling the correct temperature sensor based on which version of Android is running.

Pressure Sensors

This sensor measures barometric pressure, which can be used, for example, to determine altitude or to help predict weather. It should not be confused with the ability of a touchscreen to generate a MotionEvent with a pressure value (the pressure of the touch). We covered that type of pressure sensing in Chapter 22; touchscreen pressure sensing doesn’t use the Android sensor framework.

The unit of measurement for a pressure sensor is atmospheric pressure in hPa (millibar), and this measurement is delivered in values[0].

Gyroscope Sensors

Gyroscopes are very cool components that can measure the twist of a device about a reference frame. Said another way, gyroscopes measure the rate of rotation about an axis. When the device is not rotating, the sensor values will be zeros. When there is rotation in any direction, you’ll get nonzero values from the gyroscope. Gyroscopes are often used for navigation. But by itself, a gyroscope can’t tell you everything you need to know to navigate. And unfortunately, errors creep in over time. But coupled with accelerometers, you can determine the path of movement of the device.

Kalman filters can be used to fuse the data from the two sensors. Accelerometers are not terribly accurate in the short term, and gyroscopes are not very accurate in the long term, so combined they can be reasonably accurate all the time. While Kalman filters are very complex, there is an alternative called the complementary filter, which is easier to implement in code and produces results that are pretty good. These concepts are beyond the scope of this book.

The gyroscope sensor returns three values in the values array for the x, y, and z axes. The units are radians per second, and the values represent the rate of rotation around each of those axes. One way to work with these values is to integrate them over time to calculate an angle change. This is a similar calculation to integrating linear speed over time to calculate distance.
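That integration step reduces to plain arithmetic. In this sketch of our own, the elapsed time would come from the difference between successive SensorEvent.timestamp values (which are in nanoseconds):

```java
public final class GyroIntegrator {
    private float angleRad; // accumulated rotation about one axis

    // rate is the gyroscope reading in rad/s for this axis;
    // dtNanos is the time elapsed since the previous event
    public float addSample(float rateRadPerSec, long dtNanos) {
        angleRad += rateRadPerSec * (dtNanos / 1_000_000_000f);
        return angleRad;
    }
}
```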

Accelerometers

Accelerometers are probably the most utilized of the sensors on a device. Using these sensors, your application can determine the physical orientation of the device in space relative to gravity’s pull straight down, plus be aware of forces acting on the device. Providing this information allows an application to do all sorts of interesting things, from game play to augmented reality. And of course, the accelerometers tell Android when to switch the orientation of the user interface from portrait to landscape and back again.

The accelerometer coordinate system works like this: the accelerometer’s x axis originates in the bottom-left corner of the device and goes across the bottom to the right. The y axis also originates in the bottom-left corner and goes up along the left of the display. The z axis originates in the bottom-left corner and goes up in space away from the device. Figure 24-2 shows what this means.


Figure 24-2. Accelerometer coordinate system

This coordinate system is different than the one used in layouts and 2D graphics. In that coordinate system, the origin (0, 0) is at the top-left corner, and y is positive in the direction down the screen from there. It is easy to get confused when dealing with coordinate systems in different frames of reference, so be careful.

We haven’t yet said what the accelerometer values mean, so what do they mean? Acceleration is measured in meters per second squared (m/s²). Normal Earth gravity is 9.81 m/s², pulling down toward the center of the Earth. From the accelerometer’s point of view, the measurement of gravity is –9.81. If your device is completely at rest (not moving) and is on a perfectly flat surface, the x and y readings will be 0 and the z reading will be +9.81. Actually, the values won’t be exactly these because of the sensitivity and accuracy of the accelerometer, but they will be close. Gravity is the only force acting on the device when the device is at rest, and because gravity pulls straight down, if our device is perfectly flat, its effect on the x and y axes is zero. On the z axis, the accelerometer is measuring the force on the device minus gravity. Therefore, 0 minus –9.81 is +9.81, and that’s what the z value will be (a.k.a. values[2] in the SensorEvent object).

The values sent to your application by the accelerometer always represent the sum of the forces on the device minus gravity. If you were to take your perfectly flat device and lift it straight up, the z value would increase at first, because you increased the force in the up (z) direction. As soon as your lifting force stopped, the overall force would return to being just gravity. If the device were to be dropped (hypothetically—please don’t do this), it would be accelerating toward the ground, which would zero out gravity so the accelerometer would read 0 force.

Let’s take the device from Figure 24-2 and rotate it up so it is in portrait mode and vertical. The x axis is the same, pointing left to right. Our y axis is now straight up and down, and the z axis is pointing out of the screen straight at us. The y value will be +9.81, and both x and z will be 0.

What happens when you rotate the device to landscape mode and continue to hold it vertically, so the screen is right in front of your face? If you guessed that y and z are now 0 and x is +9.81, you’d be correct. Figure 24-3 shows what it might look like.


Figure 24-3. Accelerometer values in landscape vertical

When the device is not moving, or is moving with a constant velocity, the accelerometers are only measuring gravity. And in each axis, the value from the accelerometer is gravity’s component in that axis. Therefore, using some trigonometry, you could figure out the angles and know how the device is oriented relative to gravity’s pull. That is, you could tell if the device were in portrait mode or in landscape mode or in some tilted mode. In fact, this is exactly what Android does to figure out which display mode to use (portrait or landscape). Note, however, that the accelerometers do not say how the device is oriented with respect to magnetic north. So while you could know that the device is being held in landscape mode vertically, you wouldn’t know if you were facing east or west or anywhere in between. That’s where the magnetic field sensor will come in, which we will cover in a later section.

Accelerometers and Display Orientation

Accelerometers are hardware, firmly attached to the device, and as such they have a fixed orientation relative to the device that does not change as the device is turned this way or that. The values that the accelerometers send into Android will of course change as the device is moved, but the coordinate system of the accelerometers stays the same relative to the physical device. The coordinate system of the display, however, changes as the user goes from portrait to landscape and back again. In fact, depending on which way the screen is turned, portrait could be right-side up or 180 degrees upside-down. Similarly, landscape could be in one of two different rotations 180 degrees apart.

When your application is reading accelerometer data and wanting to affect the user interface correctly, your application must know how much rotation of the display has occurred to properly compensate. As your screen is reoriented from portrait to landscape, the screen’s coordinate system has rotated with respect to the coordinate system of the accelerometers. To handle this, your application must use the method Display.getRotation(). The return value is a simple integer but not the actual number of degrees of rotation. The value will be one of Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180, or Surface.ROTATION_270. These are constants with values of 0, 1, 2, and 3, respectively. This return value tells you how much the display has rotated from the “normal” orientation of the device. Because not all Android devices are normally in portrait mode, you cannot assume that portrait is at ROTATION_0.
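Because ROTATION_0 through ROTATION_270 are simply the integers 0 through 3, converting the return value of Display.getRotation() to degrees is a one-liner (a sketch of our own; the helper name is illustrative):

```java
public final class RotationHelper {
    // Surface.ROTATION_0, _90, _180, and _270 have the values 0, 1, 2, 3,
    // so multiplying by 90 yields the display rotation in degrees.
    public static int toDegrees(int rotationConstant) {
        return rotationConstant * 90;
    }
}
```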

Accelerometers and Gravity

So far, we’ve only briefly touched on what happens to the accelerometer values when the device is moved. Let’s explore that further. All forces acting on the device will be detected by the accelerometers. If you lift the device, the initial lifting force is positive in the z direction, and you get a z value greater than +9.81. If you push the device on its left side, you’ll get an initial negative reading in the x direction.

What you’d like to be able to do is separate out the force of gravity from the other forces acting on the device. There’s a fairly easy way to do this, and it’s called a low-pass filter. Forces other than gravity acting on the device will do so in a way that is typically not gradual. In other words, if the user is shaking the device, the shaking forces are reflected in the accelerometer values quickly. A low-pass filter will in effect strip out the shaking forces and leave only the steady force, which is gravity. Let’s use a sample application to illustrate this concept. It’s called GravityDemo. Listing 24-3 shows the Java code.

The result of running this application is a display that looks like Figure 24-4. This screenshot was taken as the device lay flat on a table.


Figure 24-4. Gravity, motion, and angle values

Most of this sample application is the same as the Accel Sensor application from before. The differences are in the onSensorChanged() method. Instead of simply displaying the values from the event array, we attempt to keep track of gravity and motion. You get gravity by using only a small portion of the new value from the event array, and a large portion of the previous value of the gravity array. The two portions used must add up to 1.0. We used 0.9 and 0.1. You could try other values, too, such as 0.8 and 0.2. Our gravity array cannot possibly change as fast as the actual sensor values are changing. But this is closer to reality. And this is what a low-pass filter does. The event array values would only be changing if forces were causing the device to move, and you don’t want to measure those forces as part of gravity. You only want to record into your gravity array the force of gravity itself. The math here does not mean you’re magically recording only gravity, but the values you’re calculating are going to be a lot closer than the raw values from the event array.
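The filter itself reduces to a couple of lines per axis. This sketch uses the 0.9/0.1 weights mentioned above; the array names mirror our description, not necessarily the book’s listing:

```java
public final class LowPassFilter {
    private static final float ALPHA = 0.9f; // weight on the previous gravity estimate

    // event holds the raw accelerometer values; gravity and motion are
    // updated in place, avoiding object allocation in the sensor callback
    public static void apply(float[] event, float[] gravity, float[] motion) {
        for (int i = 0; i < 3; i++) {
            gravity[i] = ALPHA * gravity[i] + (1f - ALPHA) * event[i];
            motion[i] = event[i] - gravity[i];
        }
    }
}
```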

Notice also the motion array in the code. By tracking the difference between the raw event array values and the calculated gravity values, you are basically measuring the active, non-gravity, forces on the device in the motion array. If the values in the motion array are zero or very close to zero, it means the device is probably not moving. This is useful information. Technically, a device moving in a constant speed would also have values in the motion array close to zero, but the reality is that if a user is moving the device, the motion values will be somewhat larger than zero. Users can’t possibly move a device at a perfect constant speed.

Lastly, please notice that this example does not produce new objects that need to be garbage collected. It is very important when dealing with sensor events to not create new objects; otherwise your application will spend too much time paused for garbage collection cycles.

Using Accelerometers to Measure the Device’s Angle

We wanted to show you one more thing about the accelerometers before we move on. If we go back to our trigonometry lessons, we remember that the cosine of an angle is the ratio of the adjacent side to the hypotenuse. If we consider the angle between the y axis and gravity itself, we can measure the force of gravity on the y axis and take the arccosine to determine the angle. We’ve done that in this code as well, although here we have to deal yet again with some of the messiness of sensors in Android. SensorManager defines constants for various gravity values, including Earth’s, but your actual measured values can exceed those constants. We will explain what we mean by this next.

In theory, your device at rest would measure a value for gravity equal to the constant value, but this is rarely the case. At rest, the accelerometer sensor is very likely to give us a value for gravity that is larger or smaller than the constant. Therefore, our ratio could end up greater than one, or less than negative one. This would make the acos() method complain, so we fix the ratio value to be no more than 1 and no less than –1. The corresponding angles in degrees range from 0 to 180. That’s fine except that we don’t get negative angles from 0 to –180 this way. To get the negative angles, we use another value from our gravity array, which is the z value. If the z value of gravity is negative, it means the device’s face is oriented downward. For all those values where the device face is pointed down, we make our angle negative as well, with the result being that our angle goes from –180 to +180, just as we would expect.
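The clamping and sign-fixing just described can be sketched like this (GRAVITY stands in for SensorManager.GRAVITY_EARTH, and the method name is our own):

```java
public final class TiltAngle {
    private static final double GRAVITY = 9.81; // SensorManager.GRAVITY_EARTH

    // gy, gz: filtered gravity components on the y and z axes.
    // Returns the tilt angle in degrees, from -180 to +180.
    public static double degrees(float gy, float gz) {
        double ratio = gy / GRAVITY;
        ratio = Math.max(-1.0, Math.min(1.0, ratio)); // keep acos() happy
        double angle = Math.toDegrees(Math.acos(ratio));
        // a negative z means the device's face points downward
        return gz < 0 ? -angle : angle;
    }
}
```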

Go ahead and experiment with this sample application. Notice that the value of the angle is 90 when the device is laid flat, and it’s zero (or close to it) when the device is held straight up and down in front of us. If we keep rotating down past flat, we will see the value of the angle exceed 90. If we tilt the device up more from the 0 position, the value of angle goes negative until we’re holding the device above our heads and the value of the angle is –90. Finally, you may have noticed our counter that controls how often the display is updated. Because the sensor events can come rather frequently, we decided to only display every tenth time we get values.

Magnetic Field Sensors

The magnetic field sensor measures the ambient magnetic field in the x, y, and z axes. This coordinate system is aligned just like the accelerometers’, so x, y, and z are as shown in Figure 24-2. The units of the magnetic field sensor are microteslas (µT). This sensor can detect the Earth’s magnetic field and therefore tell us where north is. It is also referred to as the compass, and in fact the <uses-feature> tag uses android.hardware.sensor.compass as the name of this sensor. Because this sensor is so tiny and sensitive, it can be affected by magnetic fields generated by things near the device, and even, to some extent, by components within the device. Therefore the accuracy of the magnetic field sensor may at times be suspect.

We’ve included a simple CompassSensor application in the download section of the web site, so feel free to import it and play with it. If you bring metal objects close to the device while this application is running, you might notice the values changing in response. Certainly, if you bring a magnet close to the device, you will see the values change. In fact, the Google Cardboard “device” uses a magnet under a physical button; when the button is pressed, the phone detects the movement as a change in the magnetic field.

You might be asking, can I use the compass sensor as a compass to detect where north is? And the answer is: not by itself. While the compass sensor can detect magnetic fields around the device, if the device is not being held perfectly flat in relation to the Earth’s surface, you’d have no way of correctly interpreting the compass sensor values. But you have accelerometers that can tell you the orientation of the device relative to the Earth’s surface! Therefore, you can create a compass from the compass sensor, but you’ll need help from the accelerometers too. So let’s see how to do that.

Using Accelerometers and Magnetic Field Sensors Together

The SensorManager provides some methods that allow us to combine the compass sensor and the accelerometers to figure out orientation. As we just discussed, the compass sensor alone can’t do the job. So SensorManager provides a method called getRotationMatrix(), which takes the latest values from the accelerometers and from the compass and fills in a rotation matrix that can be used to determine orientation.

Another SensorManager method, getOrientation(), takes the rotation matrix from the previous step and fills in an array of three orientation angles. These values tell you your device’s rotation (azimuth) relative to the Earth’s magnetic north, as well as the device’s pitch and roll relative to the ground.
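On a device you would simply call those two SensorManager methods, but it helps to see why both sensor inputs are needed. The following plain-Java sketch, with names of our own invention, mirrors the essence of the math: a cross product of the magnetic field and gravity vectors yields east, and a second cross product yields north, from which the azimuth falls out. This is a simplification for illustration, not the Android source, and it ignores degenerate inputs such as a magnetic field parallel to gravity.

```java
public class AzimuthSketch {

    // Azimuth in degrees (0 = magnetic north, 90 = east), given the
    // gravity and geomagnetic vectors in device coordinates.
    static double azimuthDegrees(float[] g, float[] m) {
        // H = geomagnetic x gravity: a vector pointing east, because it
        // is perpendicular to both "down" and the magnetic field lines.
        double hx = m[1] * g[2] - m[2] * g[1];
        double hy = m[2] * g[0] - m[0] * g[2];
        double hz = m[0] * g[1] - m[1] * g[0];
        double h = Math.sqrt(hx * hx + hy * hy + hz * hz);
        hx /= h; hy /= h; hz /= h;

        // Normalize gravity to get the "up" unit vector.
        double a = Math.sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]);
        double ax = g[0] / a, az = g[2] / a;

        // N = up x east: a vector pointing toward magnetic north.
        double ny = az * hx - ax * hz;

        // The rotation about the vertical axis is the azimuth.
        return Math.toDegrees(Math.atan2(hy, ny));
    }
}
```

With the device lying flat and its top edge pointed at magnetic north, this returns 0; rotate the device so the top edge points east and it returns 90. Without the gravity vector, there would be no way to separate the horizontal part of the magnetic field from its vertical dip, which is exactly why the compass sensor alone isn't enough.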

Magnetic Declination and GeomagneticField

There’s another topic we want to cover with regard to orientation and devices. The compass sensor will tell you where magnetic north is, but it won’t tell you where true north is (a.k.a., geographic north). Imagine you are standing at the midpoint between the magnetic north pole and the geographic north pole. They’d be 180 degrees apart. The further away you get from the two north poles, the smaller this angle difference becomes. The angle difference between magnetic north and true north is called magnetic declination. And the value can only be computed relative to a point on the planet’s surface. That is, you have to know where you’re standing to know where geographic north is in relation to magnetic north. Fortunately, Android has a way to help you out, and it’s the GeomagneticField class.

In order to instantiate an object of the GeomagneticField class, you need to pass in a latitude, a longitude, an altitude, and a timestamp. Therefore, in order to get a magnetic declination angle, you need to know where the point of reference is. You also need to know the time at which you want the value, because magnetic north drifts over time. Once instantiated, you simply call this method to get the declination angle (in degrees):

float declinationAngle = geoMagField.getDeclination();

The value of declinationAngle will be positive if magnetic north is to the east of geographic north.
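Putting it to use means adding the declination to a magnetic heading and wrapping the result back into the 0–360 range. The helper class below is our own illustration, not part of the Android API; the GeomagneticField construction (which needs a device) is shown only in the comment.

```java
public class TrueNorth {

    // On a device, the declination would come from something like:
    //   GeomagneticField field = new GeomagneticField(
    //           latitude, longitude, altitudeMeters,
    //           System.currentTimeMillis());
    //   float declination = field.getDeclination();

    // Positive declination means magnetic north lies east of true
    // north, so we add it, then wrap back into the 0-360 range.
    static float trueHeading(float magneticHeading, float declination) {
        return ((magneticHeading + declination) % 360f + 360f) % 360f;
    }
}
```

For example, with a declination of 15 degrees, a magnetic heading of 350 becomes a true heading of 5.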

Gravity Sensors

This sensor isn’t a separate piece of hardware. It’s a virtual sensor based on the accelerometers. In fact, this sensor uses logic similar to what we described earlier for accelerometers to produce the gravity component of the forces acting on a device. We cannot access this logic, however, so whatever factors and logic are used inside the gravity sensor class are what we must accept. It’s possible, though, that the virtual sensor will take advantage of other hardware such as a gyroscope to help it calculate gravity more accurately. The values array for this sensor reports gravity just like the accelerometer sensor reports its values.

Linear Acceleration Sensors

Similar to the gravity sensor, the linear acceleration sensor is a virtual sensor that represents the accelerometer forces minus gravity. Again, we did our own calculations earlier on the accelerometer sensor values to strip out gravity to get just these linear acceleration force values. This sensor makes that more convenient for you. And it could take advantage of other hardware, such as a gyroscope, to help it calculate linear acceleration more accurately. The values array reports linear acceleration just like the accelerometer sensor reports its values.
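To make the division of labor concrete, here is the kind of filtering these two virtual sensors spare you from writing yourself: a low-pass filter isolates the gravity component, and subtracting it from the raw reading leaves linear acceleration. The class name and the smoothing factor are our own illustrative choices, not anything from the Android API.

```java
public class AccelSplitter {

    static final float ALPHA = 0.8f;   // smoothing factor (illustrative)

    float[] gravity = new float[3];    // low-pass filtered estimate

    // Feed in raw accelerometer values; returns linear acceleration.
    float[] onAccelerometer(float[] raw) {
        float[] linear = new float[3];
        for (int i = 0; i < 3; i++) {
            // Low-pass: keep most of the old estimate, blend in new data.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * raw[i];
            // High-pass: what remains after gravity is removed.
            linear[i] = raw[i] - gravity[i];
        }
        return linear;
    }
}
```

With the device at rest, the gravity estimate converges on the raw readings and the linear acceleration settles toward zero, which is exactly the split the two virtual sensors report.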

Rotation Vector Sensors

The rotation vector sensor represents the orientation of the device in space, with angles relative to the frame of reference of the hardware accelerometer (see Figure 24-2). This sensor returns a set of values that represents the last three components of a unit quaternion. Quaternions are a subject that could fill a book, so we won’t be going into them here.

Thankfully, Google has provided a few methods within SensorManager to help with this sensor. The getQuaternionFromVector() method converts a rotation vector sensor output to a normalized quaternion. The getRotationMatrixFromVector() method converts a rotation vector sensor output to a rotation matrix, and that can be used with getOrientation(). When converting rotation vector sensor output to orientation angles this way, though, keep in mind that getOrientation() returns its angles in radians, ranging from –π to +π (that is, –180 to +180 degrees).
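One small piece of quaternion math is still worth seeing: because the sensor reports only the x, y, and z components of a unit quaternion, the missing w component can be recovered from the unit-length constraint. That is essentially what getQuaternionFromVector() does for you; the plain-Java sketch below (our own names, not the Android API) shows the idea.

```java
public class QuaternionSketch {

    // Recover the full quaternion {w, x, y, z} from the three rotation
    // vector values, using the fact that w^2 + x^2 + y^2 + z^2 = 1.
    static double[] toQuaternion(float x, float y, float z) {
        double wSquared = 1.0 - x * x - y * y - z * z;
        // Sensor noise can push the sum of squares slightly past 1,
        // so clamp before taking the square root.
        double w = Math.sqrt(Math.max(0.0, wSquared));
        return new double[] { w, x, y, z };
    }
}
```

A zero rotation vector, for instance, recovers the identity quaternion with w equal to 1.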

The ZIP file of sample apps for this chapter includes a version of VirtualJax that shows the rotation vector in use.

Summary

In this chapter, we covered the following topics:

  • What sensors are in Android.
  • Finding out what sensors are on a device.
  • Specifying the sensors that an application requires, so that it can be installed only on Android devices that have them.
  • Determining the properties of a sensor on a device.
  • How to get sensor events.
  • The fact that events arrive only when a sensor’s value changes, so there could be a lag before you get your first value.
  • The different speeds of updates from a sensor and when to use each one.
  • The details of a SensorEvent and how these can be used for the various sensor types.
  • Virtual sensors, made up of data from other sensors. The ROTATION_VECTOR sensor is one of these.
  • Determining the angle of the device using sensors, and telling which direction the device is facing.