Chapter 10. Positioning: accelerometers, location, and the compass

 

This chapter covers

  • Sensing gravity
  • Gauging movement
  • Determining location and orientation
  • Using Core Location

 

When we first introduced the iPhone and iPad, we highlighted a number of their unique features. Among them were three components that allow the device to figure out precisely where it is in space: a set of accelerometers (supplemented by a gyroscope on newer devices), which give it the ability to sense motion such as shaking or rotation; a locational device (using either GPS or faux GPS), which lets it figure out where in the world it is; and a compass to figure out which direction it’s facing.

Other than accessing some basic orientation information, we haven’t done much with these features. We’ll now dive into these positioning technologies and examine how to use them in your programming.

We’ll start with some new ways to look at orientation data and then explain how to use the accelerometers, compass, and GPS in real applications.

10.1. The accelerometers and orientation

The easiest use of the accelerometers is to determine the device’s current orientation. You already used the view controller’s interfaceOrientation property back in chapter 5. As we mentioned at the time, you can also access orientation information through the UIDevice object, which provides more information and real-time access that isn’t available through the view controller.

You have two ways to access the UIDevice information: through properties and through a notification. Let’s examine the orientation property first.

10.1.1. The orientation property

The easy way to access the UIDevice’s orientation information is to look at its orientation property. You must first access the UIDevice itself, which you can do by calling a special UIDevice class method, pretty much the same way you access the UIApplication object:

UIDevice *thisDevice = [UIDevice currentDevice];

After you’ve done this, you can get to the orientation property. It returns a constant drawn from UIDeviceOrientation. This looks exactly like the result of a view controller’s interfaceOrientation property, except there are three additional values, shown in table 10.1.

Table 10.1. UIDeviceOrientation defines seven device orientation values.

Constant                                 Summary

UIDeviceOrientationPortrait              Device is vertical, right side up.
UIDeviceOrientationPortraitUpsideDown    Device is vertical, upside down.
UIDeviceOrientationLandscapeLeft         Device is horizontal, tilted left.
UIDeviceOrientationLandscapeRight        Device is horizontal, tilted right.
UIDeviceOrientationFaceUp                Device is lying on its back.
UIDeviceOrientationFaceDown              Device is lying on its screen.
UIDeviceOrientationUnknown               Device is in an unknown state.

These three additional values are one reason you may want to access the UIDevice object rather than examine orientation using a view controller.

10.1.2. The orientation notification

The UIDevice class can also give you instant access to an orientation change when it occurs. This is done through a notification (a topic we introduced in chapter 6). The following code shows how to access this information:

[[UIDevice currentDevice]
    beginGeneratingDeviceOrientationNotifications];
[[NSNotificationCenter defaultCenter] addObserver:self
    selector:@selector(deviceDidRotate:)
    name:UIDeviceOrientationDidChangeNotification
    object:nil];

This is a two-step process. First, you alert the device that you’re ready to start listening for a notification about an orientation change. This is one of a pair of UIDevice instance methods, the other being endGeneratingDeviceOrientationNotifications. You generally should leave notifications on only when you need them, because they take up CPU cycles and increase your power consumption.

Second, you register to receive the UIDeviceOrientationDidChangeNotification messages, the first live example of the notification methods we introduced in chapter 6. Then, whenever an orientation change notification occurs, the deviceDidRotate: method is called. Note that you don’t receive notification of what the new orientation is; you only know that a change happened. For more details, you have to query the orientation property.
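A minimal sketch of such a handler (the body of the if statement is an assumption about what your handler might do):

- (void)deviceDidRotate:(NSNotification *)notification {
    // The notification doesn't carry the new orientation; query the property
    UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
    if (orientation == UIDeviceOrientationFaceUp) {
        // React to the device being laid flat, for example
    }
}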

You’ve now seen the two ways in which orientation can be tracked with the UIDevice object, providing more information and more rapid notification than you receive when using the view controller. But that only touches the surface of what you can do with the device’s accelerometers. It’s the raw data about changes in three-dimensional space that you’ll really want to access.

10.2. The accelerometers and movement

When you use orientation notification, the frameworks do the work for you: they take low-level acceleration reports and turn them into more meaningful events. It’s similar to the concept of actions, which turn low-level touch events into high-level control events.

 

Warning

Accelerometer programs can’t be tested on the Simulator. Instead, you need to have a fully provisioned iPhone or iPad to test your code. See appendix C for information about provisioning your device.

 

Notifications aren’t sufficient if you want to program entire interfaces that effectively use the device’s movement in three-dimensional space as a new user-input device. For that, you need to access two classes: UIAccelerometer and UIAcceleration. Let’s look at accessing and parsing data from UIAccelerometer. Later in the section, you’ll use the accelerometers to check for gravity and movement.

10.2.1. Accessing the UIAccelerometer

UIAccelerometer is a class you can use to receive acceleration-related data. It’s a shared object, like UIApplication and UIDevice. The process of using it is as follows:

- (void)viewDidLoad {
    UIAccelerometer *myAccel =
        [UIAccelerometer sharedAccelerometer];
    myAccel.updateInterval = .1;
    myAccel.delegate = self;
    [super viewDidLoad];
}

The first step is to access the accelerometer, which you do with another call to a shared-object method. Having this step on its own line is probably unnecessary, because you could perform the other two steps as nested calls, but we find this a lot more readable.

Next, you select your update interval, which specifies how often you receive information about acceleration. This is hardware limited, with a current maximum of 100 updates per second. That rate may be just right if you’re creating a game using the accelerometer, but it’s excessive for other purposes. We’ve opted for 10 updates per second, which is an updateInterval of 0.1. You should always choose the lowest acceptable update frequency to preserve power on the device.

Finally, you must set a delegate for the accelerometer, which is how you receive data on accelerometer changes. The delegate needs to respond to only one method, accelerometer:didAccelerate:, which sends a message containing a UIAcceleration object whenever acceleration occurs (to the limit of the updateInterval). Note that the class that utilizes this mechanism needs to declare the UIAccelerometerDelegate protocol in the interface.
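For example, a controller that wants accelerometer callbacks declares the protocol in its interface (the class name here is an assumption):

@interface MainViewController : UIViewController <UIAccelerometerDelegate>
@end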

10.2.2. Parsing the UIAcceleration

You can use UIAcceleration information to accurately and easily measure two things: the device’s relationship to gravity and its movement through three-dimensional space. These are both done through a set of three properties, x, y, and z, which refer to the three-dimensional axes, as shown in figure 10.1.

Figure 10.1. The accelerometers measure acceleration in three-dimensional space.

The x-axis measures along the short side of the iPhone or iPad, the y-axis measures along the long side, and the z-axis measures through the device. All values are measured in units of g, which is to say g-force. A value of 1 g represents the force of gravity on Earth at sea level.

The thing to watch for when accessing the accelerometer is that it measures two types of force applied to the device: the force of movement in any direction and the constant force of gravity. That means an iPhone or iPad at rest always shows an acceleration of 1 g toward the Earth’s core. This may require filtering if you’re doing more sophisticated work.

10.2.3. Checking for gravity

When the accelerometers are at rest, they naturally detect gravity. You can use this feature to detect the precise orientation an iPhone or iPad is currently held in, going far beyond the four or six states supported by the orientation variables.

 

Filtering and the accelerometer

It may seem that the acceleration data is mushed together, but it’s easy to isolate exactly the data you need using basic electronics techniques.

A low-pass filter passes low-frequency signals and attenuates high-frequency signals. That’s what you use to reduce the effects of sudden changes in your data, such as those caused by an abrupt motion.

A high-pass filter passes high-frequency signals and attenuates low-frequency signals. That’s what you use to reduce the effects of ongoing forces, such as gravity.

You’ll see examples of these two filtering methods in the upcoming sections.

 

Reading Acceleration Information

The following code shows how you can use the accelerometers to modify redBall, a UIImageView showing a red ball, initially set in the middle of the screen:

- (void)accelerometer:(UIAccelerometer *)accelerometer
    didAccelerate:(UIAcceleration *)acceleration {
    CGPoint curCenter = [redBall center];
    float newX = 3 * acceleration.x + curCenter.x;
    float newY = -3 * acceleration.y + curCenter.y;
    if (newX < 25) newX = 25;
    if (newY < 25) newY = 25;
    if (newX > 295) newX = 295;
    if (newY > 455) newY = 455;
    redBall.center = CGPointMake(newX,newY);
}

Any accelerometer program begins with the accelerometer:didAccelerate: method, which is called because you set the current object as the delegate of the shared accelerometer. You then note the current position of the redBall.

To access the accelerometer data, all you do is look at the x and y properties of the UIAcceleration object and prepare to modify the redBall’s position based on them. The acceleration is multiplied by 3 here to keep the ball’s movement from being snaillike. There’s also a z property for the third axis and a timestamp property indicating when the UIAcceleration object was created, neither of which you need in this example. Movement has a limited effect on the example anyway, because an abrupt movement doesn’t change the ball’s slow roll much.

After acquiring your gravitic information, you make sure the 50 × 50 red ball stays within the bounds of the screen. If you wanted to be fancy, you could introduce vectors and bounce the ball when it hits the edge, but that’s beyond the scope of this example. After that check, you move the ball. Figure 10.2 shows what this program looks like on the iPad.

Figure 10.2. Gravity test as shown on the iPad. The ball falls as if pulled by gravity and responds accordingly to changes in the orientation of the device.

With a minimal amount of work, you’ve created a program that’s acted on by gravity. This program could easily be modified to act as a level for straightening pictures (by having it move along only one axis) or could be turned into a game where a player tries to move a ball from one side of the screen to the other, avoiding pits on the way.

Now, what would it take to make this example totally functional by filtering out all movement? The answer, it turns out, is not much more work at all.

Filtering out Movement

To create a low-pass filter that lets through gravitic force but not movement, you need to average out the acceleration information you’re receiving, so that at any time the vast majority of your input is coming from the steady force of gravity. This is shown in the following code, which modifies the previous example:

gravX = (acceleration.x * kFilteringFactor)
    + (gravX * (1 - kFilteringFactor));
gravY = (acceleration.y * kFilteringFactor)
    + (gravY * (1 - kFilteringFactor));
float newX = 3 * gravX + curCenter.x;
float newY = -3 * gravY + curCenter.y;

This example depends on three predefined variables: kFilteringFactor is a constant set to .1, which means that only 10 percent of the active movement is used at any time; gravX and gravY each maintain a running average for their axis as the program runs.

You filter things by averaging 10 percent of the active movement with 90 percent of the running average. This smooths out any bumps, which means sudden acceleration is largely ignored. The example filters only the x- and y-axes, because those are the only axes it uses. If you cared about the z-axis, you’d need to filter that too.

Afterward, you use the average acceleration instead of the raw acceleration when you’re changing the position of the ball. The gravity information can be extracted from what looked like an imposing mass of data with a couple of lines of code.

As you’ll see, looking at only the movement is just as easy.

10.2.4. Checking for movement

In the previous example, you isolated the gravitic portion of the accelerometer’s data by creating a simple low-pass filter. With that data in hand, it’s trivial to create a high-pass filter. All you need to do is subtract the low-pass filtered data from the acceleration value; the result is the pure movement data:

gravX = (acceleration.x * kFilteringFactor)
        + (gravX * (1 - kFilteringFactor));
gravY = (acceleration.y * kFilteringFactor)
        + (gravY * (1 - kFilteringFactor));
float moveX = acceleration.x - gravX;
float moveY = acceleration.y - gravY;

This filter doesn’t entirely stop gravitic movement, because it takes several iterations for the program to cut out gravity completely. In the meantime, the program is influenced by gravity for a few fractions of a second at startup. If that’s a problem, you can tell the program to ignore acceleration input for a second after it loads and after an orientation change. We’ll show the first solution in the next example.

With that exception, as soon as you start using these new moveX and moveY variables, you’re looking at the filtered movement information rather than the filtered gravity information. But when you start looking at movement information, you see that it’s trickier to use than gravity information. There are two reasons for this.

First, movement information is a lot more ephemeral. It appears for a second, and then it’s gone again. If you’re displaying some type of continuous movement, as with the red ball example, you need to make your program much more sensitive to detect the movements. You’d have to multiply the moveX and moveY values by about 25 to see movement forces applied to the ball in any recognizable manner.

Second, movement information is a lot noisier. As you’ll see when we look at real movement data, motion occurs in a multitude of directions at the same time, forcing you to parse out the exact information you want.

Ultimately, to interpret movement, you have to be more sophisticated, recognizing what are effectively gestures in three-dimensional space.

10.2.5. Recognizing simple accelerometer movement

If you want to write programs using acceleration gestures, we suggest that you download the Accelerometer Graph program available from Apple’s developer site. This is a nice, simple example of accelerometer use; but more important, it also provides you with a clear display of what the accelerometers report as you make different gestures. Make sure you enable the high-pass filter to get the clearest results.

Figure 10.3 shows what the Accelerometer Graph looks like in use (but without movement occurring). As you move the device around, you’ll quickly come to see how the accelerometers respond.

Figure 10.3. The Accelerometer Graph shows movement in all three directions.

Here are some details you’ll notice about how the accelerometers report information when you look at the Accelerometer Graph:

  • Most gestures cause all three accelerometers to report force; the largest force should usually be in the axis of main movement.
  • Even though there’s usually a compensating stop force, the start force is typically larger and shows the direction of main movement.
  • Casual movement usually results in forces of .1 g to .5 g.
  • Slightly forceful movement usually tops out at 1 g.
  • A shake or other more forceful action usually results in a 2 g force.
  • The accelerometers can show things other than simple movement. For example, when you’re walking with an iPhone or iPad, you can see the rhythm of your pace in the accelerometers.

All of this suggests a simple methodology for detecting basic accelerometer movement: you monitor the accelerometer over the course of movement, saving the largest acceleration in each direction. When the movement has ended, you can report the largest acceleration as the direction of movement.

The following listing puts these lessons together in a program that could easily be used to report the direction of the device’s movement (which you could then use to take some action).

Listing 10.1. Movement reporter that could be applied as a program controller
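A minimal sketch of such a movement reporter, assuming float ivars gravX, gravY, and maxMove, NSTimeInterval ivars startTime and lastMoveTime, BOOL ivars maxIsX and moving, and a UILabel outlet moveReporter (all names are assumptions):

- (void)accelerometer:(UIAccelerometer *)accelerometer
    didAccelerate:(UIAcceleration *)acceleration {

    // Low-pass filter isolates gravity; subtracting it leaves pure movement
    gravX = (acceleration.x * kFilteringFactor)
        + (gravX * (1 - kFilteringFactor));
    gravY = (acceleration.y * kFilteringFactor)
        + (gravY * (1 - kFilteringFactor));
    float moveX = acceleration.x - gravX;
    float moveY = acceleration.y - gravY;

    // Ignore the first second of data while the filter settles
    if (startTime == 0) startTime = acceleration.timestamp;
    if (acceleration.timestamp - startTime < 1.0) return;

    if (fabsf(moveX) > .3 || fabsf(moveY) > .3) {
        // Movement in progress: remember the strongest force seen so far
        if (fabsf(moveX) > fabsf(maxMove)) { maxMove = moveX; maxIsX = YES; }
        if (fabsf(moveY) > fabsf(maxMove)) { maxMove = moveY; maxIsX = NO; }
        lastMoveTime = acceleration.timestamp;
        moving = YES;
    } else if (moving
            && acceleration.timestamp - lastMoveTime > .1) {
        // Movement is over (not just a lull): report the dominant direction
        if (maxIsX) {
            moveReporter.text = (maxMove > 0) ? @"Right" : @"Left";
        } else {
            moveReporter.text = (maxMove > 0) ? @"Up" : @"Down";
        }
        moving = NO;    // clean up for the next movement
        maxMove = 0;
    }
}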

You start by creating a low-pass filter and then taking the inverse of it in order to get relatively clean movement data. Because the data can be a little dirty at the start, you don’t accept any acceleration data sent in the first second. You could cut this down to a mere fraction of a second.

You start looking for movement whenever one of the accelerometers goes above .3 g. When that occurs, you save the direction of highest movement and keep measuring it until movement drops below .3 g. Afterward, you make sure that at least a tenth of a second has passed, so that you know you’re not in a lull during a movement.

Finally, you do whatever you want to do with your movement data. This example reports the information in a label, but you’d doubtless do something much more intricate in a live program. Cleanup is required to get the next iteration of movement reporting going.

This sample program works well, unless the movement is very subtle. In those cases, it occasionally reports the opposite direction because of the force when the device stops its motion. If this type of subtlety is a problem for your application, more work is required. To resolve this, you need to make a better comparison of the start and stop forces for movements; if they’re similar in magnitude, you’ll usually want to use the first force measured, not necessarily the biggest one. But for the majority of cases, the code in listing 10.1 is sufficient. You now have an application that can accurately report (and take action based on) direction of movement.

Together, gravity and force measurement represent the most obvious things that you can do with the accelerometers, but they’re by no means the only things. We suspect that using the accelerometers to measure three-dimensional gestures will be one of their best (and most frequent) uses as the platform matures.

10.3. The accelerometers and gestures

Three-dimensional gestures are one of the coolest results of having accelerometers inside your iPhone or iPad. They let users manipulate your programs without ever having to touch (or even look at) the screen.

To recognize a gesture, you must do two things. First, you must accurately track the movements that make up the gesture. Second, you must make sure that in doing so, you don’t recognize a random movement that wasn’t intended to be a gesture at all.

Recognizing a gesture requires only the coding foundation that we’ve discussed already. But we’ll show one example that puts that foundation into real-world use by creating a method that recognizes a shake gesture.

10.3.1. Using accelerometers

We’re defining a shake as a rapid shaking back and forth of the device, like you might shake dice in your hand before you throw them. Apple’s Accelerometer Graph is a great tool to use to figure out what’s going on. It shows a shake as primarily having these characteristics, presuming a program that’s running in portrait mode:

  • Movement is primarily along the x-axis, with some movement along the y-axis, and even less along the z-axis.
  • There are at least three peaks of movement, with alternating positive and negative forces.
  • All peaks are at least +/-1 g, with at least one peak being +/-2 g for a relatively strong shake.

You can use the preceding characteristics to define the average requirements for a shake. If you wanted to tighten them up, you’d probably require four or more peaks of movement, but for now, this will do. Alternatively, you might want to decrease the g-force requirements so that users don’t have to shake their device quite as much. We’ve detailed the code that watches for a shake in the following listing.

Listing 10.2. Shake, shake your iPhone
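A sketch of the didShake: method described below, assuming float ivars lastX, lastY, and biggestShake, an int ivar shakeCount, and an NSTimeInterval ivar lastShakeTime (all names are assumptions; gravX, gravY, and kFilteringFactor are as in the earlier examples):

- (BOOL)didShake:(UIAcceleration *)acceleration {

    // The usual low-pass filter removes gravity from the readings
    gravX = (acceleration.x * kFilteringFactor)
        + (gravX * (1 - kFilteringFactor));
    gravY = (acceleration.y * kFilteringFactor)
        + (gravY * (1 - kFilteringFactor));
    float moveX = acceleration.x - gravX;
    float moveY = acceleration.y - gravY;

    if (fabsf(moveX) >= fabsf(moveY)) {
        // Strongest force is on the x-axis: count it if it's big enough
        // and opposite the last x move (product of the two is negative)
        if (fabsf(moveX) > .75 && moveX * lastX <= 0) {
            shakeCount++;
            if (fabsf(moveX) > biggestShake) biggestShake = fabsf(moveX);
            lastX = moveX;
            lastShakeTime = acceleration.timestamp;
        }
    } else {
        // Same test for a y-axis move
        if (fabsf(moveY) > .75 && moveY * lastY <= 0) {
            shakeCount++;
            if (fabsf(moveY) > biggestShake) biggestShake = fabsf(moveY);
            lastY = moveY;
            lastShakeTime = acceleration.timestamp;
        }
    }

    // A quarter second without a qualifying move ends the candidate shake
    if (shakeCount && acceleration.timestamp - lastShakeTime > .25) {
        BOOL shook = (shakeCount >= 3 && biggestShake >= 1.25);
        shakeCount = 0;         // reset everything for the next shake
        biggestShake = 0;
        lastX = lastY = 0;
        return shook;
    }
    return NO;
}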

In this code, you generally follow the logic you used when viewing the accelerometer graph, although with increased sensitivity, as promised. The didShake: method registers a shake if it sees three or more movements of at least .75 g, at least one of which reaches 1.25 g, with movements going in opposite directions.

You start by removing gravity from the accelerometer data, as you did in previous examples. This time, you don’t worry about the quirk at the beginning of data collection; it doesn’t register as a shake, because it’s a small fraction of a g.

The main work of the function is found in its latter half, which is called whenever movement continues to occur. First, you check whether the strongest movement is along the x-axis. If so, you register the movement if it’s at least .75 g and if it’s in the opposite direction of the last x-axis move. You do the latter check by seeing if the product of the last two moves on that axis is negative; if so, one must have been positive and the other negative, which means they were opposite each other.

If the strongest move was instead on the y-axis, you check for a sufficiently strong y-axis move that’s in the opposite direction of the last y-axis move. We could have written a more restrictive shake checker that only looked for x-axis movement, or a less restrictive checker that also looked for z-axis movement, but we opted for this middle ground.

As long as movement continues without a break of more than a quarter of a second, the shake count continues to increment; when movement stops, the program is ready to determine whether a shake occurred. You check this by seeing whether the shake count equals or exceeds 3 and whether the largest movement exceeded 1.25 g. Afterward, all the variables are reset to check for the next shake.

By building this shake checker as a separate method, you could easily integrate it into a list of checks made in the accelerometer:didAccelerate: method. The following code shows a simple use that changes the color of the screen every time a shake occurs. The nextColor method can be changed to do whatever you want:

- (void)accelerometer:(UIAccelerometer *)accelerometer
    didAccelerate:(UIAcceleration *)acceleration {
    if ([self didShake:acceleration]) {
        self.view.backgroundColor = [self nextColor];
    }
}

We expect that the shake will be the most common three-dimensional gesture programmed into the iPhone or iPad. With this code, you already have it ready to go, though you may choose to change its sensitivity or to make it work in either one or three dimensions.

10.3.2. Gesture recognizer

Standard touch gestures, such as a tap, double tap, swipe, or pan, may also be of use, depending on the specifics of your program; you can take advantage of the gesture recognizer API to handle the standard gestures defined by the iOS platform.

UIGestureRecognizer is the base class for gesture recognizers under iOS. The common gestures are defined as subclasses of UIGestureRecognizer:

  • UITapGestureRecognizer—This class handles single or multiple taps.
  • UIPinchGestureRecognizer—This class recognizes pinch gestures.
  • UIRotationGestureRecognizer—This class looks for gestures when the user moves fingers opposite each other in a circular motion.
  • UISwipeGestureRecognizer—This class detects swipes in a specified direction, such as left to right or downward.
  • UIPanGestureRecognizer—This class recognizes the panning/dragging gesture.
  • UILongPressGestureRecognizer—This class handles the long press gesture.

To create a gesture recognizer, you need to decide which view to monitor for gesture events. For example, inside a view controller you might monitor for tap gestures on its view:

UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleGesture:)];
[self.view addGestureRecognizer:tap];
[tap release];

With this code, you attach the tap gesture recognizer to the view; when a tap is detected, the handleGesture: method is called to perform an animation or whatever other response the application calls for.

Seems simple, right? Let’s practice this new API with another gesture type. For example, suppose we’d like to present an alert view when the user presses the view for longer than 2 seconds.

Fire up Xcode and create an application with the View-Based Application template. Go to the view controller and add the long press gesture recognizer, as shown in the following listing.

Listing 10.3. Detect user gesture with long press gesture recognizer
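A sketch of what the listing entails (the handler name longPressed: and the alert text are assumptions):

- (void)viewDidLoad {
    [super viewDidLoad];
    UILongPressGestureRecognizer *longPress =
        [[UILongPressGestureRecognizer alloc]
            initWithTarget:self action:@selector(longPressed:)];
    longPress.minimumPressDuration = 2.0;    // the default is 0.5 seconds
    [self.view addGestureRecognizer:longPress];
    [longPress release];
}

- (void)longPressed:(UILongPressGestureRecognizer *)sender {
    // Check the state so the alert isn't presented more than once per press
    if (sender.state == UIGestureRecognizerStateEnded) {
        UIAlertView *alert = [[UIAlertView alloc]
              initWithTitle:@"Long press detected"
                    message:@"You held the view for at least 2 seconds."
                   delegate:nil
          cancelButtonTitle:@"OK"
          otherButtonTitles:nil];
        [alert show];
        [alert release];
    }
}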

In the viewDidLoad method, you create the long press recognizer and define the minimum press duration as 2 seconds; by default, this value is 0.5. When the user presses the view for at least 2 seconds, the longPressed: method gets called.

Inside the gesture recognizer, recognition is continuous. The UIGestureRecognizer’s state switches among UIGestureRecognizerStatePossible, UIGestureRecognizerStateBegan, UIGestureRecognizerStateChanged, UIGestureRecognizerStateEnded, UIGestureRecognizerStateCancelled, UIGestureRecognizerStateFailed, and UIGestureRecognizerStateRecognized. Inside the response method, you present the alert view only when the gesture recognizer’s state is ended. Without this condition, you might see the alert view pop up two times in a row, which isn’t desirable at runtime.

We’ve now covered all of the main points of the accelerometers: orientation, gravity, movement, and gestures. In iOS 4, the Core Motion framework offers raw access to the accelerometer and gyroscope data, which is useful when your app needs full three-dimensional motion information. We won’t cover its details in this book.

We’re now ready to dive into the other major positioning-related tool, and one that we find a lot easier to program because the results are less noisy: Core Location.

10.4. All about Core Location

We have only one unique feature left to look at: the device’s ability to detect a user’s location.

 

Warning

You can only minimally test Core Location using the Simulator. Longitude and latitude work, but they always report Apple’s Cupertino headquarters. Altitude isn’t displayed. For most realistic testing—particularly including distance or altitude—you must use a provisioned device.

 

iOS has three ways to detect the current location: cell phone towers, wi-fi positioning, and, most accurately, GPS. Cell-tower positioning can be off by anywhere from a few blocks to a few miles, even in an urban area. The iPhone 4 (like the iPad 3G, iPhone 3G, and 3GS) has a built-in GPS, but it still has limitations. The antenna power is limited, which affects accuracy, and accuracy is further limited by concerns about power usage. As a result, even a device with built-in GPS makes preferential use of cell tower data and resolves GPS locations using the minimal number of satellite contacts possible (although that minimum partially depends on an accuracy requirement that you set).

With all that said, the iPhone 4 provides better location information than its predecessors. But it may not be entirely accurate; in particular, altitude seems to be the least reliable reading. The wi-fi-only iPad must determine its location from nearby wi-fi networks alone, making it the least accurate.

We offer this preamble both to describe how the location information is created and to introduce a bit of skepticism about the results. What you get should be good enough for 99 percent of your programs, but you don’t want to do anything mission critical unless you’re careful.

The good news is that you don’t have to worry about which type of device a user owns. The Core Location API works identically whether they have a built-in GPS or not. Better yet, because GPS consumes a lot of power, you’ll learn how to save users’ battery life in chapter 21 by using the background location service available in the Core Location API. In this section, we’ll examine the location classes and how to use the compass. You’ll also build two applications: one that finds the current location and distance traveled and one that incorporates an altitude measurement.

10.4.1. The location classes

Location awareness is built into two API classes and one protocol. CLLocationManager gives you the ability to access location information in a variety of ways. It includes a delegate protocol, CLLocationManagerDelegate, which defines methods that can tell you when new location information arrives. Finally, the location information appears as CLLocation objects, each of which defines a specific location at a specific time.

Table 10.2 describes the most important properties associated with each of these classes. For more details, you should, as usual, consult the Apple class references. You should examine a number of additional properties and methods to aid with determining location (particularly for the CLLocation class), but we’re staying with the basics here.

Table 10.2. The most important methods and properties for accessing location information

Method/Property                  Type      Summary

Class: CLLocationManager
  delegate                       Property  Defines the object that responds to CLLocationManagerDelegate messages
  desiredAccuracy                Property  Sets the desired accuracy of location as a CLLocationAccuracy value
  distanceFilter                 Property  Specifies how much lateral movement must occur to cause a location update event
  location                       Property  Specifies the most recent location
  startUpdatingLocation          Method    Starts generating location update events
  stopUpdatingLocation           Method    Stops generating location update events
  startUpdatingHeading           Method    Starts generating heading update events
  stopUpdatingHeading            Method    Stops generating heading update events
  headingFilter                  Property  The minimum angular change required to generate a heading update event
  headingAvailable               Property  Returns true if heading events can be generated
Protocol: CLLocationManagerDelegate
  locationManager:didUpdateToLocation:fromLocation:
                                 Method    Delegate method that reports whenever an update event occurs
  locationManager:didFailWithError:
                                 Method    Delegate method that reports whenever an update event fails to occur
Class: CLLocation
  altitude                       Property  Specifies the height of the location in meters
  coordinate                     Property  Returns the location’s coordinates as a CLLocationCoordinate2D
  timestamp                      Property  Specifies an NSDate of when the location was measured

Generally, location information is generated much like accelerometer information. You access a shared object (CLLocationManager) and set some standard properties for how you want it to work, including how much movement triggers an update (distanceFilter). As with the accelerometer, you also have to explicitly turn on location updating (startUpdatingLocation). Afterward, you keep an eye on certain methods (as defined by CLLocationManagerDelegate). These methods deliver an object (CLLocation) when the location changes; you read the object to get the specifics.

With those generalities out of the way, let’s see how CLLocation works in a real example.

10.4.2. An example using location and distance

This section shows an example of using Core Location to record a starting location, monitor the current location, and calculate the distance between them. As usual, the foundation of this program is built in Xcode. Figure 10.4 displays the general setup.

Figure 10.4. This simple utility shows off locations and distance.

There are three labels: startLabel (at the top) and endLabel (at the bottom) each display information about a location; distanceLabel shows the distance between the two. There are two controls: a button control instantly updates the current location, and a segmented control chooses between miles and kilometers. They’re each linked to an IBAction, which executes a method that you’ll meet in the code.

The following listing shows the code. This is the first of two longer examples in this chapter.

Listing 10.4. An application of Core Location for distances
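A condensed sketch of the listing’s key methods, assuming a CLLocationManager ivar locManager, a retained CLLocation ivar startLocation, the label outlets named above, and a segmented control outlet unitControl (all names are assumptions):

- (void)viewDidLoad {
    [super viewDidLoad];
    locManager = [[CLLocationManager alloc] init];
    locManager.delegate = self;
    locManager.desiredAccuracy = kCLLocationAccuracyNearestTenMeters;
    locManager.distanceFilter = 100;       // update every ~100 meters
    [locManager startUpdatingLocation];
}

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation {
    if (!startLocation) {
        startLocation = [newLocation retain];    // first fix is the start
        [self updateLocationFor:startLabel toLocation:startLocation];
    }
    [self updateLocationFor:endLabel toLocation:newLocation];
    [self updateDistanceLabel:newLocation];
}

- (IBAction)setEnd:(id)sender {
    // No clean "update now" call exists, so bounce the location manager
    [locManager stopUpdatingLocation];
    [locManager startUpdatingLocation];
}

- (IBAction)controlChange:(id)sender {
    [self updateDistanceLabel:locManager.location];
}

- (void)updateDistanceLabel:(CLLocation *)newLocation {
    CLLocationDistance meters =
        [newLocation distanceFromLocation:startLocation];
    if (unitControl.selectedSegmentIndex == 0) {
        distanceLabel.text = [NSString stringWithFormat:@"%.2f miles",
            meters * 0.000621371];
    } else {
        distanceLabel.text = [NSString stringWithFormat:@"%.2f km",
            meters / 1000.0];
    }
}

- (void)updateLocationFor:(UILabel *)label toLocation:(CLLocation *)loc {
    label.text = [NSString stringWithFormat:@"%.4f, %.4f",
        loc.coordinate.latitude, loc.coordinate.longitude];
}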

This program generally follows the broad outline of steps that we’ve already discussed, but we’ll go through each step in turn.

Make sure to add the Core Location framework to your project and import CoreLocation/CoreLocation.h in all the files in which you intend to use location services. After that, you begin by initializing a CLLocationManager object and then set some standard properties—here a delegate, the desiredAccuracy, and the distanceFilter. The desired accuracy of tens of meters and the update interval of every 100 meters may be more than this particular application requires, but you can tune these in your projects as seems appropriate. Remember that demanding more accuracy and updating more frequently will decrease the battery life of your user’s iPhone or iPad. Finally, you start the CLLocationManager running.

The locationManager:didUpdateToLocation:fromLocation: method is the workhorse of this program. It should be called shortly after the location manager starts updating and every time the user moves 100 meters or so. First, it saves the current location as the starting location the first time it’s called, updating the startLabel at the same time. Then, every time it runs, it updates the endLabel and the distanceLabel. Note that you don’t have to use the location manager’s location property here (or at almost any other time in the program), because this method always provides the current location of the device; in our tests, it does so well before the location property is updated. Caveat programmer.

The next few methods have to do with I/O. The method setEnd: is run whenever the button control is pushed, to update the current location. Unfortunately, there’s no particularly clean way to ask for an update, so you must stop and start the location updates, as shown here. Letting the user force a location update is particularly important if you’re using a high distanceFilter or if you’re trying to measure altitude changes. In the altitude example in the next section, you’ll see an alternative way to do this, where the location manager usually isn’t running at all. The controlChange: method is run whenever the segmented control is updated. It updates the distanceLabel. Note that this is the one time when you depend on the location property, because no location event fires when the user taps the control.

The last few methods are utilities. The updateDistanceLabel: method makes use of an interesting CLLocation method that we haven’t discussed, distanceFromLocation:. This measures the true distance between two locations, using calculations that correctly account for the curvature of the Earth. The method then converts meters to kilometers or to miles, depending on the status of the segmented control. Finally, updateLocationFor:toLocation: updates either the startLabel or the endLabel by extracting the latitude and longitude coordinates from the CLLocation object it’s passed.

The result is a program that can show a simple distance traveled in a single direction. If we were going to improve it, we’d probably save the starting location to a file and perhaps even make it possible to record multiple trips. But for the purposes of showing how Core Location works, this is sufficient.

There’s one thing that the example didn’t show: how to measure altitude. It’s another CLLocation property, but you’ll write another short program to highlight this part of Core Location.

10.4.3. An example using altitude

Altitude is as easy to work with as longitude and latitude. It’s another property that can be read from a CLLocation object. The biggest problem is that it isn’t available to all users. The Simulator and the original iPhone don’t support altitude.

Apple suggests using the following code to determine whether altitude is unavailable:

if (signbit(newLocation.verticalAccuracy)) {
    // Negative verticalAccuracy means the altitude reading is invalid
}

If signbit returns nonzero (meaning verticalAccuracy is negative), you need to stop checking for altitude information.

Even if a user has a GPS-equipped iPhone or an iPad 3G, you must watch out for two other gotchas. First, altitude information can be 10 times more inaccurate than the rest of the location information. Adjust your desiredAccuracy accordingly. Second, remember that the Core Location information updates only when the user moves a certain distance, as determined by the distanceFilter, in a nonvertical direction. This means you need to allow the user to update the altitude by hand rather than depending on automatic updates.

Listing 10.5 repeats the techniques you used previously, applying them to altitude. It also shows another useful integration of user input with a slightly more complex program. As usual, its core objects are built in Xcode: three UILabels, one UITextField, two UIImageViews, and a UIActivityIndicatorView. The last is the most interesting, because you haven’t seen it before; we’ll talk about it in our quick discussion of the code. You should be able to pick out all of the objects other than the activity indicator in figure 10.5, which follows the code.

Figure 10.5. An altitude program measures how high you’ve climbed on a mountain of your choice.

Listing 10.5. Keeping track of a mountain climb with your iPhone
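A sketch of the listing’s key methods, assuming a CLLocationManager ivar locManager, outlets heightField, heightLabel, dotView, and activityIndicator, and a float ivar destinationHeight (all names are assumptions; the dot-positioning arithmetic is purely illustrative):

- (void)viewDidLoad {
    [super viewDidLoad];
    heightField.returnKeyType = UIReturnKeyDone;
    locManager = [[CLLocationManager alloc] init];
    locManager.delegate = self;
    locManager.desiredAccuracy = kCLLocationAccuracyBest;
    // Note: location updating is NOT started here; the user triggers it
}

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    [textField resignFirstResponder];
    return YES;
}

- (IBAction)setHeight:(id)sender {
    destinationHeight = [heightField.text floatValue];
    [self resetGPS:sender];
}

- (IBAction)resetGPS:(id)sender {
    [locManager startUpdatingLocation];    // brief, one-shot use
    [activityIndicator startAnimating];    // hidesWhenStopped is set in Xcode
}

- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation {
    if (!signbit(newLocation.verticalAccuracy)) {
        // Valid altitude: update the label and move the climber's dot
        heightLabel.text = [NSString stringWithFormat:@"%.0f m",
            newLocation.altitude];
        float fraction = newLocation.altitude / destinationHeight;
        if (fraction > 1) fraction = 1;
        dotView.center = CGPointMake(dotView.center.x,
            400 - fraction * 300);    // illustrative screen coordinates
    }
    [locManager stopUpdatingLocation];
    [activityIndicator stopAnimating];     // the indicator hides itself
}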

Much of this code combines two SDK elements that you’ve already met: the flourishes necessary to make a UITextField work and the protocols you must follow to use a location manager. You can see both of these elements in the viewDidLoad method, which sets up the text field’s return key and then configures the location manager. Note that you don’t start the location manager updating; you can’t depend on it to update when you’re measuring only vertical change, so it’s best to have the user do it by hand. Next, you finish the text field’s functionality with the textFieldShouldReturn: method, which you’ve met before.

This project contains two controls that can generate actions. When the text field is adjusted, the project saves that destination height for future calculation and then updates the current height using the resetGPS: method. The latter method is also used when the Check Height button is pressed. Figure 10.5 shows these input devices for clarity.

Note that resetGPS: does two things. First, it starts the location update, which you turn on only for brief, one-time uses. In addition to being more appropriate for monitoring altitude, this also helps save energy. Second, it starts your activity indicator. This object is created visually in Xcode, where you should mark it with the hidesWhenStopped property. The view is automatically hidden so it doesn’t appear when the program is loaded. As a result, there’s nothing on the screen until you start the animation, at which time a little activity indicator appears and continues animating until it’s stopped (which you’ll see in a minute).

The heavy lifting is done when the location manager reports back its information. Here you check whether you’re getting valid altitude information. If you are, you move the dot image and update its height label. To finish, you turn off the location update and then stop the animation of the activity indicator, which makes it disappear.

Voila! You have a working altitude monitor (if you have an iPhone 4, iPhone 3G, 3GS, or iPad 3G) and a nice combination of a few different SDK elements.

10.4.4. Using the compass

In addition to knowing your location, the iPhone 4 and 3GS can tell which direction you’re heading, because they have a built-in magnetic compass.

With the addition of the CLHeading class to the Core Location framework, you can now determine your magnetic heading as well as your true heading. The magnetic heading uses the built-in magnetometer and points to magnetic north, whereas the true heading uses your current location and points to true north.

Let’s first examine the properties of the CLHeading class. Table 10.3 describes each of these properties.

Table 10.3. Properties of CLHeading used for determining the device’s heading

Property          Description

magneticHeading   The heading relative to magnetic north. This value comes from the built-in magnetometer and ranges from 0 to 360.
trueHeading       The heading relative to geographic (true) north. This property relies on the current location, so it isn’t always guaranteed to be valid. It ranges from 0 to 360.
headingAccuracy   The error, in degrees, of the magneticHeading. A low value means the heading is relatively accurate; a negative value means the heading is invalid and can’t be trusted.
timestamp         The time when the heading was measured.

In addition, you have access to the raw geomagnetic data through x, y, and z properties, which you can use individually.

Accessing the compass information is similar to accessing the GPS information. You first get a reference to the CLLocationManager object, and then you may begin collecting data:

- (void)viewDidLoad {
    [super viewDidLoad];
    // Keep a reference (here, an ivar) so the manager isn't deallocated
    locationManager = [[CLLocationManager alloc] init];
    if (locationManager.headingAvailable == YES) {
        locationManager.delegate = self;
        [locationManager startUpdatingHeading];
    }
}

You first create a new CLLocationManager to interact with the heading data; keeping a strong reference ensures that it isn’t deallocated while updates are in flight. The headingAvailable check ensures that the device supports the compass; the only devices that return YES here are the iPhone 4 and 3GS and the iPads. If the check fails, it’s a good idea to notify users that their device doesn’t support the compass. You then start the compass, which begins sending data to the CLLocationManagerDelegate; in this case, the delegate is set to the calling class. Alternatively, if your app only works when certain sensors are available, you can declare the required hardware in the app’s Info.plist. For example, to make sure an augmented reality app runs only on devices with a magnetometer and GPS, add the UIRequiredDeviceCapabilities key to your app’s Info.plist; the App Store will then allow only devices with a magnetometer and GPS to download your app.
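A minimal sketch of that Info.plist entry, in the plist’s XML form:

<key>UIRequiredDeviceCapabilities</key>
<array>
    <string>magnetometer</string>
    <string>gps</string>
</array>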

10.4.5. Retrieving data from the compass

To retrieve data from the compass, you must implement the CLLocationManagerDelegate method locationManager:didUpdateHeading:. This method is called automatically every time the compass heading changes on the device. The heading object passed to this method contains all the data described in table 10.3. Here’s an example of how to implement this method:

- (void)locationManager:(CLLocationManager *)manager
     didUpdateHeading:(CLHeading *)heading {
    self.heading = heading;
}

This example isn’t too exciting: it only stores the heading object in a class property. That’s still useful, because you can now use the heading elsewhere in the code. The two most important properties of this heading object are magneticHeading and trueHeading.

These values are of type CLLocationDirection, which is a typedef for double. A value ranges from 0 to 360 degrees: a reading of 0 means the device is pointing north, 90 means east, 180 south, and 270 west. If the value is ever negative, it’s invalid.
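For example, a small helper (entirely hypothetical) can turn one of these readings into a compass-point name:

// Hypothetical helper: map a CLLocationDirection (0-360) to a cardinal name
static NSString *cardinalForHeading(CLLocationDirection heading) {
    if (heading < 0) return @"invalid";
    NSArray *names = [NSArray arrayWithObjects:
        @"N", @"NE", @"E", @"SE", @"S", @"SW", @"W", @"NW", nil];
    int index = (int)((heading + 22.5) / 45.0) % 8;
    return [names objectAtIndex:index];
}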

Although the compass is a simple addition, it offers much power and flexibility within your applications. The addition has allowed for development of interesting apps, including navigation systems, augmented reality apps, and many others that depend on the user’s orientation.

10.4.6. Core Location and the internet

In this section, you’ve seen a few real-world examples of how to use location information in meaningful ways, but you’ll find that you can make much better use of the information when you have an internet connection. When you do, you can feed longitudes and latitudes to various sites. For example, you can pull up maps with a site like Google Maps. You can also improve on the altitude information by instead requesting the geographic altitude of a location using a site like GeoNames. This won’t be accurate if your user is in an airplane or a tall office building, but for the majority of situations, it’ll be better than what the device can currently deliver. See chapter 14 for some examples of using Core Location with the internet.

10.5. Summary

In this chapter, we’ve covered three of the most unique features available to you as an iOS programmer.

The accelerometers can give you access to a variety of information about where a device exists in space. By sensing gravity, you can easily discover precise orientation. By measuring movement, you can see how the device is being guided through space. Finally, you can build more complex movements into three-dimensional gestures, such as a shake.

We’ve talked about the touch screen when discussing input, but the accelerometers and gyroscopes provide another method for allowing users to make simple adjustments to a program. We can imagine game controls and painting programs built entirely around the accelerometers.

The internal GPS can give you information about longitude, latitude, and altitude. The horizontal information is the most reliable, although it’s more useful when you connect to the internet. Altitude information isn’t available to everyone, and even if it is, it has a higher chance of being incorrect, so use it with caution.

The compass gives you complete information about the user’s heading. It lets you determine exactly which way the device is facing and allows for a large variety of new application types.

In the next chapter, we’ll talk about media, highlighting pictures, videos, and sounds.
