Chapter 21. Sensors

A device may contain hardware for sensing the world around itself — where it is located, how it is oriented, how it is moving.

Information about the device’s current location and how that location is changing over time, using its Wi-Fi, cellular networking, and GPS capabilities, along with information about the device’s orientation relative to north, using its magnetometer, is provided through the Core Location framework. You’ll need to import CoreLocation.

Information about the device’s change in speed and attitude using its accelerometer is provided through the UIEvent class (for device shake) and the Core Motion framework, which provides increased accuracy by incorporating the device’s gyroscope, if it has one, as well as the magnetometer; you’ll need to import CoreMotion. In addition, the device may have an extra chip that analyzes and records the user’s activity, such as walking or running; the Core Motion framework provides access to this information.

One of the major challenges associated with writing code that takes advantage of the sensors is that different devices have different hardware. If you don’t want to impose stringent restrictions on what devices your app will run on in the first place (UIRequiredDeviceCapabilities in the Info.plist), your code must be prepared to fail gracefully and possibly provide a subset of its full capabilities when it discovers that the current device lacks certain features.

Moreover, certain sensors may experience momentary inadequacy; for example, Core Location might not be able to get a fix on the device’s position because it can’t see cell towers, GPS satellites, or both. And some sensors take time to “warm up,” so that the values you’ll get from them initially will be invalid. You’ll want to respond to such changes in the external circumstances, in order to give the user a decent experience of your application regardless.

In addition, all sensor usage means battery usage, to a lesser or greater degree — sometimes to a considerably greater degree. There’s a compromise to be made here: you want to please the user with your app’s convenience and usefulness without disagreeably surprising and annoying the user through the device’s rapid depletion of its battery charge.

Core Location

The Core Location framework provides facilities for the device to determine and report its location (location services). It takes advantage of three sensors:

Wi-Fi

The device, if Wi-Fi is turned on, may scan for nearby Wi-Fi devices and compare these against an online database.

Cell

The device, if it has cell capabilities and they are not turned off, may compare nearby telephone cell towers against an online database.

GPS

The device’s GPS, if it has one, may be able to obtain a position fix from GPS satellites. The GPS is obviously the most accurate location sensor, but it takes the longest to get a fix, and in some situations it will fail — indoors, for example, or in a city of tall buildings, where the device can’t “see” enough of the sky.

Core Location will automatically use whatever facilities the device has available; all you have to do is ask for the device’s location. Core Location allows you to specify how accurate a position fix you want; more accurate fixes may require more time.

Behavior of your app may depend on the device’s physical location. To help you test, Xcode lets you pretend that the device is at a particular location on earth. The Simulator’s Debug → Location menu lets you enter a location; the Scheme editor lets you set a default location (under Options); and Xcode’s Debug → Simulate Location menu lets you switch among locations while the app runs. You can set a built-in location or supply a standard GPX file containing a waypoint. You can also set the location to None; it’s important to test for what happens when no location information is available.

Location Manager, Delegate, and Authorization

Use of Core Location requires a location manager object, an instance of CLLocationManager. This object needs to be created on the main thread and retained thereafter. A standard strategy is to pick an instance that persists throughout the life of your app — your app delegate, or your root view controller, for example — and initialize an instance property with a location manager:

let locman = CLLocationManager()

Your location manager will generally be useless without a delegate (CLLocationManagerDelegate). You don’t want to change a location manager’s delegate, so you’ll want to set it once, early in the life of the location manager. This delegate will need to be an instance that persists together with the location manager. For example, if locman is a constant property of our root view controller, then we can set the root view controller as its delegate in the root view controller’s initializer:

required init?(coder aDecoder: NSCoder) {
    super.init(coder:aDecoder)
    self.locman.delegate = self
}

You must also explicitly request authorization from the user when you first start tracking the device’s location. There are two types of authorization (starting in iOS 8):

When In Use

When In Use authorization allows your app to perform basic location determination.

Always

Always authorization gives your app use of all Core Location modes and features. (I’ll describe later what these are.)

You’ll need to decide which level of authorization you need. Do not blithely pick Always authorization just because it is broader! On the contrary, you should pick When In Use authorization unless you need Always authorization for some feature of your app.

A further complication is that the user can turn off location services as a whole. If location services are off, and if you proceed to try to use Core Location anyway, the system may put up an alert on your behalf offering to switch to the Settings app so that the user can turn location services on. The CLLocationManager class method locationServicesEnabled reports whether location services as a whole are switched on. If they are not, a possible strategy is to call startUpdatingLocation on your location manager anyway. The attempt to learn the device’s location will fail, but this failure may also cause the user to see the system alert:

if !CLLocationManager.locationServicesEnabled() {
    self.locman.startUpdatingLocation()
    return
}

Once location services are enabled, you’ll call the CLLocationManager class method authorizationStatus to learn your app’s actual authorization status. There are two types of authorization, so there are two status cases reporting that you have authorization: .authorizedWhenInUse and .authorizedAlways. If the status is .notDetermined, you can request that the system put up the authorization request alert on your behalf by calling one of two instance methods, either requestWhenInUseAuthorization or requestAlwaysAuthorization; you must also have a corresponding entry in your app’s Info.plist, either “Privacy — Location When In Use Usage Description” (NSLocationWhenInUseUsageDescription) or “Privacy — Location Always Usage Description” (NSLocationAlwaysUsageDescription), providing the body of the authorization request alert.

Oddly, neither requestWhenInUseAuthorization nor requestAlwaysAuthorization takes a completion function. Your code just continues blithely on. If you call requestWhenInUseAuthorization and then attempt to track the device’s location by calling startUpdatingLocation, you might succeed if the user grants authorization, but you might fail. The Core Location API provides no simple way for you to proceed only when you know the outcome of the authorization request.

On the other hand, when the user changes your authorization status — either granting authorization in the authorization request alert, or switching to the Settings app and providing authorization there — your location manager delegate’s locationManager(_:didChangeAuthorization:) is called. Thus, a workable strategy is to store the action you want to perform at the time you request authorization, and perform that action when the delegate learns that authorization has been granted.

Here’s a strategy for doing that. Instead of making our CLLocationManager a property of the root view controller, we have a utility class, ManagerHolder; it creates the location manager, asks for authorization if needed, and stores the function we want to call when we have authorization (for the structure of checkForLocationAccess, compare the discussion under “Music Library Authorization”):

class ManagerHolder {
    let locman = CLLocationManager()
    var doThisWhenAuthorized : (() -> ())?
    func checkForLocationAccess(always:Bool = false,
        andThen f: (()->())? = nil) {
            // no services? try to get alert
            guard CLLocationManager.locationServicesEnabled() else {
                self.locman.startUpdatingLocation()
                return
            }
            let status = CLLocationManager.authorizationStatus()
            switch status {
            case .authorizedAlways, .authorizedWhenInUse:
                f?()
            case .notDetermined:
                self.doThisWhenAuthorized = f
                always ?
                    self.locman.requestAlwaysAuthorization() :
                    self.locman.requestWhenInUseAuthorization()
            case .restricted:
                // do nothing
                break
            case .denied:
                // do nothing, or beg the user to authorize us in Settings
                break
            }
    }
}

Our utility class encapsulates management and authorization of the location manager, and this gives us flexibility: we can instantiate a ManagerHolder anywhere and its location manager will be managed correctly. My current plan is to attach a ManagerHolder instance to the root view controller, but it would be trivial to attach one to the app delegate instead, or to both.

So now I do attach a ManagerHolder instance to the root view controller, which initializes and stores it as an instance property, thus bringing the location manager to life as early as possible. For convenience, I’ll still give the root view controller a locman property, but this will be a computed property that bounces to the ManagerHolder’s location manager instance:

class ViewController: UIViewController, CLLocationManagerDelegate {
    let managerHolder = ManagerHolder()
    var locman : CLLocationManager {
        return self.managerHolder.locman
    }
    required init?(coder aDecoder: NSCoder) {
        super.init(coder:aDecoder)
        self.locman.delegate = self
    }
    // ...
}

Acting as the location manager delegate, the root view controller can now implement locationManager(_:didChangeAuthorization:) to call the function stored in the ManagerHolder:

func locationManager(_ manager: CLLocationManager,
    didChangeAuthorization status: CLAuthorizationStatus) {
        switch status {
        case .authorizedAlways, .authorizedWhenInUse:
            self.managerHolder.doThisWhenAuthorized?()
            self.managerHolder.doThisWhenAuthorized = nil
        default: break
        }
}

If we now call our ManagerHolder’s checkForLocationAccess before tracking location, everything will work correctly.

Location Tracking

To use the location manager to track the user’s location, configure the location manager (I’ll go into more detail in a moment) and then tell the location manager to startUpdatingLocation.

The location manager will begin calling the delegate’s locationManager(_:didUpdateLocations:) delegate method repeatedly. You’ll deal with each such call as it arrives. In this way, you will be kept more or less continuously informed of where the device is — until you call stopUpdatingLocation. Don’t forget to call it when you no longer need location tracking!

Your delegate should also implement locationManager(_:didFailWithError:) to receive error messages.

The pattern here is common to virtually all uses of the location manager. The location manager can do various kinds of tracking, but they all work the same way: you’ll tell it to start, a corresponding delegate method will be called repeatedly, and ultimately you’ll tell it to stop.
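
The later examples concentrate on locationManager(_:didUpdateLocations:), so here is a minimal sketch of the error delegate method; merely logging the error is an illustrative assumption (a denial of authorization, for example, might warrant calling stopUpdatingLocation instead):

func locationManager(_ manager: CLLocationManager,
    didFailWithError error: Error) {
        // many errors are transient (no position fix yet, for instance);
        // a denial of authorization also arrives here, as a CLError
        print("location manager failed: \(error)")
}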

Here are some location manager configuration properties that are useful to set before you start location tracking:

desiredAccuracy

Your choices are:

  • kCLLocationAccuracyBestForNavigation

  • kCLLocationAccuracyBest

  • kCLLocationAccuracyNearestTenMeters

  • kCLLocationAccuracyHundredMeters

  • kCLLocationAccuracyKilometer

  • kCLLocationAccuracyThreeKilometers

It might be sufficient for your purposes to know very quickly but very roughly the device’s location. Highest accuracy may also cause the highest battery drain; indeed, kCLLocationAccuracyBestForNavigation is supposed to be used only when the device is connected to external power. The accuracy setting is not a filter: the location manager will send you whatever location information it has, even if it isn’t as accurate as you asked for, and checking a location’s horizontalAccuracy is then up to you.

distanceFilter

Perhaps you don’t need a location report unless the device has moved a certain distance since the previous report. This property can help keep you from being bombarded with events you don’t need.

activityType

Your choices are (CLActivityType):

  • .fitness

  • .automotiveNavigation

  • .otherNavigation

  • .other

This affects how persistently and frequently updates will be sent, based on the movement of the device. Think of it as an autopause setting. The more we are willing to accept pauses in the sending of updates, the less power we will use. Thus, .automotiveNavigation uses more power than .fitness, but there’s a hope that in the former case the device may be connected to a source of power. You could go even further and set pausesLocationUpdatesAutomatically to false — but don’t.

Here’s a basic example, taking advantage of the authorization strategy described in the previous section:

func doFindMe () {
    self.managerHolder.checkForLocationAccess {
        self.locman.desiredAccuracy = kCLLocationAccuracyBest
        self.locman.activityType = .fitness
        self.locman.startUpdatingLocation()
    }
}

We have a location manager, we are set as the location manager’s delegate, we have requested authorization if needed, and if we have or can get authorization, we have started tracking location. All we have to do now is sit back and wait for our implementation of locationManager(_:didUpdateLocations:) to be called. The second parameter is an array of CLLocation, a value class that encapsulates the notion of a location. Its properties include:

coordinate

A CLLocationCoordinate2D, a struct consisting of two Doubles representing latitude and longitude.

altitude

A CLLocationDistance, which is a Double representing a number of meters.

speed

A CLLocationSpeed, which is a Double representing meters per second.

course

A CLLocationDirection, which is a Double representing degrees (not radians) clockwise from north.

horizontalAccuracy

A CLLocationAccuracy, which is a Double representing meters.

timestamp

A Date.

In this situation, the array that we receive is likely to contain just one CLLocation — and even if it contains more than one, the last CLLocation in the array is guaranteed to be the newest. Thus, it is sufficient for our locationManager(_:didUpdateLocations:) implementation to extract the last element of the array:

let REQ_ACC : CLLocationAccuracy = 10
func locationManager(_ manager: CLLocationManager,
    didUpdateLocations locations: [CLLocation]) {
        let loc = locations.last!
        let acc = loc.horizontalAccuracy
        print(acc)
        if acc < 0 || acc > REQ_ACC {
            return // wait for the next one
        }
        let coord = loc.coordinate
        print("You are at (coord.latitude) (coord.longitude)")
}

It’s instructive to see, from the console logs, how the accuracy improves as the sensors warm up and the GPS obtains a fix:

1285.19869645162
1285.19869645172
1285.19869645173
65.0
65.0
30.0
30.0
30.0
10.0
You are at ...

Where Am I?

A common desire is, rather than tracking location continuously, to get one location once. To do that, a common beginner mistake is to call startUpdatingLocation and implement locationManager(_:didUpdateLocations:) to stop updating as soon as it is called:

func locationManager(_ manager: CLLocationManager,
    didUpdateLocations locations: [CLLocation]) {
        let loc = locations.last!
        let coord = loc.coordinate
        print("You are at (coord.latitude) (coord.longitude)")
        manager.stopUpdatingLocation() // this won't work!
}

That’s unlikely to work. As I demonstrated in the preceding section, the sensors take time to warm up, and many calls to locationManager(_:didUpdateLocations:) may be made before a reasonably accurate CLLocation arrives. The correct strategy is to do just what I did in the preceding section — and then call stopUpdatingLocation at the very end, when a sufficiently accurate location has in fact been received. It’s a lot of work to get just one reading, but it’s what you have to do.
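
For concreteness, here is a minimal sketch of that strategy; it is the delegate method from the preceding section, with stopUpdatingLocation added at the point where a sufficiently accurate reading has actually arrived (REQ_ACC is the same threshold as before):

let REQ_ACC : CLLocationAccuracy = 10
func locationManager(_ manager: CLLocationManager,
    didUpdateLocations locations: [CLLocation]) {
        let loc = locations.last!
        let acc = loc.horizontalAccuracy
        if acc < 0 || acc > REQ_ACC {
            return // not good enough yet; wait for the next one
        }
        let coord = loc.coordinate
        print("You are at \(coord.latitude) \(coord.longitude)")
        manager.stopUpdatingLocation() // now it's safe to stop
}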

Or rather, it’s what you had to do. Starting in iOS 9, instead of calling startUpdatingLocation, you can call requestLocation:

self.locman.desiredAccuracy = kCLLocationAccuracyBest
self.locman.requestLocation()

Your locationManager(_:didUpdateLocations:) will be called once with a good location, based on the desiredAccuracy you’ve already set:

func locationManager(_ manager: CLLocationManager,
    didUpdateLocations locations: [CLLocation]) {
        let loc = locations.last!
        let coord = loc.coordinate
        print("You are at \(coord.latitude) \(coord.longitude)")
}

Keep in mind, however, that calling requestLocation will not magically cause an accurate location to arrive any faster! It’s a great convenience that locationManager(_:didUpdateLocations:) will be called just once, but some considerable time may elapse before that call arrives.

You do not have to call stopUpdatingLocation, though you can do so if you change your mind and decide before the location arrives that it is no longer needed.

Warning

If you call requestLocation soon after calling it previously, locationManager(_:didUpdateLocations:) may be called twice in very rapid succession (with what appear to be cached location values). I regard this as a bug.

Background Location

You can use Core Location when your app is not in the foreground. There are two quite different ways to do this:

Continuous background location

This is an extension of basic location tracking. You tell the location manager to startUpdatingLocation, and updates are permitted to continue even if the app goes into the background. Your app runs in the background in order to receive these updates.

Location monitoring

Your app does not run in the background! Rather, the system monitors location for you. If a significant location event occurs, your app may be awakened in the background (or launched in the background, if it is not running) and notified.

Continuous background location

Use of Core Location to perform continuous background updates is parallel to production of sound in the background (Chapter 14):

  • In your app’s Info.plist, the “Required background modes” key (UIBackgroundModes) should include location; you can set this up easily by checking “Location updates” under Background Modes in the Capabilities tab when editing the target.

  • Starting in iOS 9, you must also set your location manager’s allowsBackgroundLocationUpdates to true. You should do this only at moments when you actually need to start allowing background location updates, and set it back to false as soon as you no longer need to allow background updates.

The result is that if you have a location manager to which you have sent startUpdatingLocation and the user sends your app into the background, your app is not suspended: the use of location services continues, and your delegate keeps receiving location updates. You cannot start tracking locations when your app is already in the background (well, you can try, but in all probability your app will be suspended and location tracking will cease).
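
Here is a minimal sketch of how those rules might be obeyed; the method names are my own:

func startBackgroundCapableTracking() {
    // turn the flag on only when background tracking is really wanted
    self.locman.allowsBackgroundLocationUpdates = true
    self.locman.startUpdatingLocation()
}
func stopTracking() {
    self.locman.stopUpdatingLocation()
    // turn the flag back off as soon as background tracking isn't needed
    self.locman.allowsBackgroundLocationUpdates = false
}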

What the user sees when you’re tracking location in the background depends on what type of authorization you have:

When In Use

The device will make the user aware that your app is doing background location tracking, through a blue double-height status bar. The user can tap this to summon your app to the front. (If you see the blue bar momentarily as your app goes into the background, that’s because you didn’t do what I said a moment ago: set allowsBackgroundLocationUpdates to true only when you really are going to track location in the background.)

Always

When you track location in the background, the blue double-height status bar doesn’t appear, but the system may present the authorization dialog periodically (every few days).

Background use of location services can cause a power drain, but if you want your app to function as a positional data logger, it may be the only way. You can help conserve power, however, by making judicious choices, such as:

  • By setting a coarse distanceFilter value.

  • By not requiring overly high accuracy.

  • By being correct about the activityType.

  • By operating in deferred mode.

What is deferred mode? It’s an arrangement whereby your app, which is already receiving updates because you’ve called startUpdatingLocation, states that it doesn’t need to receive updates until the user has moved a specified amount or until a fixed time interval has elapsed. This can make sense if your app runs in the background; you don’t need to update your interface constantly because there isn’t any interface to update. Instead, you’re willing to accept updates in occasional batches and plot or record them whenever they happen to arrive. In this way, you conserve the device’s power, for two reasons: the device may be able to power down some of its sensors temporarily, and your app can be suspended in the background between updates.

Deferred mode is dependent on hardware capabilities; use it only if the class method deferredLocationUpdatesAvailable returns true. For this feature to work, the location manager’s desiredAccuracy must be kCLLocationAccuracyBest or kCLLocationAccuracyBestForNavigation, and its distanceFilter must be kCLDistanceFilterNone (the default); basically you’re telling the GPS to run, but you’re also telling it to accumulate readings rather than constantly reporting them to you.

To use deferred mode, call this method:

  • allowDeferredLocationUpdates(untilTraveled:timeout:)

It is reasonable to specify a very large distance or time; in fact, constants are provided for this very purpose — CLLocationDistanceMax and CLTimeIntervalMax. The reason is that, when your app is brought to the foreground, all accumulated updates are then delivered, so that your app can update its interface.

You’ll need to implement these delegate methods:

locationManager(_:didFinishDeferredUpdatesWithError:)

When this method is called, deferred mode ends; if your app is still in the background, and you want another round of deferred mode, call allowDeferredLocationUpdates again.

(It is an error to call allowDeferredLocationUpdates after calling it previously but before locationManager(_:didFinishDeferredUpdatesWithError:) is called; that’s why you want to call it again in this method.)

locationManager(_:didUpdateLocations:)

The locations: parameter in this situation may be an array containing multiple locations; these will be the accumulated updates.
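
Here is a minimal sketch of the deferred mode dance, assuming that startUpdatingLocation has already been called with kCLLocationAccuracyBest and the default distanceFilter; the deferring property is my own device for obeying the rule against calling allowDeferredLocationUpdates a second time before the first round has finished:

var deferring = false
func startDeferring() {
    guard CLLocationManager.deferredLocationUpdatesAvailable() else { return }
    guard !self.deferring else { return } // a round is already in progress
    self.deferring = true
    self.locman.allowDeferredLocationUpdates(
        untilTraveled: CLLocationDistanceMax, timeout: CLTimeIntervalMax)
}
func locationManager(_ manager: CLLocationManager,
    didFinishDeferredUpdatesWithError error: Error?) {
        self.deferring = false
        if UIApplication.shared.applicationState == .background {
            self.startDeferring() // another round
        }
}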

Location monitoring

Location monitoring is not something your app does; it’s something the system does on your behalf. Thus, it doesn’t require your app to run continuously in the background, and you do not have to set the UIBackgroundModes of your Info.plist. Your app still requires a location manager with a delegate, however, and it needs appropriate user authorization; in general, this will be Always authorization.

There are four distinct forms of location monitoring:

Significant location change monitoring

If the class method significantLocationChangeMonitoringAvailable returns true, you can call startMonitoringSignificantLocationChanges. The delegate’s locationManager(_:didUpdateLocations:) will be called whenever the device’s location has changed significantly.

Visit monitoring

By tracking significant changes in your location along with the pauses between those changes, the system decides that the user is visiting a spot. Visit monitoring is basically a form of significant location change monitoring, but requires even less power and notifies you less often, because locations that don’t involve pauses are filtered out.

If the class method significantLocationChangeMonitoringAvailable returns true, you can call startMonitoringVisits. The delegate’s locationManager(_:didVisit:) will be called whenever the user’s location pauses in a way that suggests a visit is beginning, and again whenever a visit ends. The second parameter is a CLVisit, a simple value class wrapping visit data; in addition to coordinate and horizontalAccuracy, you get an arrivalDate and departureDate. If this is an arrival, the departureDate will be Date.distantFuture. If this is a departure and we were not monitoring visits when the user arrived, the arrivalDate will be Date.distantPast.

Region monitoring

Region monitoring depends upon the previous definition of one or more regions. A region is a CLRegion, which basically expresses a geofence, an area that triggers an event when the user enters or leaves it (or both). This class is divided into two subclasses, CLBeaconRegion and CLCircularRegion. CLBeaconRegion is used in connection with iBeacon monitoring; I’m not going to discuss iBeacon in this book, so that leaves us with CLCircularRegion. Its initializer is init(center:radius:identifier:); the center: parameter is a CLLocationCoordinate2D, and the identifier: serves as a unique key. You should also set the region’s notifyOnEntry or notifyOnExit to false if you’re interested in just one type of event.

If the class method isMonitoringAvailable(for:) with an argument of CLCircularRegion.self returns true, then you can call startMonitoring(for:) for each region in which you are interested. Regions being monitored are maintained as a set, which is the location manager’s monitoredRegions. A region’s identifier serves as a unique key, so that if you start monitoring for a region whose identifier matches that of a region already in the monitoredRegions set, the latter will be ejected from the set. The following delegate methods may be called (a minimal sketch appears after this list):

  • locationManager(_:didEnterRegion:)

  • locationManager(_:didExitRegion:)

  • locationManager(_:monitoringDidFailFor:withError:)

Geofenced local notifications

This is a special case of region monitoring. You only need When In Use authorization. You configure a local notification (UNNotification, Chapter 13) using a request whose trigger is a UNLocationNotificationTrigger. The trigger’s initializer is init(region:repeats:) — and thus you can supply a CLRegion. If repeats: is true, the notification won’t be unscheduled after it fires; rather, it will fire again whenever the user crosses the region boundary in the specified direction again (depending on the CLRegion’s notifyOnEntry and notifyOnExit settings).
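
Here is a minimal sketch of the region monitoring form described above; the coordinate, radius, and identifier are hypothetical placeholders:

func monitorHomeRegion() {
    guard CLLocationManager.isMonitoringAvailable(for: CLCircularRegion.self)
        else { return }
    let center = CLLocationCoordinate2D(latitude: 34.0, longitude: -118.5)
    let region = CLCircularRegion(
        center: center, radius: 100, identifier: "home") // hypothetical
    region.notifyOnExit = false // we only care about arrivals
    self.locman.startMonitoring(for: region)
}
func locationManager(_ manager: CLLocationManager,
    didEnterRegion region: CLRegion) {
        print("entered region \(region.identifier)")
        manager.stopMonitoring(for: region) // stop as soon as we're done
}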

Location monitoring is less battery-intensive than full-fledged location tracking. That’s because it relies on cell tower positions to estimate the device’s location. Since the cell is probably working anyway — for example, the device is a phone, so the cell is always on and is always concerned with what cell towers are available — little or no additional power is required. Apple says that the system will also take advantage of other clues (requiring no extra battery drain) to decide that there may have been a change in location: for example, the device may observe a change in the available Wi-Fi networks, strongly suggesting that the device has moved.

Nevertheless, location monitoring is not cost-free. It does use the battery, and over the course of time the user will notice this. Therefore you should use it only during periods when you need it. Every startMonitoring method has a corresponding stopMonitoring method. Don’t forget to call that method when location monitoring is no longer needed! The system is performing this work on your behalf, and it will continue to do so until you tell it not to.

Warning

It is crucial that you remember to stop location monitoring. Apps that don’t remember to do this will drain the battery significantly. The user can figure this out by looking at the Battery screen in Settings, and, having no other way to turn location monitoring off, will have no choice but to delete your app (and will probably give it a bad review in the App Store).

If your app isn’t in the foreground at the time the system wants to send your location manager delegate a location monitoring event, there are two possible states in which your app might find itself:

Your app is suspended in the background

Your app is woken up (remaining in the background) long enough to receive the delegate event and do something with it.

Your app is not running at all

Your app is relaunched (remaining in the background), and your app delegate will be sent application(_:didFinishLaunchingWithOptions:) with the options: dictionary containing UIApplicationLaunchOptionsLocationKey, thus allowing you to discern the special nature of the situation. At this point you probably have no location manager — your app has just launched from scratch. You need one, and you need it to have a delegate, so that you can receive the appropriate delegate events. This is another reason why you should create a location manager and assign it a delegate early in the lifetime of the app.
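
Here is a minimal sketch of how the app delegate can detect that special launch; given the ManagerHolder design described earlier there is nothing it strictly needs to do here, but the check shows where the information arrives (.location is the Swift name for UIApplicationLaunchOptionsLocationKey):

func application(_ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions:
    [UIApplicationLaunchOptionsKey : Any]?) -> Bool {
        if launchOptions?[.location] != nil {
            // we were launched into the background by a location event;
            // the root view controller's ManagerHolder, created as the
            // storyboard loads, supplies the location manager and delegate
            // that will receive the pending delegate message
        }
        return true
}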

Heading

For appropriately equipped devices, Core Location supports use of the magnetometer to determine which way the device is facing (its heading). Although this information is accessed through a location manager, you do not need location services to be turned on merely to use the magnetometer to report the device’s orientation with respect to magnetic north; you do need location services to be turned on in order to report true north, as this depends on the device’s location.

As with location, you’ll first check that the desired feature is available (headingAvailable); then you’ll configure the location manager, and call startUpdatingHeading. The delegate will be sent locationManager(_:didUpdateHeading:) repeatedly until you call stopUpdatingHeading (or else locationManager(_:didFailWithError:) will be called).

A heading object is a CLHeading instance; its magneticHeading and trueHeading properties are CLLocationDirection values, which report degrees (not radians) clockwise from the reference direction (magnetic or true north, respectively). If the trueHeading is not available, it will be reported as -1. The trueHeading will not be available unless both of the following are true in the Settings app:

  • Location services are turned on (Privacy → Location Services).

  • Compass calibration is turned on (Privacy → Location Services → System Services).

Beyond that, explicit user authorization is not needed in order to get the device’s heading with respect to true north.

Implement the delegate method locationManagerShouldDisplayHeadingCalibration(_:) to return true if you want the system’s compass calibration dialog to be permitted to appear if needed.
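
For instance, an implementation that simply permits the dialog whenever the system thinks it’s needed:

func locationManagerShouldDisplayHeadingCalibration(
    _ manager: CLLocationManager) -> Bool {
        return true
}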

In this example, I’ll use the device as a compass. The headingFilter setting is to prevent us from being bombarded constantly with readings. For best results, the device should probably be held level (like a tabletop, or a compass); we are setting the headingOrientation so that the reported heading will be the direction in which the top of the device (the end away from the Home button) is pointing:

guard CLLocationManager.headingAvailable() else {return} // no hardware
self.locman.headingFilter = 5
self.locman.headingOrientation = .portrait
self.locman.startUpdatingHeading()

In the delegate, I’ll display our heading as a rough cardinal direction in a label in the interface (self.lab). If we have a trueHeading, I’ll use it; otherwise I’ll use the magneticHeading:

func locationManager(_ manager: CLLocationManager,
    didUpdateHeading newHeading: CLHeading) {
        var h = newHeading.magneticHeading
        let h2 = newHeading.trueHeading // -1 if no location info
        if h2 >= 0 {
            h = h2
        }
        let cards = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
        var dir = "N"
        for (ix, card) in cards.enumerated() {
            if h < 45.0/2.0 + 45.0*Double(ix) {
                dir = card
                break
            }
        }
        if self.lab.text != dir {
            self.lab.text = dir
        }
}

Acceleration, Attitude, and Activity

Acceleration results from the application of a force to the device, and is detected through the device’s accelerometer, supplemented by the gyroscope if it has one. Gravity is a force, so the accelerometer always has something to measure, even if the user isn’t consciously applying a force to the device; thus the device can report its attitude relative to the vertical.

Acceleration information can arrive in two ways:

As a prepackaged UIEvent

You can receive a UIEvent notifying you of a predefined gesture performed by accelerating the device. At present, the only such gesture is the user shaking the device.

With the Core Motion framework

You instantiate CMMotionManager and then obtain information of a desired type. You can ask for accelerometer information, gyroscope information, or device motion information (and you can also use Core Motion to get magnetometer information); device motion combines the gyroscope data with data from the other sensors to give you the best possible description of the device’s attitude in space.

Shake Events

A shake event is a UIEvent (Chapter 5). Receiving shake events involves the notion of the first responder. To receive shake events, your app must contain a UIResponder which:

  • Returns true from canBecomeFirstResponder

  • Is in fact first responder

This responder, or a UIResponder further up the responder chain, should implement some or all of these methods:

motionBegan(_:with:)

Something has started to happen that might or might not turn out to be a shake.

motionEnded(_:with:)

The motion reported in motionBegan is over and has turned out to be a shake.

motionCancelled(_:with:)

The motion reported in motionBegan wasn’t a shake after all.

It might be sufficient to implement motionEnded(_:with:), because this arrives if and only if the user performs a shake gesture. The first parameter will be the event subtype, but at present this is guaranteed to be .motionShake, so testing it is pointless.

The view controller in charge of the current view is a good candidate to receive shake events. Thus, a minimal implementation might look like this:

override var canBecomeFirstResponder : Bool {
    return true
}
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    self.becomeFirstResponder()
}
override func motionEnded(_ motion: UIEventSubtype, with e: UIEvent?) {
    print("hey, you shook me!")
}

By default, if some other object is first responder, and is of a type that supports undo (such as a UITextField), and if motionBegan(_:with:) is sent up the responder chain, and if you have not set the shared UIApplication’s applicationSupportsShakeToEdit property to false, a shake will be handled through an Undo or Redo alert. Your view controller might not want to rob any responders in its view of this capability. A simple way to prevent this is to test whether the view controller is itself the first responder; if it isn’t, we call super to pass the event on up the responder chain:

override func motionEnded(_ motion: UIEventSubtype, with e: UIEvent?) {
    if self.isFirstResponder {
        print("hey, you shook me!")
    } else {
        super.motionEnded(motion, with: e)
    }
}

Raw Acceleration

If the device has an accelerometer but no gyroscope, you can learn about the forces being applied to it, but some compromises will be necessary. The chief problem is that, even if the device is completely motionless, its acceleration values will constitute a normalized vector pointing toward the center of the earth, popularly known as gravity. The accelerometer is thus constantly reporting a combination of gravity and user-induced acceleration. This is good and bad. It’s good because it means that, with certain restrictions, you can use the accelerometer to detect the device’s attitude in space. It’s bad because gravity values and user-induced acceleration values are mixed together. Fortunately, there are ways to separate these values mathematically:

With a low-pass filter

A low-pass filter will damp out user acceleration so as to report gravity only.

With a high-pass filter

A high-pass filter will damp out the effect of gravity so as to detect user acceleration only, reporting a motionless device as having zero acceleration.

In some situations, it is desirable to apply both a low-pass filter and a high-pass filter, so as to learn both the gravity values and the user acceleration values. A common additional technique is to run the output of the high-pass filter itself through a low-pass filter to reduce noise and small twitches. Apple provides some nice sample code for implementing a low-pass or a high-pass filter; see especially the AccelerometerGraph example, which is also very helpful for exploring how the accelerometer behaves.

The technique of applying filters to the accelerometer output has some serious downsides, which are inevitable in a device that lacks a gyroscope:

  • It’s up to you to apply the filters; you have to implement boilerplate code and hope that you don’t make a mistake.

  • Filters mean latency. Your response to the accelerometer values will lag behind what the device is actually doing; this lag may be noticeable.

Reading raw accelerometer values with Core Motion is really a subset of how you read any values with Core Motion; in some ways it is similar to how you use Core Location:

  1. You start by instantiating CMMotionManager; retain the instance somewhere, typically as an instance property. There is no reason not to initialize the property directly:

    let motman = CMMotionManager()
  2. Confirm that the desired hardware is available.

  3. Set the interval at which you wish the motion manager to update itself with new sensor readings.

  4. Call the appropriate start method.

  5. You probably expect me to say now that the motion manager will call into a delegate. Surprise! A motion manager has no delegate. You have two choices:

    • Poll the motion manager whenever you want data, asking for the appropriate data property. The polling interval doesn’t have to be the same as the motion manager’s update interval; when you poll, you’ll obtain the motion manager’s current data — that is, the data generated by its most recent update, whenever that was.

    • If your app’s purpose is to collect all the data, then instead of calling a start method, you can call a start...Updates(to:withHandler:) method with a function that will be called back, preferably on a background thread managed by an OperationQueue (Chapter 24).

  6. Don’t forget to call the corresponding stop method when you no longer need data.

In this example, I will simply report whether the device is lying flat on its back. I start by configuring my motion manager; then I launch a repeating timer to trigger polling:

guard self.motman.isAccelerometerAvailable else { return }
self.motman.accelerometerUpdateInterval = 1.0 / 30.0
self.motman.startAccelerometerUpdates()
self.timer = Timer.scheduledTimer(
    timeInterval:self.motman.accelerometerUpdateInterval,
    target: self, selector: #selector(pollAccel),
    userInfo: nil, repeats: true)

My pollAccel method is now being called repeatedly. In it, I ask the motion manager for its accelerometer data. This arrives as a CMAccelerometerData object, which is a timestamp plus a CMAcceleration; a CMAcceleration is simply a struct of three values, one for each axis of the device, measured in Gs. The positive x-axis points to the right of the device. The positive y-axis points toward the top of the device, away from the Home button. The positive z-axis points out the front of the screen.

The two axes orthogonal to gravity, which are the x- and y-axes when the device is lying more or less on its back, are much more accurate and sensitive to small variation than the axis pointing toward or away from gravity. So our approach is to ask first whether the x and y values are close to zero; only then do we use the z value to learn whether the device is on its back or on its face. To keep from updating our interface constantly, we implement a crude state machine; the state property (self.state) starts out at .unknown, and then switches between .lyingDown (device on its back) and .notLyingDown (device not on its back), and we update the interface only when there is a state change:

guard let data = self.motman.accelerometerData else {return}
let acc = data.acceleration
let x = acc.x
let y = acc.y
let z = acc.z
let accu = 0.08
if abs(x) < accu && abs(y) < accu && z < -0.5 {
    if self.state == .unknown || self.state == .notLyingDown {
        self.state = .lyingDown
        self.label.text = "I'm lying on my back... ahhh..."
    }
} else {
    if self.state == .unknown || self.state == .lyingDown {
        self.state = .notLyingDown
        self.label.text = "Hey, put me back down on the table!"
    }
}

This works, but it’s sensitive to small motions of the device on the table. To damp this sensitivity, we can run our input through a low-pass filter. The low-pass filter code comes straight from Apple’s own examples, and involves maintaining the previously filtered reading as a set of properties:

func add(acceleration accel:CMAcceleration) {
    let alpha = 0.1
    self.oldX = accel.x * alpha + self.oldX * (1.0 - alpha)
    self.oldY = accel.y * alpha + self.oldY * (1.0 - alpha)
    self.oldZ = accel.z * alpha + self.oldZ * (1.0 - alpha)
}

Our polling code now starts out by passing the data through the filter:

guard let data = self.motman.accelerometerData else {return}
self.add(acceleration: data.acceleration)
let x = self.oldX
let y = self.oldY
let z = self.oldZ
// ... and the rest is as before ...

As I mentioned earlier, instead of polling, you can receive callbacks to a function. This approach is useful particularly if your goal is to receive every update or to receive updates on a background thread (or both). To illustrate, I’ll rewrite the previous example to use this technique; to keep things simple, I’ll ask for my callbacks on the main thread (the documentation advises against this, but Apple’s own sample code does it). We now start our accelerometer updates like this:

self.motman.startAccelerometerUpdates(to: .main) { data, err in
    guard let data = data else {
        print(err as Any) // err is an Optional
        self.stopAccelerometer()
        return
    }
    self.receive(acceleration:data)
}

receive(acceleration:) is just like our earlier pollAccel, except that we already have the accelerometer data:

func receive(acceleration data:CMAccelerometerData) {
    self.add(acceleration: data.acceleration)
    let x = self.oldX
    let y = self.oldY
    let z = self.oldZ
    // ... and the rest is as before ...
}

In this next example, the user is allowed to slap the side of the device into an open hand — perhaps as a way of telling it to go to the next or previous image or whatever it is we’re displaying. We pass the acceleration input through a high-pass filter to eliminate gravity (again, the filter code comes straight from Apple’s examples):

func add(acceleration accel:CMAcceleration) {
    let alpha = 0.1
    self.oldX = accel.x - ((accel.x * alpha) + (self.oldX * (1.0 - alpha)))
    self.oldY = accel.y - ((accel.y * alpha) + (self.oldY * (1.0 - alpha)))
    self.oldZ = accel.z - ((accel.z * alpha) + (self.oldZ * (1.0 - alpha)))
}

What we’re looking for, in our polling routine, is a high positive or negative x value. A single slap is likely to consist of several consecutive readings above our threshold, but we want to report each slap only once, so we take advantage of the timestamp attached to a CMAccelerometerData, maintaining the timestamp of our previous high reading as a property and ignoring readings that are too close to one another in time. Another problem is that a sudden jerk involves both an acceleration (as the user starts the device moving) and a deceleration (as the device stops moving); thus a left slap might be preceded by a high value in the opposite direction, which we might interpret wrongly as a right slap. We can compensate crudely, at the expense of some latency, with delayed performance:

func pollAccel(_: Any!) {
    guard let data = self.motman.accelerometerData else {return}
    self.add(acceleration: data.acceleration)
    let x = self.oldX
    let thresh = 1.0
    if x < -thresh {
        if data.timestamp - self.oldTime > 0.5 || self.lastSlap == .right {
            self.oldTime = data.timestamp
            self.lastSlap = .left
            self.canceltimer?.invalidate()
            self.canceltimer = .scheduledTimer(
                withTimeInterval:0.5, repeats: false) { _ in
                    print("left")
            }
        }
    } else if x > thresh {
        if data.timestamp - self.oldTime > 0.5 || self.lastSlap == .left {
            self.oldTime = data.timestamp
            self.lastSlap = .right
            self.canceltimer?.invalidate()
            self.canceltimer = .scheduledTimer(
                withTimeInterval:0.5, repeats: false) { _ in
                    print("right")
            }
        }
    }
}

The gesture we’re detecting is a little tricky to make: the user must slap the device into an open hand and hold it there; if the device jumps out of the open hand, that movement may be detected as the last in the series, resulting in the wrong report (left instead of right, or vice versa). And the latency of our gesture detection is very high.

Of course we might try tweaking some of the magic numbers in this code to improve accuracy and performance, but a more sophisticated analysis would probably involve storing a stream of all the most recent CMAccelerometerData objects and studying the entire stream to work out the overall trend.

Tip

Starting in iOS 9, some devices may be capable of recording accelerometer data for later analysis. You’ll want to look into the CMSensorRecorder class (along with CMSensorDataList and CMRecordedAccelerometerData).

Gyroscope

The inclusion of an electronic gyroscope in the panoply of onboard hardware in some devices has made a huge difference in the accuracy and speed of gravity and attitude reporting. A gyroscope has the property that its attitude in space remains constant; thus it can detect any change in the attitude of the containing device. This has two important consequences for accelerometer measurements:

  • The accelerometer can be supplemented by the gyroscope to detect quickly the difference between gravity and user-induced acceleration.

  • The gyroscope can observe pure rotation, where little or no acceleration is involved and so the accelerometer would not have been helpful. The extreme case is constant attitudinal rotation around the gravity axis, which the accelerometer alone would be completely unable to detect (because there is no user-induced force, and gravity remains constant).

It is possible to track the raw gyroscope data: make sure the device has a gyroscope (isGyroAvailable), and then call startGyroUpdates. What we get from the motion manager is a CMGyroData object, which combines a timestamp with a CMRotationRate that reports the rate of rotation around each axis, measured in radians per second, where a positive value is counterclockwise as seen by someone whose eye is pointed to by the positive axis. (This is the opposite of the direction graphed in Figure 3-9.) The problem, however, is that the gyroscope values are scaled and biased. This means that the values are based on an arbitrary scale and are gradually increasing (or decreasing) over time at a roughly constant rate. Thus there is very little merit in the exercise of dealing with the raw gyroscope data.
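
Should you nevertheless want to look at the raw data, the dance is the same as for the accelerometer; this minimal sketch polls the motion manager, assuming a timer configured as in the earlier examples:

guard self.motman.isGyroAvailable else { return } // no gyroscope
self.motman.gyroUpdateInterval = 1.0 / 30.0
self.motman.startGyroUpdates()

Then, in the polling method:

guard let data = self.motman.gyroData else { return }
let rot = data.rotationRate // radians per second around each axis
print("\(rot.x) \(rot.y) \(rot.z)") // scaled and biased values
// remember to call stopGyroUpdates() eventually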

What you are likely to be interested in is a combination of at least the gyroscope and the accelerometer. The mathematics required to combine the data from these sensors can be daunting. Fortunately, there’s no need to know anything about that. Core Motion will happily package up the calculated combination of data as a CMDeviceMotion instance, with the effects of the sensors’ internal bias and scaling already factored out. CMDeviceMotion consists of the following properties, all of which provide a triple of values corresponding to the device’s natural 3D frame (x increasing to the right, y increasing to the top, z increasing out the front):

gravity

A CMAcceleration expressing a vector with value 1 pointing to the center of the earth, measured in Gs.

userAcceleration

A CMAcceleration describing user-induced acceleration, with no gravity component, measured in Gs.

rotationRate

A CMRotationRate describing how the device is rotating around its own center. This is essentially the CMGyroData rotationRate with scale and bias accounted for.

magneticField

A CMCalibratedMagneticField describing (in its field, a CMMagneticField) the magnetic forces acting on the device, measured in microteslas. The sensor’s internal bias has already been factored out. The accuracy is one of the following (CMMagneticFieldCalibrationAccuracy):

  • .uncalibrated

  • .low

  • .medium

  • .high

attitude

A CMAttitude, descriptive of the device’s instantaneous attitude in space. When you ask the motion manager to start generating updates, you can specify a reference frame (CMAttitudeReferenceFrame) for the attitude (having first called the class method availableAttitudeReferenceFrames to ascertain that the desired reference frame is available on this device). In every case, the negative z-axis points at the center of the earth:

.xArbitraryZVertical

The x-axis and y-axis, though orthogonal to the other axes, could be pointing anywhere.

.xArbitraryCorrectedZVertical

The same as in the previous option, but the magnetometer is used to maintain accuracy (preventing drift of the reference frame over time).

.xMagneticNorthZVertical

The x-axis points toward magnetic north.

.xTrueNorthZVertical

The x-axis points toward true north. This value will be inaccurate unless you are also using Core Location to obtain the device’s location.

The attitude value’s numbers can be accessed through various CMAttitude properties corresponding to three different systems, each being convenient for a different purpose:

pitch, roll, yaw

The device’s angle of offset from the reference frame, in radians, around the device’s natural x-axis, y-axis, and z-axis respectively.

rotationMatrix

A CMRotationMatrix struct embodying a 3×3 matrix expressing a rotation in the reference frame.

quaternion

A CMQuaternion describing an attitude. (Quaternions are commonly used in OpenGL.)

In this example, we turn the device into a simple compass/clinometer, merely by asking for its attitude with reference to magnetic north and taking its pitch, roll, and yaw. We begin by making the usual preparations; notice the use of the showsDeviceMovementDisplay property, intended to allow the runtime to prompt the user if the magnetometer needs calibration:

guard self.motman.isDeviceMotionAvailable else { return }
let ref = CMAttitudeReferenceFrame.xMagneticNorthZVertical
let avail = CMMotionManager.availableAttitudeReferenceFrames()
guard avail.contains(ref) else { return }
self.motman.showsDeviceMovementDisplay = true
self.motman.deviceMotionUpdateInterval = 1.0 / 30.0
self.motman.startDeviceMotionUpdates(using: ref)
let t = self.motman.deviceMotionUpdateInterval * 10
self.timer = Timer.scheduledTimer(timeInterval:t,
    target:self, selector:#selector(pollAttitude),
    userInfo:nil, repeats:true)

In pollAttitude, we wait until the magnetometer is ready, and then we start taking attitude readings (converted to degrees):

guard let mot = self.motman.deviceMotion else {return}
let acc = mot.magneticField.accuracy.rawValue
if acc <= CMMagneticFieldCalibrationAccuracy.low.rawValue {
    return // not ready yet
}
let att = mot.attitude
let to_deg = 180.0 / .pi
print("(att.pitch * to_deg), (att.roll * to_deg), (att.yaw * to_deg)")

The values are all close to zero when the device is level (flat on its back) with its x-axis (right edge) pointing to magnetic north, and each value increases as the device is rotated counterclockwise with respect to an eye that has the corresponding positive axis pointing at it. So, for example, a device held upright (top pointing at the sky) has a pitch approaching 90; a device lying on its right edge has a roll approaching 90; and a device lying on its back with its top pointing north has a yaw approaching -90.

There are some quirks in the way Euler angles operate mathematically:

  • roll and yaw increase with counterclockwise rotation from 0 to π (180 degrees) and then jump to -π (-180 degrees) and continue to increase to 0 as the rotation completes a circle; but pitch increases to π/2 (90 degrees) and then decreases to 0, then decreases to -π/2 (-90 degrees) and increases to 0. This means that attitude alone, if we are exploring it through pitch, roll, and yaw, is insufficient to describe the device’s attitude, since a pitch value of, say, π/4 (45 degrees) could mean two different things. To distinguish those two things, we can supplement attitude with the z-component of gravity:

    let g = mot.gravity
    let whichway = g.z > 0 ? "forward" : "back"
    print("pitch is tilted (whichway)")
  • Values become inaccurate in certain orientations. In particular, when pitch is ±90 degrees (the device is upright or inverted), roll and yaw become erratic. (You may see this effect referred to as “the singularity” or as “gimbal lock.”) I believe that, depending on what you are trying to accomplish, you can solve this by using a different expression of the attitude, such as the rotationMatrix, which does not suffer from this limitation.

This next (simple and very silly) example illustrates a use of CMAttitude’s rotationMatrix property. Our goal is to make a CALayer rotate in response to the current attitude of the device. We start as before, except that our reference frame is .xArbitraryCorrectedZVertical; we are interested in how the device moves from its initial attitude, without reference to any particular fixed external direction such as magnetic north. In pollAttitude, our first step is to store the device’s current attitude in a CMAttitude property, self.ref:

guard let mot = self.motman.deviceMotion else {return}
let att = mot.attitude
if self.ref == nil {
    self.ref = att
    return
}

That code works correctly because on the first few polls, as the attitude-detection hardware warms up, deviceMotion is nil, so the guard statement returns early; and the first time we do obtain an attitude, we store it in self.ref and return once more, so we don’t proceed until we have a valid initial attitude. Our next step is highly characteristic of how CMAttitude is used: we call the CMAttitude instance method multiply(byInverseOf:), which transforms our attitude so that it is relative to the stored initial attitude:

att.multiply(byInverseOf: self.ref)

Finally, we apply the attitude’s rotation matrix directly to a layer in our interface as a transform. Well, not quite directly: a rotation matrix is a 3×3 matrix, whereas a CATransform3D, which is what we need in order to set a layer’s transform, is a 4×4 matrix. However, it happens that the top left nine entries in a CATransform3D matrix constitute its rotation component, so we start with an identity matrix and set those entries directly:

let r = att.rotationMatrix
var t = CATransform3DIdentity
t.m11 = CGFloat(r.m11)
t.m12 = CGFloat(r.m12)
t.m13 = CGFloat(r.m13)
t.m21 = CGFloat(r.m21)
t.m22 = CGFloat(r.m22)
t.m23 = CGFloat(r.m23)
t.m31 = CGFloat(r.m31)
t.m32 = CGFloat(r.m32)
t.m33 = CGFloat(r.m33)
let lay = // whatever
CATransaction.setAnimationDuration(1.0/10.0)
lay.transform = t

The result is that the layer apparently tries to hold itself still as the device rotates. The example is rather crude because we aren’t using OpenGL to draw a three-dimensional object, but it illustrates the principle well enough.

There is a quirk to be aware of in this case as well: over time, the transform has a tendency to drift. Thus, even if we leave the device stationary, the layer will gradually rotate. That is the sort of effect that .xArbitraryCorrectedZVertical is designed to help mitigate, by bringing the magnetometer into play.

Here are some additional considerations to be aware of when using Core Motion:

  • Your app should create only one CMMotionManager instance.

  • Use of Core Motion is legal while your app is running in the background. To take advantage of this, however, your app would need to be running in the background for some other reason; there is no Core Motion UIBackgroundModes setting in an Info.plist. For example, you might run in the background because you’re using Core Location, and take advantage of this to employ Core Motion as well.

  • Core Motion requires that various sensors be turned on, such as the magnetometer and the gyroscope. This can result in some increased battery drain, so try not to use any sensors you don’t have to, and remember to stop generating updates as soon as you no longer need them.

Tip

Newer devices tend to have more hardware. For example, the iPhone 6 and iPhone 6 Plus have a barometer; you can get altitude information using the CMAltimeter and CMAltitudeData classes.
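
Here is a minimal sketch of asking the barometer for altitude changes; the altimeter property and the method name are my own:

let altimeter = CMAltimeter()
func startAltitudeUpdates() {
    guard CMAltimeter.isRelativeAltitudeAvailable() else { return } // no barometer
    self.altimeter.startRelativeAltitudeUpdates(to: .main) { data, err in
        guard let data = data else { return }
        // relativeAltitude is in meters, relative to the first reading;
        // pressure is in kilopascals
        print("altitude change \(data.relativeAltitude), pressure \(data.pressure)")
    }
}
// call self.altimeter.stopRelativeAltitudeUpdates() when finished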

Motion Activity

Some devices have a motion coprocessor chip with the ability to detect, analyze, and keep a record of device motion even while the device is asleep and with very little drain on power. This is not, in and of itself, a form of location determination; it is an analysis of the device’s physical motion and attitude in order to draw conclusions about what the user has been doing while carrying or wearing the device. You can learn that the user is walking, or walked for an hour, but not where the user was walking.

Interaction with the motion coprocessor is through a CMMotionActivityManager instance. There is no reason not to initialize an instance property with it:

let actman = CMMotionActivityManager()

The device must actually have a motion coprocessor; call the class method isActivityAvailable. The user must also grant authorization, and, having granted it, can later deny it (in the Settings app, under Privacy → Motion & Fitness). Like location services, the user can turn off this feature in general (Fitness Tracking) as well as denying it to your app in particular. There are no authorization methods; the technique is to “tickle” the activity manager by trying to query it and seeing if you get an error. In this example, I have a Bool property, self.isAuthorized, which I set based on the outcome of trying to query the activity manager:

guard CMMotionActivityManager.isActivityAvailable() else { return }
let now = Date()
self.actman.queryActivityStarting(from:now, to:now, to:.main) { arr, err in
    let notauth = Int(CMErrorMotionActivityNotAuthorized.rawValue)
    if err != nil && (err! as NSError).code == notauth {
        self.isAuthorized = false
    } else {
        self.isAuthorized = true
    }
}

On the first run of that code, the system puts up the authorization request alert. The completion function is not called until the user deals with the alert, so the outcome tells you what the user decided. On subsequent runs, that code reports the current authorization status.

There are two approaches to querying the activity manager:

Real-time updates

This is similar to getting motion manager updates with a callback function. You call this method:

  • startActivityUpdates(to:withHandler:)

Your callback function is called periodically. When you no longer need updates, call stopActivityUpdates. (A minimal sketch appears after this list.)

Historical data

The motion coprocessor records about a week’s worth of data. You ask for a chunk of that recorded data by calling this method:

  • queryActivityStarting(from:to:to:withHandler:)
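
Here is a minimal sketch of the real-time form; the confidence and walking checks are purely illustrative:

guard CMMotionActivityManager.isActivityAvailable() else { return }
self.actman.startActivityUpdates(to: .main) { act in
    guard let act = act else { return }
    if act.confidence == .high && act.walking {
        print("the user seems to be walking")
    }
}
// call self.actman.stopActivityUpdates() when no longer needed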

I’ll illustrate querying for historical data. In this example, I fetch the data for the past 24 hours. I have prepared an OperationQueue property, self.queue:

let now = Date()
let yester = now - (60*60*24)
self.actman.queryActivityStarting(
    from: yester, to: now, to: self.queue) { arr, err in
        guard var acts = arr else {return}
        // ...
}

We now have an array of CMMotionActivity objects representing every change in the device’s activity status. This is a value class. It has a startDate, a confidence (a CMMotionActivityConfidence, .low, .medium, or .high) ranking the activity manager’s faith in its own categorization of what the user was doing, and a bunch of Bool properties actually categorizing the activity:

  • stationary

  • walking

  • running

  • automotive

  • cycling

  • unknown

A common first response to the flood of data is to pare it down (sometimes referred to as smoothing). To help with this, I’ve extended CMMotionActivity with a utility method that summarizes its Bool properties as a string:

extension CMMotionActivity {
    private func tf(_ b:Bool) -> String {
        return b ? "t" : "f"
    }
    func overallAct() -> String {
        let s = tf(self.stationary)
        let w = tf(self.walking)
        let r = tf(self.running)
        let a = tf(self.automotive)
        let c = tf(self.cycling)
        let u = tf(self.unknown)
        return "(s) (w) (r) (a) (c) (u)"
    }
}

So, as a straightforward way of paring down the data, I remove every CMMotionActivity with no definite activity, with a low degree of confidence, or whose activity is the same as its predecessor. Then I set a property, and my data are ready for use:

let blank = "f f f f f f"
acts = acts.filter {act in act.overallAct() != blank}
acts = acts.filter {act in act.confidence == .high}
for i in (1..<acts.count).reversed() {
    if acts[i].overallAct() == acts[i-1].overallAct() {
        acts.remove(at:i)
    }
}
DispatchQueue.main.async {
    self.data = acts
}

There is also a CMPedometer class; before using it, check the isStepCountingAvailable class method. Some devices can deduce the size of the user’s stride and compute distance (isDistanceAvailable); some devices can use barometric data to estimate whether the user mounted a flight of stairs (isFloorCountingAvailable). Starting in iOS 9, you can also ask for instantaneous cadence (isCadenceAvailable) and pace (isPaceAvailable). Pedometer data is queried just like motion activity data; you can either ask for constant updates or you can ask for the stored history. Each bit of data arrives as a CMPedometerData object. The pedometer may work reliably under circumstances where Core Location doesn’t.
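
Here is a minimal sketch of a historical pedometer query covering the past 24 hours; the pedometer property and method name are my own:

let pedometer = CMPedometer()
func fetchSteps() {
    guard CMPedometer.isStepCountingAvailable() else { return }
    let now = Date()
    let yester = now - (60*60*24)
    self.pedometer.queryPedometerData(from: yester, to: now) { data, err in
        guard let data = data else { return }
        print("steps in the last day: \(data.numberOfSteps)")
        if let dist = data.distance { // nil if distance is unavailable
            print("estimated distance: \(dist) meters")
        }
    }
}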
