Chapter 6. Monitoring events and actions

 

This chapter covers

  • The SDK’s event modeling
  • How events and actions differ
  • Creating simple event- and action-driven apps

 

In the previous chapter, you learned how to create the basic view controllers that fulfill the controller role of an MVC architectural model. You’re now ready to start accepting user input, because you can send users to the correct object. Users can interact with your program in two ways: by using the low-level event model or by using event-driven actions. In this chapter, you’ll learn the difference between the two types of interactions and how to implement them. Then we’ll look at notifications, a third way that your program can learn about user actions.

Of these three models, events provide the lowest-level detail and ultimately underlie everything else (they’re essential for sophisticated programs), so we’ll begin with events.

6.1. An introduction to events

We briefly touched on the basics of event management in chapter 2. But as we said at the time, we wanted to put off a complete discussion until we could cover events in depth; we’re now ready to tackle that job.

The fundamental unit of user input is the touch: a user puts a finger on the screen. A touch may be part of a multitouch sequence or a gesture, but the touch remains the building block on which everything else is constructed. It’s the basic unit that we’ll examine in this chapter. In this section, we’ll look at how touches and events are related. Let’s start by examining the concept of a responder chain.

6.1.1. The responder chain

When a touch occurs in an SDK program, you have to worry about what responds to the event. That’s because SDK programs are built of tens—perhaps hundreds—of different objects. Almost all of these objects are subclasses of the UIResponder class, which means they contain all the functionality required to respond to an event. What gets to respond?

The answer is embedded in the concept of the responder chain. This is a hierarchy of different objects that are each given the opportunity, in turn, to answer an event message.

Figure 6.1 shows an example of how an event moves up the responder chain. It starts out at the first responder of the key window, which is typically the view where the event occurred—where the user touched the screen. As we’ve already noted, this first responder is probably a subclass of UIResponder—which is the class reference you’ll want to look to for a lot of responder functionality.

Figure 6.1. Events are initially sent to the first responder but then travel up the responder chain until they’re accepted.

Any object in the chain may accept an event and resolve it; when that doesn’t occur, the event moves farther up the list of responders. From a view, an event goes to its superview, then to that superview’s own superview, until it eventually reaches the UIWindow object, which is the superview of everything in your application. It’s useful to note that from the UIWindow downward, the responder chain is the view hierarchy turned on its head; when you’re building hierarchies, they do double duty.

Although figure 6.1 shows a direct connection from the first responder to the window, there can be any number of objects in this gap in a real-world program.
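If you’re curious what the chain looks like in a program of your own, you can walk it manually. The following is a sketch only: logResponderChainFromView: is a debugging helper of our own invention, not an SDK method.

```objc
// Hypothetical debugging helper (not part of the SDK): logs every object in
// the responder chain, starting from the given view and walking upward via
// nextResponder until the chain runs out.
- (void)logResponderChainFromView:(UIView *)view {
    UIResponder *responder = view;
    while (responder != nil) {
        NSLog(@"%@", NSStringFromClass([responder class]));
        responder = [responder nextResponder];
    }
}
```

In a simple project, this typically prints the view’s own class, its superviews, the UIWindow, and finally the UIApplication, mirroring the chain shown in figure 6.1.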

Often, the normal flow of the responder chain is interrupted by delegation. A specific object (usually a view) delegates another object (usually a view controller) to act for it. You already saw this put to use in your table view in chapter 5, but you now understand that delegation occurs as part of the normal movement up the responder chain.

 

First responders and keyboards

Before we leave the topic of responders, we’d like to mention that the first responder is an important concept. Because this first responder is the object that can accept input, it sometimes takes a special action to show its readiness for input. This is particularly true for text objects like UITextField and UITextView, which (if editable) pop up a keyboard when they become the first responder. This has two immediate consequences.

If you want to pop up a keyboard for the text object, you can do so by turning it into the first responder:

[myText becomeFirstResponder];

Similarly, if you want to get rid of a keyboard, you must tell your text object to stop being the first responder:

[myText resignFirstResponder];

We’ll discuss these ideas more when you encounter your first editable text object toward the end of this chapter.

 

If an event gets all the way up through the responder chain to the window and the window can’t deal with it, the event moves up to the UIApplication, which most frequently punts the event to its own delegate: the application delegate, an object that you’ve been using in every program to date.

Ultimately, you, the programmer, must decide what in the responder chain will respond to events in your program. You should keep two factors in mind when you make this decision: how classes of events can be abstracted together at higher levels in your chain, and how you can build your event management using the concepts of MVC.

At the end of this section, we’ll address how you can subvert this responder chain by further regulating events, but for now let’s build on its standard setup.

6.1.2. Touches and events

Now that you know a bit about how events find their way to the appropriate object, we can dig into how they’re encoded by the SDK. First, we want to offer a caveat: usually you won’t need to worry about this level of detail because the standard UIKit objects generally convert low-level events into higher-level actions for you, as we discuss in the second half of this chapter. With that said, let’s look at the nuts and bolts of event encoding.

The SDK abstracts events by combining a number of touches (which are represented by UITouch objects) into an event (which is represented by a UIEvent object). An event typically begins when the first finger touches the screen and ends when the last finger leaves the screen. In addition, it should generally include only those touches that happen in the same view.

In this chapter, you’ll work mainly with UITouches (which make it easy to parse single-touch events) and not with UIEvents (which are more important for parsing multitouch events). Let’s lead off with a more in-depth look at each.

UITouch Reference

A UITouch object is created when a finger is placed on the screen, moves on the screen, or is removed from the screen. A handful of properties and instance methods can give you additional information on the touch, as detailed in table 6.1.

Table 6.1. Additional properties and methods can tell you precisely what happened during a touch event.

| Method or property      | Type     | Summary                                                                              |
|-------------------------|----------|--------------------------------------------------------------------------------------|
| phase                   | Property | Returns a touch phase constant, which indicates whether the touch began, moved, ended, or was canceled |
| tapCount                | Property | The number of times the screen was tapped                                            |
| timestamp               | Property | When the touch occurred or changed                                                   |
| view                    | Property | The view where the touch began                                                       |
| window                  | Property | The window where the touch began                                                     |
| locationInView:         | Method   | The current location of the touch in the specified view                              |
| previousLocationInView: | Method   | The previous location of the touch in the specified view                             |

Together, the methods and properties shown in table 6.1 offer considerable information about a touch, including when and how it occurred.

Only the phase property requires additional explanation. It returns a constant that can be set to one of five values: UITouchPhaseBegan, UITouchPhaseMoved, UITouchPhaseStationary, UITouchPhaseEnded, or UITouchPhaseCancelled. You’ll often want to have different event responses based on exactly which phase a touch occurred in, as you’ll see in the event example.
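As an illustration of how you might branch on these constants, here’s a sketch of a helper that reports each phase; the method name and log messages are our own invention, not part of the example project.

```objc
// Illustrative only: branch on a touch's phase after collecting touches
// from any of the responder methods discussed later in this section.
- (void)reportTouch:(UITouch *)touch {
    switch (touch.phase) {
        case UITouchPhaseBegan:
            NSLog(@"Began, tap count %d", (int)touch.tapCount);
            break;
        case UITouchPhaseMoved:
            NSLog(@"Moved to %@",
                  NSStringFromCGPoint([touch locationInView:touch.view]));
            break;
        case UITouchPhaseEnded:
            NSLog(@"Ended");
            break;
        case UITouchPhaseCancelled:
            NSLog(@"Cancelled");
            break;
        default:
            break;
    }
}
```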

UIEvent Reference

To make it easy to see how individual touches occur as part of more complex gestures, the SDK organizes UITouches into UIEvents. Figure 6.2 shows how these two sorts of objects interrelate.

Figure 6.2. UIEvent objects contain a set of related UITouch objects.

Just as with the UITouch object, the UIEvent object contains a number of properties and methods that you can use to figure out more information about your event, as described in table 6.2.

Table 6.2. The encapsulating event object has a number of methods and properties that let you access its data.

| Method or property | Type     | Summary                                        |
|--------------------|----------|------------------------------------------------|
| timestamp          | Property | The time of the event                          |
| allTouches         | Method   | All event touches associated with the receiver |
| touchesForView:    | Method   | All event touches associated with a view       |
| touchesForWindow:  | Method   | All event touches associated with a window     |

The main use of a UIEvent method is to give you a list of related touches that you can break down by several means. If you want to get a list of every touch in an event, or if you want to specify just gestures on a certain part of the screen, then you can do that with UIEvent methods. This ends our discussion of event containers in this chapter.

Note that all of these methods compact their touches into an NSSet, which is an object defined in the Foundation framework. You can find a good reference for NSSet at Apple’s developer resources site.
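For instance, you might pull out just the touches belonging to one view and enumerate the resulting set with fast enumeration. This is a sketch; viewOfInterest is a placeholder for one of your own views, and the snippet assumes you’re inside a responder method that received an event argument.

```objc
// Illustrative: extract the touches belonging to a single view from an
// event, then walk the resulting NSSet with for...in enumeration.
NSSet *viewTouches = [event touchesForView:viewOfInterest];
for (UITouch *touch in viewTouches) {
    NSLog(@"Touch in phase %d at %@", (int)touch.phase,
          NSStringFromCGPoint([touch locationInView:viewOfInterest]));
}
```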

The Responder Methods

How do you access touches and/or events? You do so through a series of four different UIResponder methods, which are summarized in table 6.3.

Table 6.3. The UIResponder methods are the heart of capturing events.

| Method                      | Summary                                                                                      |
|-----------------------------|----------------------------------------------------------------------------------------------|
| touchesBegan:withEvent:     | Reports UITouchPhaseBegan events when fingers touch the screen                               |
| touchesMoved:withEvent:     | Reports UITouchPhaseMoved events when fingers move across the screen                         |
| touchesEnded:withEvent:     | Reports UITouchPhaseEnded events when fingers leave the screen                               |
| touchesCancelled:withEvent: | Reports UITouchPhaseCancelled events when the phone is put up to your head, or other events that might cause an external cancellation |

Each of these methods has two arguments: an NSSet of touches that occurred during the phase in question and a UIEvent that provides a link to the entire event’s worth of touches. You can choose to access either one, as you prefer; as we’ve said, we’ll be playing with the bare touches. We’re now ready to dive into an example that demonstrates how to capture touches in a real-life program.

6.2. A touching example: the event reporter

The sample application for events is an event reporter, which offers a variety of responses depending on how and when the device screen is touched. The sample program has two goals.

First, we want to show you a cool and simple application that you can write using events—one that should get you thinking about everything you can do.

Second, we want to show some of the low-level details of how events work in a visual form. If you take the time to code and compile this program, you’ll gain a better understanding of how the various phases work as well as how tapping works.

You’ll kick off this development process by creating a project named eventreporter that uses the View-Based Application template. That means you’ll start with a view controller already in place. We’ll also use this example to show how an MVC program can be structured.

6.2.1. Setting things up in Interface Builder

For this program, you’ll create three new objects: two button-shaped objects that float around the screen to mark the beginning and end of touches, plus a status bar to go at the bottom of the screen and describe a few other events when they occur.

Because you want all your new objects to lie beneath the view controller in the view hierarchy, you call up the view controller’s own .xib file, eventreporterViewController.xib. As usual, you’ll add your new objects to the Main Display window that represents the view controller’s view.

All this work is graphical, so we can’t show the code of this programming process. But we’ve included a quick summary of the actions you should take (the results are shown in figure 6.3):

  • Set the background color of the UIView to an attractive aluminum color. You do this in the Attributes Inspector, as you do most of your work in this project.
  • Create a UILabel, stretch it across the bottom of the screen, and set the color to be steel. Also, clear its text so it doesn’t display anything at startup.
  • Create two UITextFields. This class of objects is generally used to accept input, but we opted to use the objects for pure display purposes because we like their look. (Don’t worry; we’ll show how to use the full functionality of a UITextField toward the end of this chapter.)
  • Place each UITextField at the center of the screen using the handy positioning icons as guides. Set this location’s coordinates to 159, 230; and set its origin to middle.
  • For each UITextField, input text that lists its starting position; this will later be updated by the program as the text field moves. Deselect the user interaction–enabled option for each UITextField so that users can’t manipulate them.
Figure 6.3. Two UITextFields (one of them hidden) and one UILabel, set against an aluminum-colored background on the iPhone, complete the object creation you need for your eventreporter project.

The process takes longer to explain than it takes to accomplish. You’ll have a working interface in a couple of minutes.

Because you’ll modify all three of these objects during the course of your program’s runtime, you need to link them to variables. You should link everything to your controller, because it takes care of updates, as is appropriate under the MVC model.

The tricky thing here is that the view controller doesn’t seem to appear in your eventreporterViewController.xib file—at least not by that name. Fortunately, there’s a proxy for it. Because the view controller is what loads up the .xib, it appears as the file’s owner in the nib document window. You can therefore connect objects to the view controller by linking them to the file’s owner proxy. This is a common situation, because view controllers frequently load additional .xib files for you.

The following is your view controller’s header file, eventreporterViewController.h, following the addition of these IBOutlets. The code also contains a declaration of a method that you’ll use later in this project:

@interface eventreporterViewController : UIViewController {
    IBOutlet UITextField *startField;
    IBOutlet UITextField *endField;
    IBOutlet UILabel *bottomLabel;
}
- (void)manageTouches:(NSSet *)touches;
@end

To finish this process, connect your interface objects to the IBOutlets, using the procedures described in chapter 4.

6.2.2. Preparing a view for touches

Touch events can be captured only by UIView objects. Unfortunately, as of this writing, there’s no way to automatically delegate those touches to a view controller. Therefore, in order to manage touch events using the MVC model, you typically need to subclass a UIView, capture the events there, and then send messages to the view controller.

In this project, you create a new object class, reportView, which is a subclass of UIView. You then link that new class into the view controller’s existing view in Interface Builder. Open eventreporterViewController.xib, go to the Identity tab for the view object you’ve been using, and change its class from UIView to reportView, as you did in chapter 5 when you created a table view controller subview.

Any new methods you write into reportView, including methods that capture touch events, will now be reflected in your view. To clarify this setup, figure 6.4 shows the view hierarchy that you’ve built for your eventreporter project.

Figure 6.4. You’ll connect six objects that you’ll use to report events.

With a brand-new UIView subclass in hand, you can now write methods into it to capture touch events and forward them to its controller. This code, which appears in reportView.m, is as follows:

- (void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.nextResponder manageTouches:touches];
}
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.nextResponder manageTouches:touches];
}
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    [self.nextResponder manageTouches:touches];
}

This code is pretty simple. You’re filling in standard methods so that your program will have the responses you want when those messages are sent. The overall structure of these methods reminds us of several important facts about events.

First, as promised, there are a variety of responder methods. Each of them reports only the events for its specific phase. So, for example, the touchesBegan:withEvent: method only has UITouchPhaseBegan touches in it. In forwarding these touches, you could keep the different phases distinct, but instead you throw everything together and sort it out on the other side.

Second, we’ll comment one final time that these methods send you two pieces of information: a set of touches and an event. They’re partially redundant, and which one you work with will probably depend on the work you’re doing. If you’re not doing complex multitouch events, then the NSSet of touches will probably be sufficient.

 

An aside on the text fields and label

If you were to run this example, you’d discover that the program correctly responds to touch events even when the touches occur atop one of the text fields or the label at the bottom of the page. How does the program manage that when you built event response into only the reportView?

The answer is this: it uses the responder chain. The text fields and the label don’t respond to the event methods themselves. As a result, the events are passed up the responder chain to the reportView, which does leap on those events, using the code you’ve just seen.

 

Third, note that you’re sending the touches to the view controller by way of the nextResponder method. As you’ll recall, the responder chain is the opposite of the view hierarchy at its lower levels, which means in this case the nextResponder of reportView is the UIViewController. We would have preferred to have the UIViewController naturally respond to the touch messages, but we made use of the responder chain in the next-best way. As of this writing, the compiler warns that nextResponder may not know about the manageTouches: method, but it will; you can ignore this warning.

You’ll see some other ways to use the nextResponder method toward the end of our discussion of events.

6.2.3. Controlling your events

Intercepting touches and forwarding them up to the view controller may be the toughest part of this code. After the events get to the view controller, they run through a simple method called manageTouches:, as in the following listing, which shows the view controller implementation file.

Listing 6.1. manageTouches, which accepts inputs and changes views
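The listing’s body isn’t reproduced here, but based on the discussion that follows, its core logic looks roughly like this sketch. The outlet names match the header file shown earlier; the exact statements and formatting in the book’s listing may differ.

```objc
// Sketch of the view controller's manageTouches: method, reconstructed
// from the surrounding description; treat the details as approximate.
- (void)manageTouches:(NSSet *)touches {
    for (UITouch *touch in touches) {
        CGPoint point = [touch locationInView:self.view];
        if (touch.phase == UITouchPhaseBegan) {
            // Move the Begin field to the touch and report its coordinates
            startField.center = point;
            startField.text = [NSString stringWithFormat:
                @"Begin: %.0f, %.0f", point.x, point.y];
        } else if (touch.phase == UITouchPhaseMoved) {
            bottomLabel.text = @"Touch is moving";
        } else if (touch.phase == UITouchPhaseEnded) {
            // Report multiple taps, if any, now that the finger is up
            if (touch.tapCount > 1) {
                bottomLabel.text = [NSString stringWithFormat:
                    @"Taps: %d", (int)touch.tapCount];
            } else {
                bottomLabel.text = @"";
            }
            endField.center = point;
            endField.text = [NSString stringWithFormat:
                @"End: %.0f, %.0f", point.x, point.y];
        }
    }
}
```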

Touches are sent as an NSSet, which can be broken apart in a number of ways, as described in the NSSet class reference. Here, you use a simple for ... in construction that lets you look at each touch in turn.

When you get a touch, the first thing you do is determine what phase it arrived in. Originally, you could have determined this information based on which method a touch arrived through, but because you combined everything, you have to fall back on the phase property. Fortunately, it’s easy to use. You match it up to one of three constants, and that determines which individual actions your program undertakes.

Having different responses based on the phase in which a touch arrives is common—which is why the event methods are split up in the first place. The example demonstrates this with some distinct responses: you move your start field when touches begin, you move your end field when touches end, and you update the bottom label in both the moved and ended phases.

In the UITouchPhaseBegan response, you delve further into your touches’ data by using the locationInView: method to figure out the precise coordinates where a touch occurred. You’re then able to use that data to reposition your text field and to report the coordinates in the text field. You later do the same thing in the UITouchPhaseEnded response.

Finally, you look at the tapCount in the UITouchPhaseEnded response. This is generally the best place to look at taps because the device now knows that the user’s finger has come off the screen. As you can see, it’s easy to both run a command based on the number of taps and to report that information.

Figure 6.5 shows the event responder in action. Imagine a finger that touches down on the space where the Begin text field is and that is currently moving across the screen.

Figure 6.5. Your event responder uses a few graphical elements to report events as they occur.

And with that, your event reporter is complete. In addition to illustrating how a program can respond to touches, we’ve highlighted how the MVC model can be used in a real application.

The project contains four views: a reportView, a UILabel, and two UITextFields. It’s tempting to process events in the reportView, especially because you had to create a subclass anyway, but instead you pushed the events up to the view controller and in doing so revealed why you want to do MVC modeling.

Because it takes on the controller role, you give the view controller access to all of its individual objects, and therefore you don’t have to try to remember what object knows about what other object. Tying things into the view controller, rather than scattering them randomly across your code, makes the project that much more readable and reusable, which is what most architectural and design patterns are about.

6.3. Other event functionality

Before we complete our discussion of events, we’d like to cover a few more topics of interest. We’ll explore how to regulate the report of events in a variety of ways and then describe some deficiencies in the event model.

6.3.1. Regulating events

As we mentioned earlier, there are some ways that you can modify how events are reported (and whether they are at all). As you’ll see, three different objects give you access to this sort of regulation: UIResponder, UIView, and UIApplication. We’ve listed all the notable options we’ll discuss in table 6.4.

Table 6.4. Properties in various objects allow for additional control of when events are monitored.

| Method or property             | Type                 | Summary                                                                              |
|--------------------------------|----------------------|--------------------------------------------------------------------------------------|
| nextResponder                  | UIResponder method   | Returns the next responder in the chain by default but can be modified               |
| hitTest:withEvent:             | UIView method        | Returns the deepest subview containing a point by default but can be modified        |
| exclusiveTouch                 | UIView property      | A Boolean set to NO by default; controls whether other views in the same window are blocked from receiving events |
| multipleTouchEnabled           | UIView property      | A Boolean set to NO by default; controls whether multitouches after the first are thrown out |
| beginIgnoringInteractionEvents | UIApplication method | Turns off touch event handling                                                       |
| endIgnoringInteractionEvents   | UIApplication method | Turns on touch event handling                                                        |
| isIgnoringInteractionEvents    | UIApplication method | Tells whether the application is ignoring touch events                               |

Because UIView is a subclass of UIResponder, you generally have access to the methods from both classes in most UIKit objects. You’ll need to do some additional work to access the UIApplication methods.

UIResponder Regulation

You’ve already seen that UIResponder is the source of the methods that let you capture events; as shown here, it’s also the home of the methods that control how the responder chain works.

Most of the responder chain–related methods aren’t directly used by your code; instead, they typically appear deep in frameworks. becomeFirstResponder and resignFirstResponder (which control who the first responder is) and canBecomeFirstResponder, canResignFirstResponder, and isFirstResponder (which return Booleans related to the information in question) all typically fall into this category.

The last UIResponder method, nextResponder, may be of use in your programs. As defined by UIResponder, nextResponder returns the next responder, per the normal responder chain. You used it in the example to pass your touches up.

If you want to change the normal order of the responder chain, you can do so by creating your own nextResponder function in a subclass. This new function overrides its parent method and allows your program to take a different path up your responder chain.
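A minimal sketch of such an override follows. The delegateResponder property is hypothetical; you would declare and set it yourself to whatever object you want the chain to visit next.

```objc
// Sketch: reroute the responder chain for this view. Events that the view
// doesn't handle go to delegateResponder (a hypothetical property you set
// up yourself) instead of following the normal superview chain.
- (UIResponder *)nextResponder {
    if (self.delegateResponder != nil) {
        return self.delegateResponder;
    }
    return [super nextResponder];
}
```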

UIView Regulation

When you move into the UIView class methods, you can take the opposite approach by overriding hitTest:withEvent:. This method is passed a CGPoint and an event, and by default it returns the deepest subview that contains the point. By writing a new method, you can cause your responder chain to start at a different point.
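For example, a container view might claim every touch for itself rather than letting a deeper subview become the starting point. This is a sketch of one possible override, not a pattern from the example project.

```objc
// Sketch: a container view that answers every hit test itself, so the
// responder chain starts at this view instead of at a deeper subview.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if ([self pointInside:point withEvent:event]) {
        return self;
    }
    return nil;   // The point is outside this view entirely
}
```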

The two UIView properties that we noted both work as you’d expect. exclusiveTouch declares that the view in question is the only one that can receive events (which is an alternative way you could have managed the eventreporter example, where you didn’t want anything but the reportView to accept events). Meanwhile, multipleTouchEnabled enables the reporting of multitouch events, which are otherwise ignored.

UIApplication Regulation

Finally, we come to the UIApplication methods. These lie outside the normal hierarchy of objects, and thus you can’t get to them from your view objects. Instead, you need to call them directly from the UIApplication object, as shown here:

[[UIApplication sharedApplication] beginIgnoringInteractionEvents];

As you may recall from chapter 3, sharedApplication is a UIApplication class method that provides a reference to the application object. Typically, you use its return as the receiver for the beginIgnoringInteractionEvents message.

Each of the three methods listed under UIApplication works as you’d expect when you know the secret to accessing them.
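A common pattern, sketched below, is to bracket some uninterruptible work with the begin and end calls; the work in the middle is a placeholder.

```objc
// Sketch: temporarily block all touch input, do some work that shouldn't
// be interrupted (an animation, say), then restore input handling.
[[UIApplication sharedApplication] beginIgnoringInteractionEvents];

// ... perform the work that should run without user interference ...

[[UIApplication sharedApplication] endIgnoringInteractionEvents];
```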

6.3.2. Other event methods and properties

We’ve spent a lot of time on events, but at the same time we’ve only scratched the surface. Events give you low-level access to user input, but you won’t use events much. Instead, you’ll use the device’s many control objects (and thus actions) in order to accept almost all user input.

As a result, this chapter offers you a compromise: a solid look at how events work that should suffice for those times when you do need to descend to touch management, but not all of the intricacies. The thing that we’ve most clearly left out is how to work with multitouch events. For that, we point you, as usual, to the Apple iOS SDK developer website. A good tutorial on multitouch events is available as part of the iOS Programming Guide; you should read it if you’re among the smaller percentage of developers—such as programmers creating games and novelties—who may need access to multitouches and more complex gestures.

6.4. An introduction to actions

If you won’t usually be programming directly with events, how will you access user input? The answer is by using actions. You’ll typically depend on preexisting text views, buttons, and other widgets to run your programs. When using these objects, you don’t have to worry about raw events. Instead, you can build programs around control events and actions that are generated by UIControls. Let’s look at the UIControl object first; then, we’ll examine the relationship between control events and actions and how to hook them up.

6.4.1. The UIControl object

When you were working with events, you found that the UIResponder class held many of the methods critical for event control. Similarly, you can access a lot of the methods important to SDK controls through the UIControl class.

UIControl is a child of UIView (and thus UIResponder). It’s the parent of important user interface controls such as UIButton, UISwitch, UIPageControl, UISegmentedControl, UISlider, and UITextField. It’s not used for some other control-looking objects such as UISearchBar, so you should check the Apple class references before trying to use its functionality. Also note that the higher-level UIControl class can’t be used on its own; it defines the common methods used by its children.

The UIControl class contains several properties that control its basic setup, such as enabled (which determines whether it’s on), highlighted (which determines its visual state), and selected (which sets Boolean state for appropriate sorts of controls, such as switches). You can also directly access a control’s touch events with beginTrackingWithTouch:withEvent:, continueTrackingWithTouch:withEvent:, and endTrackingWithTouch:withEvent:, methods that work in a similar way to the event-response functions you played with in UIResponder. But you won’t be using these methods, because they don’t offer the simple advantages that you get from control objects’ higher-level event handling. For that, we turn to UIControl’s action-target mechanism.
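For completeness, here’s a sketch of what overriding the tracking methods in a UIControl subclass looks like; the log messages are illustrative, and as noted, you’ll rarely need this level of access.

```objc
// Sketch: a UIControl subclass observing its own low-level tracking.
// Shown only to illustrate where these methods fit; the action-target
// mechanism described next is what you'll normally use instead.
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    NSLog(@"Tracking began at %@",
          NSStringFromCGPoint([touch locationInView:self]));
    return YES;   // YES means "keep sending me tracking updates"
}

- (void)endTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
    NSLog(@"Tracking ended");
}
```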

6.4.2. Control events and actions

The UIControl object introduces a new event-handling infrastructure that takes touch events of the sort that you might have directly handled in the previous section and (eventually) converts them into simple actions, without your having to worry about the specifics of how a user accessed a control. The complete sequence of events is outlined in figure 6.6.

Figure 6.6. UIControl objects take standard touch events and turn them into actions that are dispatched by UIApplication.

When a touch event arrives at a UIControl object (via normal dispatching along the responder chain), the control does something unique. Inside the standard UIResponder methods that you used in the previous section (such as touchesBegan:withEvent:), a UIControl object turns standard touch events into special control events.

These control events broadly describe how the user has interacted with the controls rather than recording gestures. For example, they may report that a button has been pushed or a slider moved. They’re divided into three categories: touch events, editing events, and a slider event. The touch events describe how a user’s finger interacted with the control, the editing events describe changes to a UITextField, and the UIControlEventValueChanged event describes changes to a UISlider.

These control events are all enumerated in a bitmask that’s defined in the UIControl object. An almost-complete listing of them—including some composite control events—can be found in table 6.5. We’ve left out only a few reserved values.

Table 6.5. UIControl objects recognize a number of special events.

| Value                             | Summary                                                        |
|-----------------------------------|----------------------------------------------------------------|
| UIControlEventTouchDown           | A finger touch.                                                |
| UIControlEventTouchDownRepeat     | A repeated finger touch (with tapCount > 1).                   |
| UIControlEventTouchDragInside     | A finger movement ending inside the control.                   |
| UIControlEventTouchDragOutside    | A finger movement ending just outside the control.             |
| UIControlEventTouchDragEnter      | A finger movement that enters the control.                     |
| UIControlEventTouchDragExit       | A finger movement that exits the control.                      |
| UIControlEventTouchUpInside       | A finger removed from the screen inside the control.           |
| UIControlEventTouchUpOutside      | A finger removed from the screen outside the control.          |
| UIControlEventTouchCancel         | A system event canceled a touch.                               |
| UIControlEventValueChanged        | A slider (or other similar) object changed its value.          |
| UIControlEventEditingDidBegin     | Editing began in a UITextField.                                |
| UIControlEventEditingChanged      | Editing changed in a UITextField.                              |
| UIControlEventEditingDidEnd       | Editing ended in a UITextField due to a touch outside the object. |
| UIControlEventEditingDidEndOnExit | Editing ended in a UITextField due to a touch.                 |
| UIControlEventAllTouchEvents      | Composite for all the touch-related events.                    |
| UIControlEventAllEditingEvents    | Composite for the editing-related events.                      |
| UIControlEventAllEvents           | Composite for all events.                                      |

After a standard event has been turned into a control event, a sequence of additional methods is called, as shown in figure 6.6. First, the UIControl object calls sendActionsForControlEvents:. That in turn breaks down the events it’s been sent and calls sendAction:to:forEvent: once per event. Here, the control event is turned into an action, which is a specific method that’s going to be run in a specific target object. Finally, the UIApplication method sendAction:to:from:forEvent: is called by the control, again once per event.

This is another situation where the application object does big-picture controlling of messaging. The application sends the action to the target object. But there’s one catch: if the target that the action is being sent to has been listed as nil, the action is sent to the first responder instead and from there moves up the responder chain.

That process can be slightly exhausting, and fortunately you shouldn’t normally need to know its details. For your purposes, you should be aware that a UIControl object turns a touch event first into a control event and then into an action with a specific recipient. Even better, you need to code only the last part of that conversion, from control event into targeted action.

6.4.3. Using addTarget:action:forControlEvents:

A UIControl object maintains an internal dispatch table that correlates control events with target-action pairs. This table says which method should be run by which object when a specified event occurs. You can add entries to this table with the UIControl object’s addTarget:action:forControlEvents: method. The following example shows how it works:

[controlObject addTarget:recipientObject action:@selector(method)
    forControlEvents:UIControlEvents];

The first argument, addTarget:, says where the message will be sent. It’s frequently set to self, which usually refers to a view controller that instantiated the control object.

The second argument, action:, is the trickiest. First, note that it uses the @ syntax that we mentioned in chapter 2. The selector should identify the name of the method that’s going to be run in the target object. Second, be aware that you can send the action argument either without a colon (method) or with a colon (method:). In the latter case, the ID of the controlObject is sent as an argument. Be sure your receiving method is correctly defined to accept an argument if you include that colon in your selector.
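For example, the two selector forms correspond to two different method signatures in the target object. The method names here are hypothetical, not SDK identifiers:

// Matches @selector(method) -- no colon, so no argument is passed.
- (void)method {
    NSLog(@"Control activated.");
}

// Matches @selector(method:) -- the colon means the control itself
// (the sender) is passed as an argument.
- (void)method:(id)sender {
    NSLog(@"Control %@ activated.", sender);
}

If the selector and the signature don’t match, the mismatch won’t show up until the control fires at runtime, which is why it’s worth double-checking that colon.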

The third argument, forControlEvents:, is a bitmasked list of possible control events, taken from table 6.5.

With all these puzzle pieces in place, you’re ready to write some code that uses actions (and this method). As a simple example, you’ll expand the functionality to your event reporter by adding a reset button.

6.5. Adding a button to an application

The simplest use of an action is probably adding a button to an application and then responding to the press of that button. As you’ll see, this turns out to be a lot easier than digging through individual touches.

We’ve opted to show you how to work with a button in two ways: first by using the addTarget:action:forControlEvents: method that we just introduced and then visually by using an IBAction declaration.

Both of these examples begin with your existing eventreporter program. You’ll add a simple UIButton to it visually. Place the button atop the label at the bottom of your page and use the Attributes tab to label it Reset. With it in place and defined, it’s ready to be linked into your program in one of two ways.

Both examples will call a method called resetPage:, which restores the three changeable objects in your eventreporter to their default states. It’s in eventreporterViewController.m, and as you can see it’s entirely elementary:

- (void)resetPage:(id)sender {
    startField.text = @"Begin: 159,230";
    startField.center = CGPointMake(159,230);
    endField.text = @"End: 159,230";
    endField.center = CGPointMake(159,230);
    bottomLabel.text = [NSString string];
}

We can now look at the two ways you can call this method.

6.5.1. Using addTarget:action:forControlEvents: with a button

On the one hand, you may wish to add actions to your button programmatically. This could be the case if you created your button from within Xcode or if you created your button visually but want to change its behavior during runtime.

Your first step is bringing your button into Xcode. If you created your button visually, as we suggested earlier, you need to create an IBOutlet for the button, which should be old hat by now. If you didn’t create your button visually, you can create it programmatically in Xcode. This probably means using the factory class method buttonWithType:, which lets you create either a rounded rectangle button or one of a few special buttons, such as the info button. Either way, you should now have a button object available in Xcode.

Your second step is to send the addTarget:action:forControlEvents: message as part of your application’s startup. Assuming that you’re having your view controller manage the button’s action, this message should be sent from the view controller’s loadView method (if your controller was created in Xcode) or in its viewDidLoad method (if you created the controller in Interface Builder).

Here’s what the viewDidLoad method of your view controller looks like when applied to a button called myButton:

- (void)viewDidLoad {
    [super viewDidLoad];
    [myButton addTarget:self action:@selector(resetPage:)
        forControlEvents:UIControlEventTouchUpInside];
}

This real-life example of addTarget:action:forControlEvents: looks much like the sample in the previous section. You’re sending a message to your button that tells it to send the view controller a resetPage: message when the user takes their finger off the screen while touching the button.

That single line of code is all that’s required; from there on out, your button will connect to your resetPage: method whenever it’s pushed (and released).

6.5.2. Using an IBAction with a button

The other way you can link up actions to methods is to do everything visually. This is the preferred choice if you’ve created your object visually (as we’ve suggested) and you’re not planning to change its behavior at runtime.

When you use this procedure, you don’t need to make your button into an IBOutlet. It’s effectively invisible from Xcode, which is fine, because all you care about is what happens when the button is pushed. You also don’t use the somewhat complex addTarget:action:forControlEvents: method that we just ran through; instead, you connect things via intuitive visual means.

For the purposes of this example, start with a clean slate: with a button freshly crafted inside the interface pane and no connections yet built.

To link an interface object to an action, you must declare the method you’re using as having a return of IBAction. This means adding the following declaration to the header file of your view controller:

- (IBAction)resetPage:(id)sender;

The implementation of the method should share the same return.

Afterward, you can go into Interface Builder and create a connection, as shown in figure 6.7.

Figure 6.7. With an IBAction, there’s no code, just a link.

As shown, when you’re connecting a control, you’re given access to the entire palette of possible control events. You select the one (or ones) that you want to connect to IBActions, and then you drag over to the top-level object containing your IBAction. In this case, that’s once again the file’s owner object, which represents your view controller. As usual, a menu pops up, this time showing possible IBActions to which you can link your control event.

The results are almost magical. With that single graphical link, you replace the addTarget:action:forControlEvents: call and any code of any type. The button now links to the targeted action automagically.

What we’ve described so far covers the broad strokes of actions; everything else lies in the details. If we spent less time on actions than events, it’s not because actions are less important than events, but because they’re a lot simpler.

From here on, your challenge in using controls will be figuring out how individual controls work. See appendix A for an overview of classes and the Apple Class References for specifics. But there are a few controls that we’d like to give more attention to because they vary from the norm.

6.6. Other action functionality

In this section we’ll look at two controls that report back different signals than the simple button-up or button-down control events. The first is the UITextField, the prime control for entering text, and the second is the relatively simple (but unique) UISlider. In the process, we’ll also explore the other text-based entry formats, because they share some unique issues with UITextField.

6.6.1. Accepting text input with UITextField

You have four ways to display pure text in the SDK: the UILabel, the UISearchBar, the UITextView, and the UITextField. Each has a slightly different purpose. The UILabel and the UISearchBar are intended for short snippets of text; the UITextView is intended for multiple lines. Each of those text objects except the UILabel is editable, but only the UITextField is a UIControl subclass with its own control events already defined.

If the UITextField sounds familiar, that’s because you used it in the eventreporter example. If you go back and look at the screenshots, you’ll see that the Begin and End buttons are displayed in ovals that look a lot like input boxes. As we mentioned at the time, we liked the way they looked, but they also gave us a good excuse to familiarize you with the object without getting into its details.

A UITextField is intended mainly for accepting short user input. The trickiest thing about using one is getting it to relinquish control of your device after you call up a keyboard. The following code shows the two steps needed to resolve this problem. We’re assuming that you’re working with a myText UITextField object created visually and instantiated inside a view controller:

- (void)viewDidLoad {
    myText.returnKeyType = UIReturnKeyDone;
    [super viewDidLoad];
}
- (BOOL)textFieldShouldReturn:(UITextField *)textField {
  [textField resignFirstResponder];
  return YES;
}

Your setup of an interface object begins, pretty typically, inside its controller’s viewDidLoad method. Here you turn the text field keyboard’s Return key into a bright blue Done key, to make it clear that’s how you get out. You accomplish this by using part of the UITextInputTraits protocol, which defines a couple of common features for objects that use keyboards.

To do anything else, you need to declare a delegate for the UITextField that follows the UITextFieldDelegate protocol. This can be done either by setting the text field’s delegate property programmatically or by drawing a delegate link visually. (This sample code presumes you’ve taken the easier solution of doing so visually.) After you’ve done that, you can modify the textFieldShouldReturn: delegate method. We’re assuming that the view controller has been set as the delegate, which would be typical, and which allows you to do this work in the same view controller class file.

Finally, you enter two standard lines of code into this delegate method. They tell the text field to let go of first-responder status (which, as we’ve previously noted, is what’s necessary to make a keyboard go away) and return a YES Boolean.

With this code in place, a user can get in and out of a UITextField. To use the text field afterward, you need to monitor the text field’s special control events (especially UIControlEventEditingDidEnd) and also look at its text property.

In a moment, we’ll provide a sample of how that works. First, let’s examine a few other text objects that aren’t controls but that you might use to accept text entry.

UILabel

The UILabel isn’t user editable.

UISearchBar

The UISearchBar looks an awful lot like a UITextField with some nuances, such as a button to clear the field and a bookmark button. Despite the similarities in style, the UISearchBar isn’t a UIControl object but instead follows an entirely different methodology.

To use a UISearchBar, set its delegate to be the object of your choice, likely your view controller. Then, respond to the half-dozen messages described in UISearchBarDelegate. The most important of these is the searchBarSearchButtonClicked: method. Be sure to call resignFirstResponder in order to clear the keyboard; then, you can take actions based on the results. There’s an example of a UISearchBar in chapter 9, section 9.2.3.

UITextView

A UITextView works like a UITextField, except that it allows users to enter many lines of text. The biggest gotcha here is that you can’t use the Return key as your Done button, because users will likely want to press Return within their text. Instead, you must have a Done button somewhere near the top of your screen, where it can be seen when the keyboard is up. When that button is clicked, you can tell the text view to resignFirstResponder. Beyond that, you must set the UITextView’s delegate property; then you can watch for delegate messages, most importantly textViewDidEndEditing:.
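That Done-button pattern amounts to a pair of short methods, sketched here; myTextView and doneEditing: are assumed names, not SDK identifiers:

// Action wired to a Done button near the top of the screen.
- (IBAction)doneEditing:(id)sender {
    [myTextView resignFirstResponder];
}

// UITextViewDelegate method, called once editing has finished.
- (void)textViewDidEndEditing:(UITextView *)textView {
    NSLog(@"Final text: %@", textView.text);
}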

With the quick digression into this variety of text objects out of the way, we can now return to the other UIControl object that we wanted to discuss: UISlider.

6.6.2. Allowing value selection with UISlider

The slider is a simple object, but we’ve singled it out because it’s the one other class that has its own control event, UIControlEventValueChanged. If you target this event, you’ll find that it gets called whenever the slider moves, but the control event won’t tell you what the new value is. To get that information, your action method must query the slider’s properties.

Three properties are of particular note: value shows a slider’s current value, minimumValue shows the bottom of its scale, and maximumValue shows the top of its scale. You can use value without modification if you’ve set your slider to return a reasonable number (as described in the class reference); or if you prefer, you can use all three properties together to determine the percentage that the slider is moved over—which is what you’ll do in one final control example.
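That percentage calculation is a one-liner inside whatever action method you’ve targeted; mySlider here is an assumed outlet name:

// Normalize the slider's position to a 0-1 range, regardless of
// the minimum and maximum values it was configured with.
float percent = (mySlider.value - mySlider.minimumValue) /
    (mySlider.maximumValue - mySlider.minimumValue);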

6.6.3. A TextField/Slider mashup

Because we want to examine two UIControl objects more closely, it makes sense to quickly mash up an example that takes advantage of both of them. You’ll do this in the View-Based RGB Application, which sets the background color of a view based on the word you type into a UITextField and the selected position of a UISlider.

As usual, you create all of these objects visually. Then, go hog wild linking objects to your view controller. In all, you should create five links: an outlet each for your text field and slider, an action link for the important text field and the slider events, and a delegate link for the text field. Figure 6.8 shows what the view controller’s Connections tab looks like after these have all been done.

Figure 6.8. A heavily connected view controller will be a pretty normal sight as you gain experience in creating objects visually.

As shown, the actions from both of the controls link into a single method, called changeColor:. Whenever either control is changed, this method adjusts the color of the screen accordingly. The following listing shows how.

Listing 6.2. Accessing a text field and a slider
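A minimal sketch of such a changeColor: method, assuming myText and mySlider outlets and a small set of recognized color words, might look like this (the specific color words and logic are illustrative, not the book’s exact listing):

- (IBAction)changeColor:(id)sender {
    // Normalize the slider to a 0-1 intensity value.
    float percent = (mySlider.value - mySlider.minimumValue) /
        (mySlider.maximumValue - mySlider.minimumValue);
    // Map the typed word to a color at that intensity.
    UIColor *color = [UIColor blackColor];
    if ([myText.text isEqualToString:@"red"]) {
        color = [UIColor colorWithRed:percent green:0 blue:0 alpha:1.0];
    } else if ([myText.text isEqualToString:@"green"]) {
        color = [UIColor colorWithRed:0 green:percent blue:0 alpha:1.0];
    } else if ([myText.text isEqualToString:@"blue"]) {
        color = [UIColor colorWithRed:0 green:0 blue:percent alpha:1.0];
    }
    self.view.backgroundColor = color;
}

Because both controls target this one method, moving the slider or finishing an edit in the text field produces the same refresh.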

The hardest part of working with a UITextField is setting it up, which you did earlier. Now that you have input coming back, all you need to do is access the text property and do with it as you will.

Meanwhile, by working with your three slider values, you’re able to easily generate a value from 0 to 1. Putting that together with the color you generated from your text field input results in a background color that you can change in two ways. Figure 6.9 takes a final look at this new program.

Figure 6.9. A text field and a slider conspire to set the color of the iPhone’s background.

Would it be better to do this with a UISegmentedControl and a UISlider? Probably. But as is, it offers a quick example of how a text field works. Furthermore, it shows how you can combine action management by letting multiple controls point to a single method, a technique that will be useful in more complex programs.

As usual, more information about both of these controls is available in the Apple class references, including lots of methods and properties that we didn’t talk about.

6.6.4. Actions made easy

Throughout the latter half of this chapter, you’ve seen controls that are tied to the fully fledged target-action mechanism. In the next chapter, that will change when you see the same idea in a somewhat simplified form.

Sometimes, buttons or other controls are built into other classes of objects (such as the button that can be built into the navigation bar). These controls have special methods that allow them to automatically create a target-action pair. As a result, you don’t have to go through the nuisance of calling the addTarget:action:forControl-Events: method separately.

We’ll point out this technique when we encounter it as part of the navigation controller.

6.6.5. Actions in use

There are numerous control objects that we’ve opted not to cover here, mainly because they use the same general principles as those we’ve talked about. Nonetheless, they’ll remain an important factor throughout the rest of this book.

In particular, controls represent one of the main ways that users can offer input to your programs, and we’ll discuss them when we talk about data in chapter 9. We’ll also offer more complex programs that use a variety of controls from chapter 9 on. Through those examples, the majority of the UI controls will receive some coverage in this book.

6.7. Introducing notifications

As we mentioned in chapter 1, there’s one other way that a program can learn about events: through notifications. When directly manipulating events or actions, as you have throughout this chapter, individual objects receive events because the events occurred in their view, because the events occurred in a subview, or because the events occurred in a view that has been delegated to them.

Notifications step outside this paradigm. Now, an object registers to receive notice when certain events occur. These are often events that lie beyond the standard view hierarchy, such as information when a network connection closes or when the device’s orientation changes. Notably, these notifications are also broadcast messages: many different objects can be notified when the event occurs.

All notifications occur through the NSNotificationCenter. You don’t create this object; you access the shared instance through a class method:

[NSNotificationCenter defaultCenter]

Afterward, you may use the addObserver:selector:name:object: method to request a certain notification. The Observer: is the object that receives the notification method (usually, self), the selector: is the method that is called in the observer, name: is the name of the notification (which is in the class reference), and the object: can be used if you want to restrict which objects you receive notification from (but it’s usually set to nil).

For example, to receive the UIDeviceOrientationDidChangeNotification notification that we’ll talk about in chapter 10, you might use the following code:

[[NSNotificationCenter defaultCenter] addObserver:self
  selector:@selector(deviceDidRotate:)
  name:UIDeviceOrientationDidChangeNotification object:nil];

Overall, notification programming tends to have four steps:

  1. You learn that there’s a notification by reading the appropriate class reference (UIDevice in this case).
  2. You may need to explicitly turn on the notification (as is indeed the case for UIDeviceOrientationDidChangeNotification).
  3. You write a method that will respond to the notification (in this case, deviceDidRotate:).
  4. You connect the notification to the method with the NSNotificationCenter.

There is considerably more power in the notification system. Not only can you set up multiple observers, but you can also post your own notifications. If you want more information on these advanced features, you should read the class references on NSNotificationCenter, NSNotification, and NSNotificationQueue.
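Posting one of your own is a single call; the notification name here is a made-up example, not an SDK constant:

// Any registered observer of this name will now receive its selector,
// with self available as the notification's object.
[[NSNotificationCenter defaultCenter]
    postNotificationName:@"MyDataDidChangeNotification" object:self];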

6.8. Summary

iOS includes an extensive set of frameworks that take care of a lot of details for you, making your programming as painless as possible. You’ve seen this to date in everything you’ve done, as sophisticated objects appear onscreen with almost no work.

The same applies to the iPhone’s and iPad’s event system. Its complex underlying methodology centers on a responder chain and granular reporting of touches, allowing you to follow precise user gestures. You may occasionally have to manipulate events via these more complex means.

But the iPhone and iPad also support a higher-level iOS action system that lets programs respond to specific actions applied to controls rather than more freeform gestures. We’ve explained how to use both, but it’s the target-action mechanism that you’re more likely to rely on when programming.

With actions and events out of the way, we’re ready to look at the final fundamental building block of the SDK. We’ve already discussed views, controls, and basic view controllers, but another category of object is critical for most SDK programming: the advanced view controller that allows for navigation over multiple screens of content.

That’s the basis of the next chapter.
