We've given XNA plenty of opportunity to tell us what it is thinking. Now let's even things up and let it know what's going on inside our human brains. Input is, of course, an essential part of any game and will add interactivity and excitement to our projects.
Microsoft has made some fairly strict decisions about the inputs that can be featured on Windows Phone 7 devices. All devices will have a touch screen and this will naturally form the primary input mechanism that we will use for gaming. The screens all support multitouch and are required to be able to track a minimum of four simultaneous touch points.
In terms of buttons, devices must have exactly three front-facing buttons: a Back button, a Windows button, and a Search button. They all have predefined expected roles, and we must observe them if we want our application to be approved for publication. We cannot do anything at all in response to the Windows or Search buttons being pressed, and can in fact expect our game to disappear from view when either of them is used. The only button we can legitimately hook into is the Back button. If we are in a subscreen of a game (for example, an options page or a high-score table), this button must return to the previous game page (the main game). Otherwise, the game must exit when this button is pressed.
We do have some other inputs available to us, however. The most useful of them for gaming is the accelerometer, which lets us work out exactly which way up the device is being held. This has lots of interesting uses for gaming. In this chapter, we'll examine these input mechanisms and explore how they can be used to control the games that you create.
All touch input from the screen is obtained using a class within XNA called TouchPanel. This provides two mechanisms with which input information can be obtained: raw touch point data and the Gestures API.
The raw touch point data provides a collection of all current input points on the screen ready to be interpreted in whichever way the program desires. Each touch point identifies whether its touch is new, moved, or released; and allows its previous coordinate to be obtained if appropriate, but nothing more is provided. It is therefore up to the application to interpret this data and handle it as appropriate for the game.
The advantage of the touch point data is that it gives us the most flexible access to multitouch inputs. It also offers the most straightforward method for reading simple input if nothing more complex is required.
The Gestures API recognizes a series of types of input that the user might want to use, such as tap, double tap, or drag. By telling XNA which gestures we are interested in, it will look for these movement patterns and report back to us whatever it finds.
This greatly simplifies many types of input that would be more difficult to recognize and manage using the raw touch point data. The main disadvantage of using gestures is that they are primarily single-touch in nature (with the exception of the pinch gesture which uses two touch points). If you need to be able to gain full access to multiple simultaneous input points, the raw touch point data might be more suitable.
Let's look at how each of these systems is used in detail.
When reading the current touch points from the screen, we poll for information: we make a call out to the device and ask it for its current state.
This is in contrast with the way you might be used to obtaining data in desktop applications (and indeed it is in contrast with how we will obtain input data in Silverlight applications when we get to them later in the book), which tends to be event driven: each time the operating system detects that something has happened, it queues up an event.
Event-driven systems tend not to "miss" any events, whereas polling systems can skip inputs if they happen too quickly. If the user taps the screen so quickly that the input state is not polled while the screen contact is made, the button press will be missed entirely.
The advantage of polling, however, is that it is fast, easy to use, and provides information at exactly the point where we need it. When you want to see if your player should shoot, you simply check to see if a screen location is currently being touched. This check can take place anywhere in your game code rather than having to be placed within an event handler.
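As an illustration of this in-place style of checking, a polled test for whether any current touch falls inside a button's screen rectangle might look like the following sketch. The fireButtonArea rectangle and its coordinates are hypothetical game-specific values, not part of the XNA API:

```csharp
// Sketch: poll the touch panel and test whether any active touch
// point falls inside a given screen rectangle (e.g., a fire button).
// fireButtonArea is a hypothetical game-specific value.
Rectangle fireButtonArea = new Rectangle(0, 400, 100, 80);

bool buttonTouched = false;
TouchCollection touches = TouchPanel.GetState();
foreach (TouchLocation touch in touches)
{
    if (fireButtonArea.Contains((int)touch.Position.X, (int)touch.Position.Y))
    {
        buttonTouched = true;
        break;
    }
}
```

This check can be dropped into the Update method at exactly the point where the result is needed, with no event handler required.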
To read the touch points, we simply call the TouchPanel.GetState function. It returns a TouchCollection, inside which we will find a TouchLocation object for each touch point that is currently active. If the collection is empty, the user is not currently touching the screen. If multiple location objects are present, multiple touches are active simultaneously.
Listing 4-1 shows a simple piece of code in a game class's Update method that reads the current touch points and displays the number of touch points returned to the debug window.

Listing 4-1. Retrieving the current TouchPanel state
protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    // Get the current screen touch points
    TouchCollection touches = TouchPanel.GetState();

    // Output the touch count to the debug window
    System.Diagnostics.Debug.WriteLine("Touch count: " + touches.Count.ToString());

    base.Update(gameTime);
}
When developing with the Windows Phone emulator, you can, of course, simulate touching the screen by clicking the emulator window with the mouse cursor. If you are running Windows 7 and have a touch-sensitive monitor, the emulator has full support for genuine touch input, including multitouch if your monitor is capable of supporting it. However, if you lack a multitouch monitor, it is very hard to develop multitouch applications within the emulator, and using a real device is the only way to properly test your game in this case.
When the user touches the screen, we find a TouchLocation object in the collection returned by TouchPanel.GetState. Contained within this location object are the following properties:
Id returns a unique identification value for this touch point. As long as the user maintains contact with the screen, location objects with the same Id value will continue to be sent to the application. Each time a new touch is established, a new Id value will be generated. This is very useful for multitouch input because it helps tell which point is which, but for single-touch input we can ignore it.

Position is a Vector2 structure inside which the touch point coordinate is stored.

State stores a value that helps us determine whether the touch point is new or has been released by the user.
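To give a flavor of how the Id property can be put to work for multitouch, a game might keep a dictionary keyed on each touch point's Id so that every finger stays matched up with whatever object it first touched. The following is a sketch only: the touchOwners dictionary and the FindObjectAt and MoveObject functions are hypothetical game-specific pieces, not part of the TouchPanel API.

```csharp
// Sketch: track each finger across frames using TouchLocation.Id.
// touchOwners maps a touch Id to a game-specific object index.
// FindObjectAt and MoveObject are hypothetical game functions.
Dictionary<int, int> touchOwners = new Dictionary<int, int>();

TouchCollection touches = TouchPanel.GetState();
foreach (TouchLocation touch in touches)
{
    switch (touch.State)
    {
        case TouchLocationState.Pressed:
            // New contact: remember which object this finger grabbed
            touchOwners[touch.Id] = FindObjectAt(touch.Position);
            break;
        case TouchLocationState.Moved:
            // Same finger as before: move its object
            if (touchOwners.ContainsKey(touch.Id))
                MoveObject(touchOwners[touch.Id], touch.Position);
            break;
        case TouchLocationState.Released:
            // Finger lifted: forget it
            touchOwners.Remove(touch.Id);
            break;
    }
}
```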
When the state is polled after a new touch point has been established, the State of that touch point will be set to the enumeration value TouchLocationState.Pressed. If you are interested only in when contact is established, check for this state in your location objects. This is the equivalent of a MouseDown event in a WinForms environment.
When the state is polled and a previously reported touch point is still active, its state will be set to Moved. Note that it doesn't matter whether the point has actually moved or not; this state simply means that this is an established touch point that is still present. You will see how we can determine whether it really has moved in a moment.
Finally, when the state is polled and a previously reported touch point has been released, it will be present within the touch collection one final time with a state of Released. This will always be present once the touch point is released, so you can rely on the fact that every Pressed point will have a corresponding Released state. If the screen is tapped very quickly, it is entirely possible for a point to go straight from Pressed to Released without any Moved states in between.
Because XNA ensures that all released points are reported, it is theoretically possible for there to be more points within the TouchCollection than the device is actually able to read. If four touch points were in contact with the screen during one poll, and all four had been released and retouched before the next poll, the collection would contain eight points (four with a State of Released and four more with a State of Pressed).
For TouchLocation objects whose State is Moved, we can ask the TouchLocation for the point's previous location by calling its TryGetPreviousLocation method. This will return a Boolean value of true if a previous position is available or false if it is not (which should be the case only if the State value is Pressed). The method also expects a TouchLocation to be passed as an output parameter, into which the touch point's previous location will be placed.
Listing 4-2 shows a simple Update function that displays the state, current position, and previous position of the first detected touch point.

Listing 4-2. Retrieving a touch point's previous location
protected override void Update(GameTime gameTime)
{
    // Allows the game to exit
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    // Get the current screen touch points
    TouchCollection touches = TouchPanel.GetState();

    // Is there an active touch point?
    if (touches.Count >= 1)
    {
        // Read the previous location (only valid if prevAvailable is true)
        TouchLocation prevLocation;
        bool prevAvailable = touches[0].TryGetPreviousLocation(out prevLocation);

        // Output current and previous information to the debug window
        System.Diagnostics.Debug.WriteLine("State: " + touches[0].State.ToString()
            + " Position: " + touches[0].Position.ToString()
            + " Previous position: "
            + (prevAvailable ? prevLocation.Position.ToString() : "(none)"));
    }

    base.Update(gameTime);
}
Note that TryGetPreviousLocation gives back a TouchLocation object, not a Vector2, so we can interrogate its other properties, too. It gives access to its State property, which allows a touch point to tell whether it is receiving its first Moved state (this will be the case if its previous location's state is Pressed). It is not possible to obtain additional historical information by calling TryGetPreviousLocation again on the previous location object: it will always return false.
If you want to experiment with this, take a look at the TouchPanelDemo example project. This project provides a simple TouchPanel display that places a circular object at the position of each detected touch point. It supports multitouch input and will display the first touch point in white, the second in red, the third in blue, and the fourth in green. Any additional touch points will display in white. When the touch point is released, the circle will fade away into nothing.
All the while this is running, information about the first returned touch point will be written to the debug window. This includes the point's Id, State, Position, and previous Position. From this information you will be able to see the sequence of State values that are returned, see the previous Position for each touch point, and observe the behavior of the previous Position for Pressed and Released touch points.
If you need to know how many simultaneous touch points are available, you can ask the TouchPanel for this information. Its GetCapabilities method returns a TouchPanelCapabilities object, from which the MaximumTouchCount property can be queried.
All devices are required to support a minimum of four touch points, so anything you write should be able to rely on reading at least this many points at once. In practice, this number is probably quite sufficient for most games.
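A minimal sketch of querying this at startup might look like the following:

```csharp
// Sketch: find out how many simultaneous touch points the device supports.
TouchPanelCapabilities caps = TouchPanel.GetCapabilities();
int maxPoints = caps.MaximumTouchCount;   // at least 4 on Windows Phone 7
```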
The only other useful capability property is the IsConnected property, which tells you whether a touch panel is currently connected to the device. For Windows Phone 7 games, it will always return true, but the property might return other values for XNA games running on other platforms (Windows or an Xbox 360, for example).
It's very useful being able to read the touch screen coordinate, but what happens if the game is running in a landscape orientation? Do we need to swap over the x and y coordinate values to find out where the touch point is relative to our game coordinates?
You will be pleased to find that the answer is no; XNA automatically takes care of this for us. We don't need to pay any attention at all to screen orientation because the screen coordinates will be translated into the same coordinate system that we are using for rendering.
Taking this concept even further, XNA does exactly the same thing when scaling is in effect. If the back buffer is set to a size of 240 × 400, for example, the touch point coordinates will be automatically scaled into exactly the same range. No special processing is required for these configurations at all.
Try changing the back buffer size in the TouchPanelDemo game class constructor to experiment. You will see that both rotated and scaled output provides correspondingly rotated and scaled input values. This is a very useful feature that greatly simplifies working with touch coordinates under these conditions.
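The kind of change to try might look like this sketch, assuming the standard _graphics GraphicsDeviceManager field that the project template creates:

```csharp
// Sketch: set a smaller back buffer in the game class constructor.
// _graphics is the GraphicsDeviceManager field created by the template.
_graphics.PreferredBackBufferWidth = 240;
_graphics.PreferredBackBufferHeight = 400;
// Touch coordinates reported by TouchPanel.GetState will now arrive
// scaled into this same 240 x 400 coordinate range automatically.
```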
TouchPanel.GetState returns simple information about how the user is touching the screen; in many cases, this information will be perfectly sufficient for games that you might want to write. TouchPanel offers an alternative high-level way to read input, however, called gestures.
The Gestures API recognizes a series of common movement types that the user might make when touching the screen and reports them back by telling you what type of movement has been detected as well as the relevant screen coordinates for the movement.
The recognized gestures are as follows:
Tap: The user has briefly pressed and released contact with the screen.

DoubleTap: The user has quickly tapped the screen twice in the same location.

Hold: The user has made sustained contact with the same point on the screen for a small period of time.

VerticalDrag: The user is holding contact with the screen and moving the touch point vertically.

HorizontalDrag: The user is holding contact with the screen and moving the touch point horizontally.

FreeDrag: The user is holding contact and moving the touch point around the screen in any direction.

DragComplete: Indicates that a previous VerticalDrag, HorizontalDrag, or FreeDrag has concluded, and contact with the screen has been released.

Flick: The user has moved a contact point across the screen and released contact while still moving.

Pinch: Two simultaneous touch points have been established and moved on the screen.

PinchComplete: Indicates that a previous Pinch has concluded, and contact with the screen has been released.
This list contains some very useful input mechanisms that your user will no doubt be familiar with. Using them saves a lot of effort tracking previous positions and touch points, and allows the gestures system to do all the work for us.
The Gestures API is currently implemented only in XNA for Windows Phone. If you intend to port your game to the Windows or Xbox 360 platforms in the future, you will need to find an alternative way of reading input for those platforms.
Let's look at the Gestures API and each of the supported gesture types in more detail and see exactly how they all work and how they can be used.
Before you can use gestures, you must tell XNA which of the gestures you are interested in being notified about. It is potentially able to track all of them at once, but it is likely that certain gestures are going to be unwanted in any given situation. Enabling only those gestures that you need improves the performance of the gesture recognition engine and also reduces the chance of a gesture being misinterpreted.
All the gestures are disabled by default. Attempting to read gesture information in this state will result in an exception.
To enable the appropriate gestures, logically OR together the required values from the GestureType enumeration and then provide the result to the TouchPanel.EnabledGestures property. For example, the code in Listing 4-3 enables the tap, hold, and free drag gestures.

Listing 4-3. Enabling gestures required for the game
// Enable the gestures that we want to be able to respond to
TouchPanel.EnabledGestures =
    GestureType.Tap | GestureType.Hold | GestureType.FreeDrag;
The enabled gestures can be set or changed at any stage in your game. If you find that you are moving from the main game into a different area of functionality (such as an options screen or a high-score table) and you need to change the gestures that are to be processed, simply reassign the EnabledGestures property as needed.
Once the required gestures have been enabled, you can begin waiting for them to occur in your game's Update function. Unlike reading the raw touch data, gesture information is fed via a queue, and it is important that this queue is fully processed and emptied each update. Without this, it is possible for old events to be picked up and processed some time after they actually took place, giving your game a slow and laggy sensation.
To check whether there are any gestures in the queue, query the TouchPanel.IsGestureAvailable property. This can be used as part of a while loop to ensure that all waiting gesture objects within the queue are processed.
If IsGestureAvailable returns true, the next gesture can be read (and removed) from the queue by calling the TouchPanel.ReadGesture function. This returns a GestureSample object containing all the required details about the gesture. Some of the useful properties of this object include the following:
GestureType: This property indicates which of the enabled gestures has resulted in the creation of this object. It will contain a value from the same GestureType enumeration that was used to enable the gestures, and can be checked with a switch statement or similar construct to process each gesture in the appropriate way.

Position: A Vector2 that contains the location on the screen at which the gesture occurred.

Position2: For the Pinch gesture, this property contains the position of the second touch point.

Delta: A Vector2 containing the distance that the touch point has moved since the gesture was last measured.

Delta2: For the Pinch gesture, this property contains the delta of the second touch point.
A typical loop to process the gesture queue might look something like the code shown in Listing 4-4.

Listing 4-4. Processing and clearing the gestures queue
while (TouchPanel.IsGestureAvailable)
{
    // Read the next gesture
    GestureSample gesture = TouchPanel.ReadGesture();

    switch (gesture.GestureType)
    {
        case GestureType.Tap:
            Shoot(gesture.Position);
            break;
        case GestureType.FreeDrag:
            Move(gesture.Position);
            break;
    }
}
The Tap gesture fires when you briefly touch and release the screen without moving the touch point. The DoubleTap gesture fires when you quickly touch, release, and then touch the screen again without any movement taking place. If both of these gestures are enabled, a Tap and a DoubleTap gesture will be reported in quick succession.
Note that repeat rapid taps of the screen are not quite as responsive through the Gestures API as they are by reading the raw touch information. If you need to be very responsive to lots of individual screen taps, you might find raw touch data more appropriate.
The Hold gesture fires after stationary contact has been maintained for a brief period of time (about a second).

If the touch point moves too far from the initial contact position, the hold gesture will not fire. This means that, although it is quite possible for a Hold to fire after a Tap or DoubleTap, it is less likely after one of the drag gestures.
The three drag gestures can be used independently or together, though using FreeDrag at the same time as one of the axis-aligned drags can be awkward because once XNA has decided the direction of movement, it doesn't change. Beginning a horizontal drag and then moving vertically will continue to be reported as a horizontal drag. For this reason, it is generally better to stick to either axis-aligned drags or free drags, but not to mix the two.
In addition to reporting the position within the returned GestureSample object, XNA also returns the Delta of the movement: the distance that the touch point has moved on the x and y axes since the last measurement. This can be useful if you want to scroll objects on the screen because it is generally more useful than the actual touch position itself. For VerticalDrag and HorizontalDrag, only the relevant axis value of the Delta structure will be populated; the other axis value will always contain 0.
Once a drag has started, it will continually report the touch position each time it moves. Unlike when reading raw input, no gesture data will be added to the queue if the touch point is stationary. When the touch point is released and the drag terminates, a DragComplete gesture type will be reported.
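As an illustration, using the drag Delta to scroll the display might be sketched like this. The scrollOffset variable is a hypothetical game field, not part of the API:

```csharp
// Sketch: scroll the display using the drag gesture's Delta.
// scrollOffset is a hypothetical Vector2 game field.
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.FreeDrag)
    {
        // Move by the distance dragged since the last measurement
        scrollOffset += gesture.Delta;
    }
    else if (gesture.GestureType == GestureType.DragComplete)
    {
        // The finger has been lifted; the drag is over
    }
}
```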
Flick gestures are triggered when the user releases contact with the screen while still moving the touch point. This tends to be useful for initiating kinetic scrolling, in which objects continue moving after touch is released in the direction that the user had been moving. We will look at how you can implement this in your games in the "Initiating Object Motion" section later in this chapter.
To tell how fast and in which direction the flick occurred, read the GestureSample.Delta property. Unlike with drag gestures, however, this property contains the movement distance for each axis measured in pixels per second, rather than pixels since the previous position measurement.
To scale this to pixels per update in order to retain the existing motion, we can multiply the Delta vector by the duration of each update, which we can retrieve from the game's TargetElapsedTime property. The scaled delta value calculation is shown in Listing 4-5.

Listing 4-5. Scaling the Flick delta to represent pixels per update rather than pixels per second
Vector2 deltaPerUpdate = gesture.Delta * (float)TargetElapsedTime.TotalSeconds;
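To give a flavor of how the scaled delta might then be applied, the following sketch moves an object each update and gradually slows it down. The objectPosition and objectVelocity fields and the 0.95 friction factor are illustrative assumptions only; the book's own kinetic-scrolling implementation appears in the "Initiating Object Motion" section:

```csharp
// Sketch: simple kinetic motion seeded from a Flick gesture.
// objectPosition and objectVelocity are hypothetical game fields,
// and the 0.95f friction factor is an arbitrary illustrative value.
if (gesture.GestureType == GestureType.Flick)
{
    // Convert from pixels per second to pixels per update
    objectVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds;
}

// Each update, apply the velocity and let friction bleed it away
objectPosition += objectVelocity;
objectVelocity *= 0.95f;
```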
One piece of information that we unfortunately do not get from the Flick gesture is the position from which it was flicked, which is instead always returned as the coordinate (0, 0). To determine where the flick originated, we therefore need to remember the position of a previous gesture, and the only gestures that will reliably provide this information are the drag gestures. It is therefore likely that you will need to have a drag gesture enabled for this purpose.
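A sketch of this technique follows: each drag position is recorded so that a subsequent flick has an origin to work from. The lastDragPosition field and the StartKineticScroll function are hypothetical game-specific names:

```csharp
// Sketch: remember the last drag position so that a subsequent Flick
// can be given an origin. lastDragPosition is a hypothetical field
// and StartKineticScroll a hypothetical game function.
switch (gesture.GestureType)
{
    case GestureType.FreeDrag:
        lastDragPosition = gesture.Position;
        break;
    case GestureType.Flick:
        // gesture.Position is always (0, 0) for flicks, so use the
        // position recorded from the most recent drag instead
        StartKineticScroll(lastDragPosition, gesture.Delta);
        break;
}
```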
When the user makes contact with the screen with two fingers at once, a Pinch gesture will be initiated and will report the position of both touch points for the duration of the contact with the screen. As with the drag gestures, updates will be provided only if one or both of the touch points has actually moved.
XNA will ensure that the same point is reported in each of its position and delta properties (Position, Position2, Delta, and Delta2), so you don't need to worry about them swapping over unexpectedly.
Once either of the contacts with the screen ends, a PinchComplete gesture is added to the queue to indicate that no further updates from this gesture will be sent. If the remaining touch point continues to be held, it will initiate a new gesture once it begins to move.
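One common use of this gesture is pinch-to-zoom. A scale factor can be derived by comparing the current distance between the two touch points with their distance at the previous measurement, as in this sketch (currentScale is a hypothetical game variable):

```csharp
// Sketch: derive a zoom factor from a Pinch gesture by comparing the
// current and previous distances between the two touch points.
// currentScale is a hypothetical game variable.
if (gesture.GestureType == GestureType.Pinch)
{
    // Current distance between the two touch points
    float newDistance = Vector2.Distance(gesture.Position, gesture.Position2);
    // Distance between them at the previous measurement,
    // found by undoing each point's Delta
    float oldDistance = Vector2.Distance(gesture.Position - gesture.Delta,
                                         gesture.Position2 - gesture.Delta2);
    if (oldDistance > 0)
    {
        currentScale *= newDistance / oldDistance;
    }
}
```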
Just as with multitouch data from the raw touch API, testing pinch gestures on the emulator is impossible unless you have a suitable touch screen and Windows 7. This gesture is therefore best tested on a real device.
Just as with the raw touch data coordinates, positions from the Gestures API are automatically updated to match the rotation and scaling that is active on the screen, so no special processing is required if these features are in use.
The GesturesDemo example project will help you experiment with all the gestures we have discussed in this section. It is similar to the TouchPanelDemo project from the previous section, but uses different icons for each of the recognized gestures. The icons are shown in Figure 4-1.
This project deliberately displays the icons a little above and to the left of the actual touch point so that they can be seen when touching a real phone (otherwise they appear directly beneath your fingers and are impossible to see). This looks a little odd in the emulator, however, as their positions don't directly correspond to the mouse cursor position, so don't be surprised by this.
By default, the project is set to recognize the Tap, DoubleTap, FreeDrag, Flick, and Hold gestures. Try enabling and disabling each of the gesture types and experiment with the movement patterns needed to initiate each. You can also use this as a simple way to see how the gestures relate to one another (for example, try enabling all three of the drag gestures and see how XNA decides which one to use).
A very common requirement for games will be to tell whether the player has touched one of the objects onscreen. We know where the objects all are and we know the point that the user has touched, so how can we tell if they coincide?
There are several approaches that we can use, each with different characteristics. Some of the different mechanisms that can be used are the following:
Checking against the sprite bounding box. This is very simple and quick, but as we saw in the last chapter, it doesn't take rotation into account and is therefore not very accurate. For sprites that have not been rotated, this is the best approach to use.
Rectangular hit tests are similar to the bounding box test but properly take the sprite rotation into account. This test requires a little more calculation, but can accurately reflect whether the point falls within the rendered sprite rectangle.
Elliptical hit tests are good for sprites whose shape is essentially round. They perform a test by finding the distance from the touch point to the center of the sprite and checking whether this is within the area of the ellipse.
Let's see how each of these approaches can be implemented.
The easiest but least flexible mechanism for detecting whether a sprite has been touched is to see whether the sprite's bounding box contains the touch point. This can be achieved as shown in Listing 4-6.
Listing 4-6. A simple hit test using the bounding box
bool IsPointInObject(Vector2 point)
{
    Rectangle bbox;

    // Retrieve the bounding box
    bbox = BoundingBox;

    // See whether the box contains the point
    return bbox.Contains((int)point.X, (int)point.Y);
}
The Rectangle structure conveniently performs this check for us, though it is really just a simple matter of checking that the x coordinate falls between the rectangle's left and right edges, and that the y coordinate falls between the top and bottom edges.

As the BoundingBox property already takes account of scaling and custom sprite origins, this is all that we need to do for this simple check. If we need to be able to work with rotated rectangles, though, we need something a little more sophisticated...
There are various ways that we could test a point within a rotated rectangle. The easiest to conceptualize is taking the four corners of the rectangle and seeing whether the point falls inside them. However, there are simpler and more efficient ways to achieve this in code.
A more efficient way to achieve this is to imagine that we have rotated the rectangle back around its origin until its angle is zero, and to rotate the test point by the same angle. Once we have done this, we can perform a simple axis-aligned rectangle check, just as we did in Listing 4-6.
In Figure 4-2, two images are shown of some test points and a rectangle. The rectangle has been scaled so that it is longer along its x axis, and rotated by about 15 degrees. Looking at Figure 4-2(a), it is obvious visually that test point 1 is within the rectangle, and test point 2 is not. In order for our code to determine this, we imagine rotating the sprite back until its angle is 0, and we rotate the two points by exactly the same amount. Of course, we don't actually draw it like this or even update the sprite's properties; we just perform the calculations that would be required for this rotation. If we were to draw the rotation, we would end up with the arrangement shown in Figure 4-2(b).
Having arranged the points as shown in Figure 4-2(b), we can now perform a simple check to see whether each point is within the left-right and top-bottom boundaries, just as we did with the bounding box test. This is a very simple calculation and gives us exactly the results we are looking for.
The code to perform this check is fairly straightforward. The main focus of the calculation is to perform the rotation of the test point around the rectangle's origin. We don't need to perform any calculation on the rectangle at all; we just need to rotate the points and then check them against the rectangle's unrotated width and height, which is already returned to us from the BoundingBox property.
When we rotate a point in space, it always rotates around the origin: the point at coordinate (0, 0). If we want to rotate around the rectangle's origin, we therefore need to find the distance from the rectangle origin to the test point. The calculation can then be performed in coordinates relative to the rectangle, not the screen.
We can do this simply by subtracting the origin position from the test point position, as shown in Figure 4-3. In Figure 4-3(a), we see the coordinates specified as screen coordinates—the actual pixel position on the screen that forms the origin of the rectangle and the user's touch points. In Figure 4-3(b), these coordinates are specified relative to the rectangle origin. As you can see, this has simply subtracted 200 from the x values and 100 from the y values because they are the rectangle's origin coordinate.
These modified coordinates are considered as being in object space rather than in the normal screen space as they are now measured against the object (the rectangle) rather than the screen. We can now rotate these points around the origin, and as long as we remember that we are measuring their position in object space rather than screen space, we will find the new positions that we saw in Figure 4-2(b).
If at any time we want to map the coordinates back into screen space, all we need to do is re-add the rectangle's origin that we have subtracted. If we move a point to object space (by subtracting the object's origin coordinate), rotate it, and then move it back to screen space (by re-adding the object's origin coordinate), we will have rotated around the object's origin even though it is not at the screen's origin coordinate.
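Expressed as code, the round trip between the two spaces is just a pair of vector subtractions and additions, sketched here using the values from the figures:

```csharp
// Sketch: moving a point between screen space and object space,
// using the coordinates from Figure 4-3.
Vector2 origin = new Vector2(200, 100);       // the rectangle's origin (screen space)
Vector2 screenPoint = new Vector2(230, 130);  // the user's touch point

// Screen space -> object space: subtract the origin
Vector2 objectPoint = screenPoint - origin;   // gives (30, 30)

// ...rotate objectPoint around (0, 0) here...

// Object space -> screen space: add the origin back
Vector2 backToScreen = objectPoint + origin;
```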
Having obtained the coordinate in object space, we now need to rotate it to match the rectangle's angle. The rectangle in the figures we have been looking at is rotated 15 degrees in a clockwise direction. As you can see in Figure 4-2(b), to reset the rectangle back to its original angle we therefore need to rotate it back by the same angle, in other words 15 degrees counterclockwise. We can achieve this by negating the rotation angle.
The calculation to rotate a point around the origin is as follows:
x′ = x cos θ − y sin θ

y′ = x sin θ + y cos θ
The code to perform this calculation is shown in Listing 4-7.

Listing 4-7. Rotating the point variable to calculate the new rotatedPoint variable
// Rotate the point by the negative sprite angle to cancel out the sprite rotation
rotatedPoint.X = (float)(Math.Cos(-Angle) * point.X - Math.Sin(-Angle) * point.Y);
rotatedPoint.Y = (float)(Math.Sin(-Angle) * point.X + Math.Cos(-Angle) * point.Y);
Now we have the coordinate relative to the unrotated object's origin. We can therefore simply move the bounding box into object space (by once again subtracting the rectangle position) and then see whether the point is contained within the bounding box. If so, the point is a hit; if not, it is a miss.
Table 4-1 shows the calculations that we have described for each of the touch points shown in Figure 4-3. The sprite in question is 64 × 64 pixels and has been scaled to be double its normal width, resulting in a rectangle of 128 × 64 pixels.
Table 4.1. Calculation steps to determine whether a test point is within a rotated scaled rectangle
Test Point 1 | Test Point 2 | |
---|---|---|
Screen coordinate | (230, 130) | (260, 160) |
Object-space coordinate | (30, 30) | (60, 60) |
Rotated coordinate | (36.7, 21.2) | (73.5, 42.4) |
Rectangle top-left/bottom-right in object coordinates | (−64, −32) / (64, 32) | |
Point contained within rectangle | Yes | No |
As this table shows, the rotated test point 1 coordinate is inside the rectangle's object coordinates (its x coordinate of 36.7 is between the rectangle x extent of -64 to 64, and its y coordinate of 21.2 is within the rectangle y extent of -32 to 32), and the rotated test point 2 coordinate is not.
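As a quick sanity check of these figures, the subtract-then-rotate calculation can be reproduced outside XNA. The helper below is a hypothetical Python sketch (not framework code); the origin of (200, 100) is implied by the table's screen-to-object-space conversion.

```python
import math

def to_rotated_object_space(point, origin, angle_deg):
    # Step 1: subtract the origin to move into object space
    x, y = point[0] - origin[0], point[1] - origin[1]
    # Step 2: rotate by the negated sprite angle to cancel its rotation
    a = math.radians(-angle_deg)
    return (math.cos(a) * x - math.sin(a) * y,
            math.sin(a) * x + math.cos(a) * y)

# Test point 1 from Table 4-1: lands inside the 128 x 64 rectangle
p1 = to_rotated_object_space((230, 130), (200, 100), 15)
# Test point 2: lands outside (its x exceeds the 64-pixel half-width)
p2 = to_rotated_object_space((260, 160), (200, 100), 15)
```

Rounding the results to one decimal place reproduces the rotated coordinates shown in the table.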
The complete function to perform this calculation is shown in Listing 4-8. This code is taken from the SpriteObject class, and so has direct access to the sprite's properties.
Example 4.8. Checking a test point to see whether it is within a rotated and scaled sprite rectangle
protected bool IsPointInObject_RectangleTest(Vector2 point)
{
    Rectangle bbox;
    float width;
    float height;
    Vector2 rotatedPoint = Vector2.Zero;

    // Retrieve the sprite's bounding box
    bbox = BoundingBox;

    // If no rotation is applied, we can simply check against the bounding box
    if (Angle == 0) return bbox.Contains((int)point.X, (int)point.Y);

    // Get the sprite width and height
    width = bbox.Width;
    height = bbox.Height;

    // Subtract the sprite position to retrieve the test point in
    // object space rather than in screen space
    point -= Position;

    // Rotate the point by the negative angle of the sprite to cancel out the sprite
    // rotation
    rotatedPoint.X = (float)(Math.Cos(-Angle) * point.X - Math.Sin(-Angle) * point.Y);
    rotatedPoint.Y = (float)(Math.Sin(-Angle) * point.X + Math.Cos(-Angle) * point.Y);

    // Move the bounding box to object space too
    bbox.Offset((int)-PositionX, (int)-PositionY);

    // Does the bounding box contain the rotated point?
    return bbox.Contains((int)rotatedPoint.X, (int)rotatedPoint.Y);
}
Although rectangular hit tests are appropriate in some cases, in others it might be useful to test against a round sprite shape. To facilitate this, we can perform an elliptical hit test.
The ellipse that will be tested will completely fill the rectangular region occupied by the sprite, as shown in Figure 4-4.
Of course, ellipses, unlike circles, are affected by rotation, so we need to take this into account when working out whether a test point falls inside the ellipse. In Figure 4-5(a), we can see a rotated ellipse whose scale is such that its width is twice its height. Also marked in the figure are two test points, the first of which is within the ellipse, whereas the second is not (though it is within the bounds of the sprite rectangle).
The approach that we take to determine whether the points are within the ellipse starts off the same as that used for the rectangle: performing the calculation to rotate the points and the ellipse back to an angle of zero. Once that has been done we can ignore the rotation and concentrate just on the elliptical shape. Again, we don't actually draw the sprite with the angle reset to zero or update the sprite's properties; we just perform the calculations that would be required for this rotation. If we were to draw the rotation, we would end up with the arrangement shown in Figure 4-5(b).
Having obtained the coordinates relative to an unrotated ellipse, we can now determine whether the points are within the ellipse or not. For a circle this would be easy: we would find the radius of the circle and we would find the distance from the test point to the center of the circle. If the point distance is less than the radius, the point is inside the circle.
For an ellipse, this process is more complex, however. An ellipse doesn't have a radius because the distance from its center to its edge varies as the edge is traversed.
Fortunately, there is a very easy way to resolve this. We know how the sprite has been scaled, so we can divide the width of the ellipse by the scaled sprite width, and divide the height of the ellipse by the scaled sprite height. This will result in a new ellipse that is exactly one unit wide and one unit high. The size is less important than the fact that the resulting shape is now a circle (with a radius of 0.5) rather than an ellipse, meaning that we can perform calculations against it very easily. Instead of scaling the sprite in this way, we can scale the test point and then see whether its distance from the circle center is less than 0.5. If so, the point is a hit; otherwise, it's a miss.
The steps required for the whole procedure are as follows; they are just like the steps for the rectangular hit test:
Move the touch point to be in object space rather than in screen space.
Rotate the point back by the sprite rotation angle.
Move the point to be relative to the center of the ellipse.
Divide the point's x position by the ellipse width and its y position by the ellipse height to scale down relative to a unit-width circle.
Test the point distance from the circle center to see whether it is within the circle's radius of 0.5.
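The five steps above can be sketched in a few lines. The following is an illustrative Python version rather than the book's C# implementation; the function name and the degree-based angle parameter are assumptions for the sketch, and the ellipse's center is treated as its origin, as in the figures.

```python
import math

def point_in_ellipse(point, center, width, height, angle_deg):
    # Steps 1 and 3: move into object space, relative to the ellipse center
    x, y = point[0] - center[0], point[1] - center[1]
    # Step 2: rotate back by the sprite angle to cancel its rotation
    a = math.radians(-angle_deg)
    rx = math.cos(a) * x - math.sin(a) * y
    ry = math.sin(a) * x + math.cos(a) * y
    # Step 4: scale each axis down so the ellipse becomes a unit-width circle
    rx /= width
    ry /= height
    # Step 5: inside if within the circle's radius of 0.5
    return math.hypot(rx, ry) <= 0.5
```

Running this against the two test points from Figure 4-5 (using the values in Table 4-2) classifies point 1 as a hit and point 2 as a miss.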
Table 4-2 shows each of these calculations for each of the touch points shown in Figure 4-5. The sprite in question is 64 × 64 pixels and has been scaled to be double its normal width, resulting in an ellipse with a width of 128 pixels and a height of 64 pixels. Its center (and origin) is at the coordinate (200, 100).
Table 4.2. Calculation steps to determine whether a test point is within a rotated scaled ellipse
Test Point 1 | Test Point 2 | |
---|---|---|
Screen coordinate | (224, 117) | (248, 134) |
Object-space coordinate | (24, 17) | (48, 34) |
Rotated coordinate | (27.6, 10.2) | (55.2, 20.4) |
Ellipse width/height | 128 pixels by 64 pixels | |
Rotated coordinate scaled by width and height | (0.216, 0.159) | (0.432, 0.318) |
Distance from circle center at (0, 0) | 0.268 | 0.536 |
Point contained within ellipse (distance <= 0.5) | Yes | No |
As this table shows, the test point 1 coordinate is inside the ellipse (its calculated distance is less than 0.5), and the test point 2 coordinate is not.
The complete function to perform this calculation is shown in Listing 4-9. This code is taken from the SpriteObject class, so it has direct access to the sprite's properties.
Example 4.9. Checking a test point to see if it is within a rotated and scaled sprite ellipse
protected bool IsPointInObject_EllipseTest(Microsoft.Xna.Framework.Vector2 point)
{
    Rectangle bbox;
    Vector2 rotatedPoint = Vector2.Zero;

    // Retrieve the basic sprite bounding box
    bbox = BoundingBox;

    // Subtract the ellipse's top-left position from the test point so that the test
    // point is relative to the origin position rather than relative to the screen
    point -= Position;

    // Rotate the point by the negative angle of the sprite to cancel out the sprite
    // rotation
    rotatedPoint.X = (float)(Math.Cos(-Angle) * point.X - Math.Sin(-Angle) * point.Y);
    rotatedPoint.Y = (float)(Math.Sin(-Angle) * point.X + Math.Cos(-Angle) * point.Y);

    // Add back the origin point multiplied by the scale.
    // This will put us in the top-left corner of the bounding box.
    rotatedPoint += Origin * Scale;

    // Subtract the bounding box midpoint from each axis.
    // This will put us in the center of the ellipse.
    rotatedPoint -= new Vector2(bbox.Width / 2, bbox.Height / 2);

    // Divide the point by the width and height of the bounding box.
    // This will result in values between -0.5 and +0.5 on each axis for
    // positions within the bounding box. As both axes are then on the same
    // scale we can check the distance from the center point as a circle,
    // without having to worry about elliptical shapes.
    rotatedPoint /= new Vector2(bbox.Width, bbox.Height);

    // See if the distance from the origin to the point is <= 0.5
    // (the radius of a unit-size circle). If so, we are within the ellipse.
    return (rotatedPoint.Length() <= 0.5f);
}
Checking touch points against game objects to see whether they have been selected is an operation that will be common to many games. To save each game from having to reimplement this logic, we will build these checks into the game framework.
This procedure starts off as an abstract function in GameObjectBase called IsPointInObject, as shown in Listing 4-10. It expects a Vector2 parameter to identify the position on the screen to test and returns a boolean value indicating whether that point is contained within the object.
Example 4.10. The abstract declaration for IsPointInObject contained inside GameObjectBase
/// <summary>
/// Determine whether the specified position is contained within the object
/// </summary>
public abstract bool IsPointInObject(Vector2 point);
To implement the IsPointInObject function for sprites, it is overridden within SpriteObject. We will enable our sprites to support both the rectangular and elliptical tests that we have described; to allow the game to specify which type of test to use, a new property named AutoHitTestMode is added to the class. The property is given the AutoHitTestModes enumeration as its type, allowing either Rectangle or Ellipse to be selected.
The SpriteObject implementation of IsPointInObject checks to see which of these hit modes is selected and then calls into either IsPointInObject_RectangleTest (as shown in Listing 4-8) or IsPointInObject_EllipseTest (as shown in Listing 4-9). Any game object can thus have its AutoHitTestMode property set at initialization and can then simply test points by calling the IsPointInObject function.
For sprites that need to perform some alternative or more complex processing when checking for hit points (perhaps as simple as only allowing a hit to take place under certain conditions, or perhaps implementing entirely new region calculations), the IsPointInObject function can be further overridden in derived game object classes.
Another common function will be to identify the sprites that are contained in a specific location or the frontmost sprite at a specific location. Once again, we can add functions for both of these operations to the GameHost class.
The first, GetSpritesAtPoint, loops through all the game objects looking for those that can be found at the specified position. These are added to an array and returned to the calling procedure. The code for this function is shown in Listing 4-11.
Example 4.11. Finding all the objects at a specified position
public SpriteObject[] GetSpritesAtPoint(Vector2 testPosition)
{
    SpriteObject spriteObj;
    SpriteObject[] hits = new SpriteObject[GameObjects.Count];
    int hitCount = 0;

    // Loop for all of the game objects
    foreach (GameObjectBase obj in GameObjects)
    {
        // Is this a SpriteObject?
        if (obj is SpriteObject)
        {
            // Yes... Cast it to a SpriteObject
            spriteObj = (SpriteObject)obj;
            // Is the point in the object?
            if (spriteObj.IsPointInObject(testPosition))
            {
                // Add to the array
                hits[hitCount] = spriteObj;
                hitCount += 1;
            }
        }
    }
    // Trim the empty space from the end of the array
    Array.Resize(ref hits, hitCount);
    return hits;
}
The second function, GetSpriteAtPoint, returns just a single sprite and attempts to find the frontmost sprite at the specified location. It does this by keeping track of the LayerDepth value for each matching sprite. When subsequent sprites are ready to be checked, they are compared against the LayerDepth of the previous matching sprite and ignored if the value is higher (remember that lower values appear in front of higher values).

If LayerDepth values are found to be equal, the check is still made, and the later sprite will supersede the earlier sprite if it also matches the hit point. Because XNA will normally draw sprites in the order requested when LayerDepth values match, later objects in the GameObjects collection will appear in front of earlier objects with a matching depth. This check therefore allows us to find the frontmost object even if LayerDepths are not being used.
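To make the tie-break concrete, here is a small Python sketch (illustrative names, not framework code) of the same scan. Each sprite is represented as a (layer_depth, hit_test) pair held in the order the objects were added, and the function returns the index of the frontmost match:

```python
def frontmost_index(sprites, point):
    """Sketch of the GetSpriteAtPoint scan over (layer_depth, hit_test) pairs."""
    ret, lowest = None, float("inf")
    for i, (depth, hit_test) in enumerate(sprites):
        # <= (rather than <) lets a later sprite with an equal depth
        # supersede an earlier one, matching XNA's draw order
        if depth <= lowest and hit_test(point):
            ret, lowest = i, depth
    return ret
```

With two hits at an equal depth the later one wins; a hit at a lower depth beats any earlier hit at a higher depth.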
The GetSpriteAtPoint function is shown in Listing 4-12.
Example 4.12. Finding the frontmost sprite at a specified position
public SpriteObject GetSpriteAtPoint(Vector2 testPosition)
{
    SpriteObject spriteObj;
    SpriteObject ret = null;
    float lowestLayerDepth = float.MaxValue;

    // Loop for all of the game objects
    foreach (GameObjectBase obj in GameObjects)
    {
        // Is this a SpriteObject?
        if (obj is SpriteObject)
        {
            // Yes... Cast it to a SpriteObject
            spriteObj = (SpriteObject)obj;
            // Is its layerdepth the same or lower than the lowest we have seen so far?
            // If not, previously encountered objects are in front of this one
            // and so we have no need to check it.
            if (spriteObj.LayerDepth <= lowestLayerDepth)
            {
                // Is the point in the object?
                if (spriteObj.IsPointInObject(testPosition))
                {
                    // Mark this as the current frontmost object
                    // and remember its layerdepth for future checks
                    ret = spriteObj;
                    lowestLayerDepth = spriteObj.LayerDepth;
                }
            }
        }
    }
    return ret;
}
Two example projects demonstrating the hit test functionality can be found in the downloads for this chapter. The first, HitTesting, provides a demonstration of the accuracy of the hit testing functions that we have added to the SpriteObject class. A screenshot from this project can be seen in Figure 4-6.
This example project creates a number of randomly positioned sprites, some of which have a square texture, whereas others have a circular texture. The sprites are rotated and scaled such that they form rectangles and ellipses. The objects can be touched to select them; all the objects that fall under the touch position will be selected and highlighted in red.
The image in Figure 4-6 is taken from the emulator, and the mouse cursor can be seen selecting some of the shapes that are displayed. The emulator is a great way to accurately explore the edges of the shapes because the mouse cursor is much more precise than a finger on a real device touch screen. You will see that the object really is selected only when it is touched inside its displayed shape and that the algorithms we are using for hit testing work exactly as required.
Inside the HitTestingGame.Update method you will find that there are actually two possible calls to object selection functions, one of which is in use (calling SelectAllMatches); the other is commented out (calling SelectFrontmost). SelectAllMatches finds all the objects at the touch point using the GetSpritesAtPoint function and selects all of them, whereas SelectFrontmost uses GetSpriteAtPoint and selects just the one sprite returned (if there is one).
Try swapping these over so that SelectFrontmost is called instead. You will find now that it is always the object in front that is selected when multiple objects overlap, as described in the previous section.
The project defines a new game object class called SelectableSpriteObject and adds a new boolean property called Selected to the basic SpriteObject functionality. It also overrides the SpriteColor property and returns a red color when the sprite is selected or the underlying sprite color if it is not. This simple class provides a useful mechanism for selecting sprites and visually indicating which are selected. We will use this same approach in the "Initiating Object Motion" section coming up in a moment.
The second example project, Balloons, turns the hit testing into a simple interactive game. Colored balloons gently float up the screen, and the player must pop them by touching them. This can be quite relaxing until too many balloons start to reach the top of the screen, at which point trying to pop them all becomes somewhat more frantic! A screenshot from the project is shown in Figure 4-7.
This project brings together a number of the things that you have learned during this and the previous chapters: it uses tinting to display the balloons in different colors, layer depths to render the balloons so that the smaller (more distant) balloons appear behind the larger (closer) balloons, raw TouchPanel input to determine when and where the user has touched the screen, and the GetSpriteAtPoint function to determine the frontmost balloon each time the screen is touched (this time using the layer depth to ensure that the frontmost balloon is selected).
This project has very little code and was very quick to put together, but it forms the basis of what could be an enjoyable game with a little more work.
You should now feel comfortable with reading user input from the touch screen. Before we finish examining screen input, let's discuss a couple of common movement patterns that you might want to include in your games: dragging and flicking.
The code required to drag objects is very straightforward. If a touch point is held, we simply need to find the movement distance between its current and previous locations and add it to the position of the objects that are being dragged.
The first part of dragging some objects is to allow them to be selected. Sometimes the selection will be a separate input from the drag (where the objects are tapped and then dragged afterward), but in most cases a drag will include object selection when contact with the screen is first established.
This is easy to do when using raw input, as we can look for a TouchLocation.State value of Pressed. When it is detected, the object selection can be established ready for the objects to be dragged.
If we are using gestures, though, we have a problem: there is no gesture that is triggered when contact is first established with the screen. The tap gesture fires only when contact is released, and the drag gestures fire only once the touch point has moved far enough to be considered as actually dragging. So how do we perform the object selection?
The answer is to once again use raw input for this. Raw input and gestures can be mixed together so that the initial screen contact for object selection comes from raw input, and the dragging comes from a gesture.
Once the objects are selected, we can update them in response to the touch point moving around the screen. When using gestures, we simply look for one of the drag gestures and read out the Delta property of the GestureSample object. This contains the distance that the touch point has moved on each axis, which is exactly what we need.
Don't forget that the HorizontalDrag and VerticalDrag gestures will provide delta values only for the appropriate axis. There is no need to cancel out or ignore the other movement axis because XNA takes care of this automatically.
To calculate the delta using raw input, we obtain the previous touch position using the TryGetPreviousLocation function and subtract that position from the current position. The result is the movement distance. The code for this is shown in Listing 4-13.
Example 4.13. Calculating the drag delta when using raw touch input
if (touches[0].State == TouchLocationState.Moved)
{
    // Drag the objects. Make sure we have a previous position
    TouchLocation previousPosition;
    if (touches[0].TryGetPreviousLocation(out previousPosition))
    {
        // Calculate the movement delta
        Vector2 delta = touches[0].Position - previousPosition.Position;
        ProcessDrag(delta);
    }
}
Whichever method we used to calculate the delta, we now simply add the delta value to the position of all the selected sprites. They will then follow the touch location as it is moved around the screen.
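The "add the delta" step really is as simple as it sounds. As a hypothetical Python sketch (the framework's ProcessDrag operates on SelectableSpriteObject instances rather than plain tuples), it is one vector addition per selected object:

```python
def process_drag(selected_positions, delta):
    # Add the drag delta to the position of every selected object
    return [(x + delta[0], y + delta[1]) for (x, y) in selected_positions]
```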
Two example projects are provided to demonstrate this: DragAndFlick is a gesture-based implementation, whereas DragAndFlickRaw achieves the same effect using raw touch data. Both projects contain a SelectableSpriteObject class based on the one from the HitTesting project and contain identical functions for selecting the sprites at a point (SelectAllMatches), deselecting the sprites (DeselectAllObjects), and dragging the selected objects (ProcessDrag). There are some additional properties present, but we will look at them in the next section.
Try running both of the examples and see how they work. You'll notice that they don't feel exactly the same, even though they do essentially the same thing. The gesture-based project has a delay between when you move the touch point and when the objects actually respond to the movement. The reason for this is that the gesture system waits for the touch point to move a small distance before it considers a drag gesture to have started. As a result, it feels a little less responsive.
The raw touch input assumes that all movement is part of a drag, so there is no delay at all. As a result, it feels a lot more responsive. Bear this difference in mind when considering the input options that are available when you are coding your games.
With the object movement under our control, it is sometimes useful to allow the user to flick or throw them across the screen. This is often known as kinetic movement, and it consists of retaining the velocity at which the object is moving when the touch point is released and continuing to move the object in the same direction, gradually decreasing the speed to simulate friction.
To control the movement of the object, some new code has been added to the SelectableSpriteObject class. This code consists of a new Vector2 property called KineticVelocity, which tracks the direction and speed of movement; a float property called KineticFriction, which controls how strong the friction effect is (as a value between 0 and 1); and an Update override that applies the movement and the friction.
The Update code simply adds the velocity to the position and then multiplies the velocity by the friction value. This function is shown in Listing 4-14. Notice how it uses the MathHelper.Clamp function to ensure that the friction is always kept between 0 and 1 (values outside of this range would cause the object to accelerate, which is probably undesirable, though perhaps it might be useful in one of your games!).
Example 4.14. Updating the SelectableSpriteObject to allow it to observe kinetic movement
public override void Update(GameTime gameTime)
{
    base.Update(gameTime);

    // Is the movement vector non-zero?
    if (KineticVelocity != Vector2.Zero)
    {
        // Yes, so add the vector to the position
        Position += KineticVelocity;
        // Ensure that the friction value is within range
        KineticFriction = MathHelper.Clamp(KineticFriction, 0, 1);
        // Apply 'friction' to the vector so that movement slows and stops
        KineticVelocity *= KineticFriction;
    }
}
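Because each update multiplies the velocity by the friction value, the motion decays geometrically: starting at velocity v with friction f, the total distance travelled approaches v / (1 - f). A short Python sketch (illustrative, one axis only) reproduces the Update logic and confirms this, which can help when tuning friction values:

```python
def simulate_kinetic(v0, friction, updates):
    # Reproduce the Update logic on one axis: move, then damp the velocity
    pos, v = 0.0, v0
    for _ in range(updates):
        pos += v
        v *= friction
    return pos
```

For example, a flick of 10 pixels per update with friction 0.9 will carry the object roughly 10 / (1 - 0.9) = 100 pixels before it stops.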
With the help of this code, the objects can respond to being flicked, so now we need to establish how to provide them with an initial KineticVelocity in response to the user flicking them. The example projects both contain a function called ProcessFlick, which accepts a delta vector as a parameter and provides it to all the selected objects.
Calculating this flick delta using the gesture input system is very easy. We have already looked at the Flick gesture and seen how to translate its pixels-per-second Delta value into pixels-per-update. We can do this now and provide the resulting Vector2 value to the ProcessFlick function, as shown in Listing 4-15.
Example 4.15. Initiating object flicking using gesture inputs
while (TouchPanel.IsGestureAvailable)
{
GestureSample gesture = TouchPanel.ReadGesture();
switch (gesture.GestureType)
{
case GestureType.Flick:
// The object has been flicked
ProcessFlick(gesture.Delta * (float)TargetElapsedTime.TotalSeconds);
break;
[... handle other gestures here...]
}
}
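The multiplication by TargetElapsedTime.TotalSeconds is just a units conversion from the flick's pixels-per-second into pixels-per-update. Assuming the default update interval of one thirtieth of a second (an assumption; your game may set a different TargetElapsedTime), the arithmetic looks like this as a Python sketch:

```python
TARGET_ELAPSED_SECONDS = 1.0 / 30.0  # assumed default update interval

def flick_delta_per_update(delta_per_second):
    # Convert the Flick gesture's pixels-per-second Delta into pixels-per-update
    return (delta_per_second[0] * TARGET_ELAPSED_SECONDS,
            delta_per_second[1] * TARGET_ELAPSED_SECONDS)
```

A flick reported at (600, -300) pixels per second therefore becomes a per-update velocity of (20, -10).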
Unfortunately, using raw input is a little more work. If we calculate the delta of just the final movement, we end up with a fairly unpredictable delta value because people tend to involuntarily alter their finger movement speed as they release contact with the screen. This is coupled with the fact that the Released state always reports the same position as the final Moved state, meaning that it alone doesn't provide us with any delta information at all.
To more accurately monitor the movement delta, we will build an array containing a small number of delta vectors (five is sufficient) and will add to the end of this array each time we process a Moved touch state. At the point of touch release, we can then calculate the average across the whole array and use it as our final movement delta.
This is implemented using three functions: ClearMovementQueue, AddDeltaToMovementQueue, and GetAverageMovementDelta. The first of these clears the array by setting all its elements to have coordinates of float.MinValue. We can look for this value when later processing the array and ignore any elements that have not been updated. ClearMovementQueue is called each time a new touch point is established with the screen.
AddDeltaToMovementQueue shifts all existing array elements down by one position and adds the provided delta to the end, as shown in Listing 4-16. This ensures that we always have the most recent delta values contained within the array, with older values being discarded. AddDeltaToMovementQueue is called each time we receive a touch point update with a state of Moved, with the delta vector calculated as described in the previous section.
Example 4.16. Adding new delta values to the movement queue
private void AddDeltaToMovementQueue(Vector2 delta)
{
    // Move everything one place up the queue
    for (int i = 0; i < _movementQueue.Length - 1; i++)
    {
        _movementQueue[i] = _movementQueue[i + 1];
    }
    // Add the new delta value to the end
    _movementQueue[_movementQueue.Length - 1] = delta;
}
Finally, the GetAverageMovementDelta function calculates the average of the values stored within the array, as shown in Listing 4-17. Any items whose values are still set to float.MinValue are ignored. The returned vector is ready to be passed into the ProcessFlick function. Of course, the movement array is storing deltas in distance-per-update format, so we have no need to divide by the update interval as we did for gestures. GetAverageMovementDelta is called (along with ProcessFlick) when a touch point is detected with a state of Released.
Example 4.17. Calculating the average of the last five delta values
private Vector2 GetAverageMovementDelta()
{
    Vector2 totalDelta = Vector2.Zero;
    int totalDeltaPoints = 0;

    for (int i = 0; i < _movementQueue.Length; i++)
    {
        // Is there something in the queue at this index?
        if (_movementQueue[i].X > float.MinValue)
        {
            // Add to the total movement
            totalDelta += _movementQueue[i];
            // Increment the number of points added
            totalDeltaPoints += 1;
        }
    }
    // Divide the accumulated vector by the number of elements
    // to retrieve the average
    return (totalDelta / totalDeltaPoints);
}
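As an aside, in a language with a fixed-length double-ended queue available, the shift-and-average logic can be expressed more directly. The Python sketch below (hypothetical names) is an equivalent of these functions using collections.deque with maxlen, so old deltas are discarded automatically and no float.MinValue sentinel is needed; it also guards against averaging an empty queue.

```python
from collections import deque

movement_queue = deque(maxlen=5)  # old deltas fall off the front automatically

def add_delta_to_movement_queue(delta):
    movement_queue.append(delta)

def get_average_movement_delta():
    if not movement_queue:          # guard: a tap with no movement
        return (0.0, 0.0)
    n = len(movement_queue)
    return (sum(d[0] for d in movement_queue) / n,
            sum(d[1] for d in movement_queue) / n)
```

Pushing six deltas leaves only the most recent five in the queue, and the average is taken over those.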
The main Update loop for the raw input example is shown in Listing 4-18. You will see here the situations that cause it to deselect objects, select objects, and reset the movement queue (when a new touch point is made); drag the objects and add their deltas to the movement queue (when an existing touch point is moved); and calculate the average and process the object flick (when a touch point is released).
Example 4.18. The update code for selecting, dragging, and flicking objects using raw touch data
// Get the raw touch input
TouchCollection touches = TouchPanel.GetState();

// Is there a touch?
if (touches.Count > 0)
{
    // What is the state of the first touch point?
    switch (touches[0].State)
    {
        case TouchLocationState.Pressed:
            // New touch so select the objects at this position.
            // First clear all existing selections
            DeselectAllObjects();
            // Then select all touched sprites
            SelectAllMatches(touches[0].Position);
            // Clear the movement queue
            ClearMovementQueue();
            break;

        case TouchLocationState.Moved:
            // Drag the objects. Make sure we have a previous position
            TouchLocation previousPosition;
            if (touches[0].TryGetPreviousLocation(out previousPosition))
            {
                // Calculate the movement delta
                Vector2 delta = touches[0].Position - previousPosition.Position;
                ProcessDrag(delta);
                // Add the delta to the movement queue
                AddDeltaToMovementQueue(delta);
            }
            break;

        case TouchLocationState.Released:
            // Flick the objects by the average queue delta
            ProcessFlick(GetAverageMovementDelta());
            break;
    }
}
Try flicking the objects in each of the two DragAndFlick projects. The behavior of this operation is much more consistent between the two than it was for dragging. Also try experimenting with different friction values and see how this affects the motion of the objects when they are flicked.
When designing the input mechanisms for your game, always be aware that people will use their fingers to control things. Unlike stylus input that was commonly used on earlier generations of mobile devices, fingers are inherently inaccurate when it comes to selecting from small areas on the screen.
With a little planning, you can help the user to have a comfortable experience despite this limitation; without any planning, you can turn your game into an exercise in frustration! If you have lots of objects that can be selected in a small area, give some thought to how you can help the user to select the object they actually desire rather than having them continually miss their target.
One option is to allow users to hold their finger on the screen and slide around to select an object rather than simply tapping an object. As they slide their finger, a representation of the selected object can be displayed nearby to highlight the current selection (which, of course, will be obscured by the finger). Once users have reached the correct place, they can release contact, happy that they have picked the object they desired.
Another possibility is to magnify the area of the screen surrounding the touch point, making all the objects appear larger. Users can then easily select the object they want, at which point the magnified area disappears.
Finger-friendly input options don't need to involve a lot of additional work, especially if they are planned and implemented early in a game's development, and it is definitely a good idea to avoid putting off your target audience with fiddly and unpredictable input mechanisms wherever possible.
There are two different areas where we might consider reading some form of keyboard input from the user: for controlling a game (by using the cursor keys, for example) and for text input (perhaps to enter a name in a high-score table).
The first of these requires the presence of a hardware keyboard. Keyboards can make a huge difference to some applications such as when taking notes or writing e-mail, and they can be useful for gaming, too.
Some of your users will have such a keyboard, and others (probably the majority) will not. For this reason, it is strongly advised not to make having a hardware keyboard a requirement of your game. By all means allow the keyboard to enhance the gaming experience, but please do ensure that it still works for those users who have only a touch screen for control.
For text input, users can type on a hardware keyboard if they have one, or use the onscreen keyboard known as the Soft Input Panel (SIP) if they do not. Both methods produce the same end result from the perspective of your game: it can ask for some text input, which it receives from the user. Exactly how the user enters it is not something that your game needs to worry about.
Let's take a look at how to interact with hardware keyboards and how to get the user to enter some text into your game.
Just as XNA provides the TouchPanel object to allow us to read input from the touch screen, it also provides the Keyboard object to allow keyboard input to be read. It provides a single method, GetState, which provides a snapshot of the current keyboard activity. Just as with the TouchPanel, this object allows us to poll the keyboard state rather than use an event-based model such as the one you might be familiar with if you have spent time in WinForms development.
GetState returns a KeyboardState object from which we can read whatever information we need to control a game. There are three methods that can be called on the KeyboardState object:
GetPressedKeys returns an array of Keys values from which the complete set of currently pressed keys can be read. If you want to allow a large range of keys to be used (such as to read the input when the user is typing, for example), this is probably the best method for querying the keyboard. Note that the array contains simple keycodes and nothing more: no information about pressed or released states is contained within this data.
IsKeyDown returns a boolean indicating whether a specific key (provided as a parameter) is currently pressed down.
IsKeyUp is the reverse of IsKeyDown, checking to see whether a specific key is not currently pressed.
All these functions operate using the XNA-provided Keys enumeration. This enumeration includes a huge range of keys that might potentially be pressed, even though some of them won't exist on any given target device. The alphabetical characters have values in the enumeration with names from A to Z; because the enumeration deals only with pressed keys rather than typed characters, there is no provision for lowercase letters. The numeric digits are represented by the names D0 to D9 (enumerations do not allow names starting with digits, so a prefix had to be applied to make these names valid). The cursor keys are represented by the values named Up, Down, Left, and Right.
If you are unsure about which enumeration item corresponds to a key on the keyboard, add some code to your Update function that waits for GetPressedKeys to return one or more items, and set a breakpoint that triggers when this condition is met. You can then interrogate the contents of the Keys array to see which keycode has been returned.
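A minimal sketch of this technique (not part of the example project; Debugger.Break is used here in place of a manually placed breakpoint):

```csharp
// In the Update function: halt in the debugger as soon as any key is
// reported as pressed, so the contents of the keys array can be inspected.
Keys[] pressed = Keyboard.GetState().GetPressedKeys();
if (pressed.Length > 0)
{
    System.Diagnostics.Debugger.Break();
}
```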
The example project KeyboardInput provides a very simple implementation of moving a sprite around the screen using the cursor keys. The code to perform this, taken from the Update function, is shown in Listing 4-19.
Example 4.19. Using the keyboard to move a sprite
// Move the sprite?
if (Keyboard.GetState().IsKeyDown(Keys.Up)) sprite.PositionY -= 2;
if (Keyboard.GetState().IsKeyDown(Keys.Down)) sprite.PositionY += 2;
if (Keyboard.GetState().IsKeyDown(Keys.Left)) sprite.PositionX -= 2;
if (Keyboard.GetState().IsKeyDown(Keys.Right)) sprite.PositionX += 2;
If you are lucky enough to have a device with a hardware keyboard, you can try the example and see how it responds. If you don't have such a device, you can use the emulator to experiment instead. But you will notice something that seems to be a bit of a problem here: pressing the cursor keys in the emulator has no effect; the sprite doesn't move at all.
The reason for this is that the emulator disables keyboard input by default. Three keys on the PC keyboard change this: the Page Up key enables keyboard input, Page Down disables it, and the Pause/Break key toggles its enabled state.
Press the Page Up key and then try the cursor keys again. The sprite should spring into life (if it doesn't, click within the screen area of the emulator and press Page Up again). Knowing how to enable the keyboard is useful in other areas of the emulator, too; it makes the web browser much easier to use, for example!
Polling for input in this way is ideal for games. There is no keyboard repeat delay between the first report of a key being pressed and subsequent reports for the same key, and no repeat speed to worry about that would cause delays between reports of a held key even after the initial delay had expired. Polling gives us a true and accurate picture of each key state every time we ask for it.
It also allows us to easily check for multiple keys pressed together. If you try pressing multiple cursor keys, you will see that the sprite is happy to move diagonally. This is perfect for gaming, in which pressing multiple keys together is a common requirement.
If you need to monitor for the point in time where the user has just pressed or released a key, XNA's Keyboard object doesn't provide any information to this effect for you to use. It is easy to work this out with a little extra code, however.
Once the keyboard state has been read, the returned KeyboardState structure keeps its values even after the actual keyboard state has moved on. By keeping a copy of the previous state and comparing it with the current state, we can tell when a key has been pressed or released: if it was up last time but is down now, the key has just been pressed; if it was down last time but is up now, it has just been released.
We can easily use this approach to look for individual keys, or we can loop through the array returned from GetPressedKeys to find all keys that were pressed or released since the last update. Listing 4-20 shows how details of all pressed and released keys can be printed to the Debug window. This code can also be found within the KeyboardInput example project.
Example 4.20. Checking for pressed and released keys
// Read the current keyboard state
currentKeyState = Keyboard.GetState();

// Check for pressed/released keys.
// Loop for each possible pressed key (those that are pressed this update)
Keys[] keys = currentKeyState.GetPressedKeys();
for (int i = 0; i < keys.Length; i++)
{
    // Was this key up during the last update?
    if (_lastKeyState.IsKeyUp(keys[i]))
    {
        // Yes, so this key has been pressed
        System.Diagnostics.Debug.WriteLine("Pressed: " + keys[i].ToString());
    }
}

// Loop for each possible released key (those that were pressed last update)
keys = _lastKeyState.GetPressedKeys();
for (int i = 0; i < keys.Length; i++)
{
    // Is this key now up?
    if (currentKeyState.IsKeyUp(keys[i]))
    {
        // Yes, so this key has been released
        System.Diagnostics.Debug.WriteLine("Released: " + keys[i].ToString());
    }
}

// Store the state for the next loop
_lastKeyState = currentKeyState;
There are two important things to remember when monitoring for pressed and released keys. First, you must check them during every single update if you want to avoid missing key state changes. Second, you should query the keyboard state only once per update, storing the retrieved state for use during the next update. Otherwise, the state might change between the individual calls to GetState, resulting in key state changes being overwritten and lost.
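If you find yourself watching many keys this way, the bookkeeping from Listing 4-20 can be wrapped into a small helper class. The following sketch is our own illustration (the KeyStateTracker name and API are not part of XNA or the example project); it follows both rules above by reading the keyboard exactly once per update and remembering the previous snapshot:

```csharp
class KeyStateTracker
{
    private KeyboardState _last;
    private KeyboardState _current;

    // Call exactly once at the start of each game update, before any queries.
    public void Update()
    {
        _last = _current;
        _current = Keyboard.GetState();
    }

    // True only on the single update in which the key went down.
    public bool WasJustPressed(Keys key)
    {
        return _current.IsKeyDown(key) && _last.IsKeyUp(key);
    }

    // True only on the single update in which the key went up.
    public bool WasJustReleased(Keys key)
    {
        return _current.IsKeyUp(key) && _last.IsKeyDown(key);
    }
}
```

Game code would call Update once at the top of each game update, then query WasJustPressed(Keys.Space) and similar wherever needed.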
The Keyboard object provides a simple way to read the keyboard for controlling a game, but if you want your user to enter text, it is generally not the best approach. There are two main reasons: it takes quite a lot of code to process all the keyboard state changes and build up the user's text string, and (more importantly) users without keyboards will be completely unable to enter any text at all.
We can instead prompt the user to enter text using the SIP, which resolves both of the issues with reading the keyboard directly: the code is simple to develop and use, and the onscreen keyboard means that users relying on the touch screen for text entry can still continue to play your game. An example of this input dialog can be found in the SoftInputPanel example project.
To initiate text entry, we use the XNA Guide class and call its static BeginShowKeyboardInput method. This causes the screen to be taken over by a text input box, with the SIP displayed for touch screen users. We can provide a title for the input dialog, a message for the user, and a default value to display within the text input area. A screenshot of the input screen can be seen in Figure 4-8.
The code required to initiate the input panel shown in Figure 4-8 is shown in Listing 4-21. It first ensures that the keyboard is not already visible and then opens the input window for the user to use.
Example 4.21. Displaying the text entry dialog window
// Make sure the input dialog is not already visible
if (!(Guide.IsVisible))
{
    // Show the input dialog to get text from the user
    Guide.BeginShowKeyboardInput(PlayerIndex.One, "High score achieved",
        "Please enter your name", "My name", InputCallback, null);
}
From left to right, the parameters for BeginShowKeyboardInput are as follows:
player. This is the number of the player for whom the dialog is to be displayed. Because we have only single-player support on the phone, this will always be set to PlayerIndex.One.
title. The title will be displayed at the top of the input dialog.
description. The description will be shown below the title in smaller text.
defaultText. An initial value to display in the input field.
callback. The address of a function that will be called once the input dialog is complete.
state. A user-provided object for the input dialog. This can be passed as null.
When the input dialog completes (by the user entering some text and clicking the OK button, or by clicking the Cancel button), XNA will call into the function specified in the callback parameter. This function must be declared with a void return type and a single parameter of type IAsyncResult.
When the function is called, it can read the user-entered string by calling the Guide.EndShowKeyboardInput method, passing in the IAsyncResult object. This returns either a string containing the entered text, or null if the input dialog was canceled. Listing 4-22 shows the implementation of the callback function from the SoftInputPanel example.
Example 4.22. A callback function for the text entry dialog window
void InputCallback(IAsyncResult result)
{
    string sipContent = Guide.EndShowKeyboardInput(result);

    // Did we get some input from the user?
    if (sipContent != null)
    {
        // Store it in the text object
        ((TextObject)GameObjects[0]).Text = "Your name is " + sipContent;
    }
    else
    {
        // The SIP was canceled
        ((TextObject)GameObjects[0]).Text = "Name entry was canceled.";
    }
}
One thing to be aware of when using the input dialog is that it is not synchronous. You might expect that your game will stop running while the dialog is open, but this is not the case: the game continues to run in the background the whole time.
There might be some useful aspects to this—for example, it will allow you to keep your music and sound effects generating (a subject we will be covering in the next chapter). In terms of the game, however, you might want to have this pause while the dialog is open.
We can achieve this very easily by checking the Guide.IsVisible property (which you already saw in Listing 4-21). If it returns true, skip updating the game objects and any other game logic during that call to Update. Once the property returns false, the dialog has closed and updates can be resumed.
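As a sketch, the pause might look like this inside the game class (UpdateAll here stands in for whatever method your game uses to update its game objects; it is not from the example project):

```csharp
protected override void Update(GameTime gameTime)
{
    // While the input dialog is open, skip the game logic so that the
    // game is effectively paused in the background.
    if (!Guide.IsVisible)
    {
        UpdateAll(gameTime);  // hypothetical game object update call
    }

    base.Update(gameTime);
}
```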
An accelerometer is a sensor contained within the phone that can report the device's current orientation and motion. In other words, it can tell if the device is lying flat on a desk, being held upright, rotated onto its side, or is in any position in between. Accelerometers have become common in mobile devices over the last couple of years and are a required component of all Windows Phone 7 devices, so they can be used in games as another interesting input device.
This information presents all sorts of opportunities for games. If we can tell the angle of the device, we can use it as a control mechanism. Instead of touching the screen or pressing a button to move objects on the screen, the player can simply tilt the device in whatever direction is needed to affect the gameplay.
In this section we will investigate how to read and interpret the data from the Windows Phone 7 device accelerometer. The code presented here can be found in the Accelerometer example project, an image from which is shown in Figure 4-9.
The classes that provide access to the accelerometer are not actually part of XNA, but are instead provided via one of the standard Windows Phone 7 libraries: Microsoft.Devices.Sensors. In order to access it, you must first add a reference to this assembly.
Once the reference has been added, we can add a using directive, as shown in Listing 4-23, to save having to fully qualify the namespace each time we want to refer to the sensor classes.
Example 4.23. Adding a using directive for the Microsoft.Devices.Sensors namespace
using Microsoft.Devices.Sensors;
Next we declare a class-level variable to hold an instance of the Accelerometer object, as shown in Listing 4-24. This will be created during initialization and will remain for the duration of the game.
Example 4.24. Declaring a class variable to hold the Accelerometer object instance
private Accelerometer _accelerometer;
With these code sections in place, we can now instantiate and initialize the Accelerometer object inside the game's Initialize method. The code required for this is shown in Listing 4-25. After creating an instance, the code adds an event handler for the accelerometer's ReadingChanged event. Unlike touch panel and keyboard input, the accelerometer provides data updates using an event rather than allowing us to poll it. In the event handler we can store the most recent reading, however, and then query it on demand whenever we want, which gives the same effect from the perspective of our game code.
Once the object has been created and set up, we call its Start method so that it begins feeding information to us.
Example 4.25. Instantiating and initializing the accelerometer object
// Instantiate the accelerometer
_accelerometer = new Accelerometer();
// Add an event handler
_accelerometer.ReadingChanged += AccelerometerReadingChanged;
// Start the accelerometer
_accelerometer.Start();
Finally we need to provide the AccelerometerReadingChanged event handler that we registered with the accelerometer object. This is very simple and is shown in Listing 4-26. The accelerometer provides three values, consisting of a reading for the X, Y, and Z axes. These are stored into a Vector3 structure exposed by the class-level AccelerometerData property.
Example 4.26. Storing the data from the accelerometer each time it provides an updated reading.
void AccelerometerReadingChanged(object sender, AccelerometerReadingEventArgs e)
{
    AccelerometerData = new Vector3((float)e.X, (float)e.Y, (float)e.Z);
}
The Vector3 structure is very similar to the Vector2 structure that we've been using during the last few chapters, except that it stores an additional Z component to represent the third dimension. We will be using Vector3 structures a lot more in the next chapter once we start working with 3D graphics.
We can now read the x, y, and z axis readings from the accelerometer. Together they provide a reading of the acceleration of the device relative to freefall. What exactly does that mean?
First, let's look at the vector itself. It contains three properties that can be used to interrogate the device orientation: X, Y, and Z. Each of them is a float value that represents the movement of the device in the real world along the appropriate axis. If the device is lying flat and face up on a table, the values returned for the vector will be approximately as follows:
X = 0, Y = 0, Z = −1
The value −1 represents the full force of gravity applied along the appropriate axis. The x axis runs between the left and right edges of the device, the y axis between the top and bottom of the device, and the z axis between the front and back of the device. Because gravity is pulling the device toward its back while it lies flat on the table, the accelerometer shows a value of −1 on the z axis (−1 on this axis represents the back of the device, whereas +1 represents the front, which is what would appear if the device were put face down on the table).
This z value reading is very useful because it means we always get a movement reading relative to the force of gravity, even when the device is not in motion. By working out which of the x, y, and z axes the reading applies to, we can therefore work out which way up the device is.
As you've seen, with the device face up, we get a negative reading on the z axis. With the device upright, the accelerometer returns a value of −1 on the y axis (upright but upside down, it returns the opposite value, +1). Turn the device on its side and you'll get a value between −1 and 1 on the x axis, depending on which way the device has been rotated. All orientations between these extremes return values spread across the three axes.
Because our screen is only two-dimensional, we can for the most part ignore the value on the z axis. We can instead read out the x and y values and apply them as acceleration to objects in the game. When the device is flat on the desk, x and y are 0, so our objects don't move at all. Tip the device up, and the x and y values change based on the tilt angle, providing acceleration for our objects—the steeper the tilt, the faster the acceleration.
The Accelerometer project in the accompanying downloads includes all the code required to move a ball around on the screen under control of the accelerometer. It also displays the vector values on the screen, so you can easily see the data coming back from the accelerometer as you change the orientation of your device.
The project contains a single game object: BallObject, which is mostly the same as the objects we have looked at in earlier projects. The ball exposes a Velocity vector, and in its Update method it adds this to the ball position, bouncing if the edges of the screen are hit.
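To give an idea of its shape, here is a simplified sketch of such an Update method. The property names follow the conventions of the earlier chapters, but this is not the exact project code:

```csharp
// Simplified sketch of a BallObject.Update method (an illustration, not
// the example project's exact code).
public override void Update(GameTime gameTime)
{
    base.Update(gameTime);

    // Apply the velocity to the ball position
    PositionX += Velocity.X;
    PositionY += Velocity.Y;

    // Bounce off the left/right screen edges by reversing the x velocity
    if (PositionX < 0 || PositionX > _game.Window.ClientBounds.Width)
    {
        Velocity = new Vector2(-Velocity.X, Velocity.Y);
    }
    // Bounce off the top/bottom screen edges by reversing the y velocity
    if (PositionY < 0 || PositionY > _game.Window.ClientBounds.Height)
    {
        Velocity = new Vector2(Velocity.X, -Velocity.Y);
    }
}
```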
The one new addition is the use of the accelerometer data within the Update code. The ball retrieves the data from the game class, adds the accelerometer's x axis reading to its horizontal velocity, and subtracts the y axis reading from its vertical velocity, as shown in Listing 4-27. This is what makes the ball move in response to the device being rotated. As you can see, we observe only the x and y axis readings, because the z axis doesn't tell us anything useful in this 2D environment.
Example 4.27. Applying the accelerometer data to the ball velocity
// Add the accelerometer vector to the velocity
Velocity += new Vector2(_game.AccelerometerData.X, -_game.AccelerometerData.Y);
You now have all the code that is required to simulate a ball rolling around a virtual desktop. Try running the project on a device and see how the ball reacts to the device being tilted. Observe the way the ball moves faster when the device is tilted at a steeper angle.
There are a couple of additional things to be aware of when using the accelerometer. First, unlike touch input, the returned vector is not automatically rotated to match the orientation of the screen. The values will be completely unaffected by rotating the device, so you will have to compensate for this in your game code if your game plays in a non-portrait orientation.
Second, if your game has been configured to allow multiple orientations (as we discussed back in Chapter 2), XNA will automatically rotate your display whenever it detects that the device has been rotated to a different orientation. This might be useful for non-accelerometer-based games, but if the screen flips upside down every time the player tries to roll a ball toward the top of the screen, it will quickly become very annoying. To cure this, ensure that you explicitly specify a single supported orientation when working with the accelerometer.
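For example, locking the game to portrait takes a single line in the game class constructor (shown here with the graphics device manager field named _graphics; your project may use a different name):

```csharp
// Support only portrait so that XNA never rotates the display while
// the player tilts the device to control the game.
_graphics.SupportedOrientations = DisplayOrientation.Portrait;
```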
If you want to work with an accelerometer in the Windows Phone 7 emulator, there is an obvious problem: the emulator has no access to an accelerometer, so it always returns a vector containing the values (0, 0, −1). This makes it very hard to test your game anywhere other than on a real device.
There have been various clever workarounds for this problem suggested on the Internet (including hooking a Nintendo Wii controller up to the PC and using its accelerometer), but they involve a fairly considerable amount of effort to get working. We can take advantage of a couple of much simpler options: use the touch screen to simulate accelerometer data or use the keyboard to simulate rotation of the device.
Either way, we want to ensure that this happens only when running on the emulator, not on a real device. Microsoft has provided a way to determine whether we are running in the emulator; it can be accessed by adding a reference to the Microsoft.Phone DLL. Once this has been added, we can query the Microsoft.Devices.Environment.DeviceType property, which will return either Device or Emulator as appropriate.
Having determined that we are running in the emulator, we can apply either of the two accelerometer simulation methods described. The first, using the touch screen, is shown in Listing 4-28. It calculates the position of the touch point across the width and height of the screen and uses this position to derive a value for the AccelerometerData vector. It sets just the x and y axis values, leaving the z value permanently at 0.
Example 4.28. Simulating the accelerometer using touch screen input
void AccelerometerReadingChanged(object sender, AccelerometerReadingEventArgs e)
{
    if (Microsoft.Devices.Environment.DeviceType == Microsoft.Devices.DeviceType.Device)
    {
        AccelerometerData = new Vector3((float)e.X, (float)e.Y, (float)e.Z);
    }
    else
    {
        // Use the touch screen to simulate the accelerometer
        float x, y;
        TouchCollection touches;
        touches = TouchPanel.GetState();
        if (touches.Count > 0)
        {
            x = (touches[0].Position.X - Window.ClientBounds.Width / 2)
                / (Window.ClientBounds.Width / 2);
            y = -(touches[0].Position.Y - Window.ClientBounds.Height / 2)
                / (Window.ClientBounds.Height / 2);
            AccelerometerData = new Vector3(x, y, 0);
        }
    }
}
This code provides an intuitive method for providing simulated accelerometer data, but has the problem that it requires touch screen interaction, which could interfere with other parts of your game that rely on the touch screen. The second method avoids this problem by using the keyboard cursor keys. They are much less likely to be used in a game and so reduce the likelihood of interference.
The problem with using the cursor keys like this is that it is much harder to keep track of which way the simulated accelerometer vector is pointing. For this reason, it is very useful to add a text object to the game and use it to display the content of the AccelerometerData property on the screen. You can then refer to this to get your bearings. The keyboard-based simulation code is shown in Listing 4-29.
Example 4.29. Simulating the accelerometer using keyboard input
void AccelerometerReadingChanged(object sender, AccelerometerReadingEventArgs e)
{
    if (Microsoft.Devices.Environment.DeviceType == Microsoft.Devices.DeviceType.Device)
    {
        AccelerometerData = new Vector3((float)e.X, (float)e.Y, (float)e.Z);
    }
    else
    {
        // Use the cursor keys on the keyboard to simulate the accelerometer
        Vector3 accData = AccelerometerData;
        KeyboardState keyState = Keyboard.GetState();
        if (keyState.IsKeyDown(Keys.Left)) accData.X -= 0.05f;
        if (keyState.IsKeyDown(Keys.Right)) accData.X += 0.05f;
        if (keyState.IsKeyDown(Keys.Up)) accData.Y += 0.05f;
        if (keyState.IsKeyDown(Keys.Down)) accData.Y -= 0.05f;

        // Ensure that the data stays within the valid bounds of -1 to 1 on each axis
        accData.X = MathHelper.Clamp(accData.X, -1, 1);
        accData.Y = MathHelper.Clamp(accData.Y, -1, 1);

        // Put the vector back into the AccelerometerData property
        AccelerometerData = accData;
    }

    // Display the accelerometer data in a text object
    _accText.Text = "Accelerometer data: "
        + AccelerometerData.X.ToString("0.000") + ", "
        + AccelerometerData.Y.ToString("0.000") + ", "
        + AccelerometerData.Z.ToString("0.000");
}
Both of these mechanisms are present in the Accelerometer example project, though the keyboard mechanism is commented out. Try swapping between them to see how each one feels.
So now that we are comfortable with all the options for reading input from the user, let's use them to add to the Cosmic Rocks game that we started building in the last chapter. The rest of this chapter will focus on using input techniques to turn the project into an actual playable game.
There are three actions that we need to be able to support: shooting in a specified direction, firing the ship thrusters to move the spaceship forward, and hitting the hyperspace button to randomly transport the player to another location on the screen.
There are various touch-based mechanisms that we could use to implement these actions. It would seem sensible to make tapping the screen the instruction for the spaceship to shoot. Thrusting has various options, which include allowing the user to drag the ship or to hold a point on the screen to indicate that the ship should fly toward that position. Hyperspace needs to be easily accessible, but not interfere with either of the other controls.
After some experimentation, the following controls were found to be the most natural-feeling:
Shoot: Tap the screen. The spaceship will rotate toward the touch point and shoot.
Thrust: Hold contact with the screen. The spaceship will rotate toward the touch point and thrust forward.
Hyperspace: Pinch the screen. Using multitouch gives a clear indication of the player's intention without having to distinguish hyperspace requests from moving or shooting.
Let's see how each of these controls is implemented and then build the rest of the game. There is quite a lot of new code involved, and not all of it is featured here for space reasons. The full project can be found in the CosmicRocksPartII example project from the accompanying downloads for this chapter.
Tapping the screen actually causes two things to happen: the ship will begin to rotate to face the tapped position and it will shoot a "bullet" (or an energy bolt or whatever we decide to make it). It will shoot in whichever direction it is facing at the time the screen is tapped, which might not be toward the touch point, but repeated taps of the screen will ultimately get the bullets going in the direction the player wants.
As we observed back in the gestures discussion, using the Tap gesture is not quite as responsive as using raw touch data because it misses some of the user's touches. Because it is important for our game to feel as responsive as possible, we will bypass gestures and use raw touch data instead. We can easily tell that the user has touched the screen by waiting for a touch point with a state of Pressed.
The code that takes care of this can be found within the SpaceshipObject.Update method; the relevant portion of it is shown in Listing 4-30.
Example 4.30. Detecting and handling taps on the screen
// Is the player tapping the screen?
TouchCollection tc = TouchPanel.GetState();
if (tc.Count == 1)
{
    // Has the first touch point just been touched?
    if (tc[0].State == TouchLocationState.Pressed)
    {
        // Yes, so rotate to this position and fire
        RotateToFacePoint(tc[0].Position);
        // Shoot a bullet in the current direction
        FireBullet();
        // Note the time so we can detect held contact
        _holdTime = DateTime.Now;
    }
}
We are reading input values in the SpaceshipObject class because this is where we need to actually process them. You can process input wherever you like, but you must ensure that all TouchPanel input is retrieved just once per update, and that all gestures are processed just once per update, too. Reading either of these more than once per update will result in inputs being picked up in one place but not the other, which can be a very confusing problem to track down and debug.
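One simple way to honor this rule is to read the touch panel a single time in the game class and let every object consume the cached result. This is only a sketch of the idea (the Touches property and UpdateAll call are our own inventions, not part of the example project):

```csharp
// In the game class: a shared snapshot of the touch panel, read exactly
// once per update.
public TouchCollection Touches { get; private set; }

protected override void Update(GameTime gameTime)
{
    // Read the touch state a single time...
    Touches = TouchPanel.GetState();

    // ...then update the game objects, which query the cached Touches
    // property rather than calling TouchPanel.GetState() themselves.
    UpdateAll(gameTime);  // hypothetical game object update call

    base.Update(gameTime);
}
```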
On detection of a new touch, the code calls two functions: RotateToFacePoint and FireBullet. It also stores the current time in a class variable called _holdTime, but we'll look at that in more detail when we discuss ship movement in the next section.
Each of the two called functions is simple in concept, but deserves a little more exploration of its implementation.
To rotate to face a particular angle, we need to use some trigonometry. For those of you not of a mathematical persuasion, don't panic! This really isn't too complicated, thanks in part to some of the functionality provided by XNA.
In order to rotate to a point, we first need to find the position of the point relative to the spaceship's own position. We have a Vector2 for the spaceship and another for the touch point, so we can simply subtract the spaceship position from the touch point. This is the first thing that the RotateToFacePoint function does; its beginning is shown in Listing 4-31. If it finds that the touch point is exactly the same as the spaceship position, it returns without doing anything, because there is no way to know which direction we should turn in order to face a point we are already standing on.
Example 4.31. Finding the direction of the touch point relative to the spaceship position
private void RotateToFacePoint(Vector2 point)
{
    // Find the angle between the spaceship and the specified point.
    // First find the position of the point relative to the position of the spaceship
    point -= Position;
    // If the point is exactly on the spaceship, ignore the touch
    if (point == Vector2.Zero) return;
Now we are ready to find the angle toward which we need to face. Before doing this, the code ensures that the current sprite Angle property is greater than or equal to 0 radians and no more than 2 PI radians (360 degrees). The reason is that we can actually end up rotating outside of this range, as you will see shortly.
Once this has been established, the sprite angle is converted into degrees. Personally I find working in radians quite uncomfortable and I much prefer degrees, so this makes the code much easier to understand. There is, of course, a small performance hit for performing this conversion, so if you can work in radians (or convert your code back to using radians once you have it working), it will run a little faster as a result.
Now for the trigonometry. The .NET Math class provides a function called Atan2, which returns the angle through which we need to rotate from 0 radians in order to face the specified point: exactly what we need. Because we now have the touch point in object space, we can simply find the angle needed to face the point and we are there (well, nearly, anyway). To keep everything in the same measurements, we convert the return value from Atan2 into degrees as well. The code is shown in Listing 4-32.
Example 4.32. Finding the angle required to face the touch point
// Ensure that the current angle is between 0 and 2 PI
while (Angle < 0) { Angle += MathHelper.TwoPi; }
while (Angle > MathHelper.TwoPi) { Angle -= MathHelper.TwoPi; }

// Get the current angle in degrees
float angleDegrees;
angleDegrees = MathHelper.ToDegrees(Angle);

// Calculate the angle between the ship and the touch point, convert to degrees
float targetAngleDegrees;
targetAngleDegrees = MathHelper.ToDegrees((float)Math.Atan2(point.Y, point.X));
We have a little more work to do, however. XNA considers 0 degrees to point straight upward, and we generally expect angles to range from 0 to 360 degrees. Atan2, however, returns an angle in the range of −180 to 180 degrees, with 0 degrees pointing to the right instead of up.
To map this angle into the same space that XNA uses, we can first add 90 to the angle we have calculated. This puts the 0 degree angle pointing up again and results in a range of −90 to 270 degrees. To get back to a positive range, we check whether the value is less than 0 and add 360 if it is. This finally results in an XNA-aligned angle in the range of 0 to 360. Listing 4-33 contains the calculations for this.
Example 4.33. Aligning the Atan2 angle with the angle system used by XNA
// XNA puts 0 degrees upwards, whereas Atan2 returns it facing right, so add 90
// degrees to rotate the Atan2 value into alignment with XNA
targetAngleDegrees += 90;

// Atan2 returns values between -180 and +180, so having added 90 degrees we now
// have a value in the range -90 to +270. In case we are less than zero, add
// 360 to get an angle in the range 0 to 360.
if (targetAngleDegrees < 0) targetAngleDegrees += 360;
So do we now have an angle that we can turn to face? Well, yes we do, but making the spaceship jump immediately toward the touch point feels very unnatural. It is much nicer to get it to rotate toward the touch point, so let's transition from the current angle to the target angle.
To do this we will simply check to see whether the target angle is less than or greater than the current sprite angle. If they are not equal, we will move the sprite angle toward the target angle until they meet, at which point we have finished rotating.
There is a final complication here, however. If the current spaceship angle is at 350 degrees, and the calculated target angle is at 10 degrees, the approach that we have just discussed will cause the spaceship to rotate all the way around through 340 degrees in a counterclockwise direction, whereas it would be much more efficient for it to rotate just 20 degrees clockwise. This is illustrated in Figure 4-10. In practice, having the ship rotate like this is very jarring and will be very frustrating for the player who asked for only a minor rotation!
To prevent this from happening, we will check to see whether the target angle is more than 180 degrees away from the current spaceship angle. If it is over 180 degrees above the spaceship angle, we subtract 360 from the target angle so that the spaceship rotates the other way (which will be less than 180 degrees and is thus the shorter angle). If it is over 180 degrees below the spaceship angle, we do the reverse and add 360 degrees to the target angle.
This will ensure that we always take the short route to the desired angle. It might also result in a target angle that is above 360 degrees or below 0 degrees, however. This itself doesn't cause any problems, but it is the reason we ensure that the Angle value is between 0 and 2 PI back in Listing 4-32.
Listing 4-34 shows the remainder of the RotateToFacePoint function. Once the target angle has been calculated as described, it is converted back to radians and stored in the class-level _targetAngle variable.
Example 4.34. Ensuring that the rotation always takes the short route, and storing the target angle ready for use
// Is the target angle over 180 degrees less than the current angle?
if (targetAngleDegrees < angleDegrees - 180)
{
    // Yes, so instead of rotating the whole way around to the left,
    // rotate the smaller distance to the right instead.
    targetAngleDegrees += 360;
}
// Is the target angle over 180 degrees more than the current angle?
if (targetAngleDegrees > angleDegrees + 180)
{
    // Yes, so instead of rotating the whole way around to the right,
    // rotate the smaller distance to the left instead.
    targetAngleDegrees -= 360;
}

// Store the calculated angle, converted back to radians
_targetAngle = MathHelper.ToRadians(targetAngleDegrees);
}
The end result of this is that the target angle has been calculated, taking the current angle into account, but the spaceship hasn't actually rotated at all. The rotation is performed in the SpaceshipObject.Update function, as shown in Listing 4-35. It rotates by 20 percent of the remaining angle difference, with the result that it rotates quickly at first and then more slowly as it approaches the desired angle. This gives a pleasingly smooth movement without too much delay getting to the target angle, even if the rotation is large.
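The 20 percent easing described above is easy to demonstrate in isolation. The following sketch is written in Python rather than the game's C#, purely to illustrate the arithmetic; the function name and numbers are hypothetical, not taken from the book's listing:

```python
def ease_rotation(angle, target_angle, factor=0.2):
    """Move angle toward target_angle by a fixed fraction of the remaining gap."""
    return angle + (target_angle - angle) * factor

# Successive updates close the gap quickly at first, then ever more slowly:
# 0 -> 18 -> 32.4 -> 43.9 -> ... approaching (but never overshooting) 90.
a = 0.0
for _ in range(10):
    a = ease_rotation(a, 90.0)
```

Because the step is proportional to the remaining difference, the rotation never overshoots the target, which is what makes the movement look smooth.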
The second function of tapping the screen is to fire a bullet. This is fairly simple to implement. We need a sprite that will initially appear at the same position and angle as the spaceship and will travel in the direction that the spaceship is facing.
The bullets are implemented in a new game object class, BulletObject. In addition to the standard SpriteObject properties, it also stores a movement vector that will be added to the position each update, and keeps track of the number of times the bullet has moved so that it can expire after it has traveled a certain distance.
To avoid having to create and destroy bullet objects each time one is fired or expired, we use the same approach as described for the ParticleObject in the previous chapter: we keep all the bullet objects that we create and mark them as inactive when they are no longer required. When a new object is required, we first look for an existing inactive object and create a new one only if no existing object can be found.
For this reason, all the volatile properties of the bullet are initialized in a function named InitializeBullet rather than in the class constructor, because the constructor cannot be used when a bullet object is recycled.
InitializeBullet expects the bullet position and angle to be passed as its parameters. They are placed directly into its Position and Angle properties. It then needs to calculate its movement vector so that the bullet travels forward each update. This is easily calculated from the bullet angle using some more basic trigonometry: the sine of the angle provides the horizontal distance to move, whereas the negated cosine provides the vertical distance (negated because y coordinates decrease toward the top of the screen). The code to perform this initialization is shown in Listing 4-36.
Example 4.36. Initializing a bullet object
internal void InitializeBullet(Vector2 Position, float Angle)
{
    // Initialize the bullet properties
    this.Position = Position;
    this.Angle = Angle;
    // Calculate the velocity vector for the bullet
    _velocity = new Vector2((float)Math.Sin(Angle), -(float)Math.Cos(Angle));
    // Mark the bullet as active
    IsActive = true;
    // Reset its update count
    _updates = 0;
}
Updating the bullet requires a few simple tasks. First, the _velocity vector is added to the sprite position (it is actually multiplied by 10 to make the bullet move faster). Then the position is checked against the edges of the window and moved to the opposite edge if it goes off the screen. The _updates count is incremented, and if it reaches the lifetime of the bullet (defined as 40 updates), the bullet is expired by setting its IsActive property to false.
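The per-update bullet logic just described can be sketched compactly. This is an illustrative Python version with hypothetical names, not the book's C# (it also uses a modulo to wrap at the screen edges, which is equivalent to the edge checks described above):

```python
def update_bullet(pos, velocity, updates, screen_w, screen_h,
                  speed=10, lifetime=40):
    """One bullet update: move, wrap at the screen edges, expire after a set count.

    Returns the new position, the new update count, and whether the bullet
    is still active.
    """
    x = (pos[0] + velocity[0] * speed) % screen_w   # wrap to the opposite edge
    y = (pos[1] + velocity[1] * speed) % screen_h
    updates += 1
    return (x, y), updates, updates < lifetime
```

For example, a bullet at x = 795 moving right on an 800-pixel-wide screen reappears near the left edge on the next update, and any bullet expires on its 40th update regardless of position.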
Finally, the bullet position is checked against each of the rocks. This is similar to the spaceship collision detection described in the last chapter, but is actually a little easier: because the bullet is very small, we can consider it as being just a single point rather than a rectangle. The collision check is therefore simply a matter of seeing whether the distance from the rock position to the bullet position is less than the rock size. The CheckForCollision function is shown in Listing 4-37.
Example 4.37. Checking to see if the bullet object has collided with a rock
private void CheckForCollision()
{
    int objectCount;
    GameObjectBase gameObj;
    RockObject rockObj;
    float rockSize;
    float rockDistance;

    // Loop backwards through the rocks as we may modify the collection when
    // a rock is destroyed
    objectCount = _game.GameObjects.Count;
    for (int i = objectCount - 1; i >= 0; i--)
    {
        // Get a reference to the object at this position
        gameObj = _game.GameObjects[i];
        // Is this a space rock?
        if (gameObj is RockObject)
        {
            // It is... Does its bounding rectangle contain the bullet position?
            rockObj = (RockObject)gameObj;
            if (rockObj.BoundingBox.Contains((int)Position.X, (int)Position.Y))
            {
                // It does... See if the distance is small enough for them to collide.
                // First calculate the size of the object
                rockSize = rockObj.SpriteTexture.Width / 2.0f * rockObj.ScaleX;
                // Find the distance between the two points
                rockDistance = Vector2.Distance(Position, rockObj.Position);
                // Is the distance less than the rock size?
                if (rockDistance < rockSize)
                {
                    // Yes, so we have hit the rock
                    rockObj.DamageRock();
                    // Destroy the bullet
                    IsActive = false;
                }
            }
        }
    }
}
Note how the loop across the game objects runs backward; this is because rock objects can be removed from the collection when they are destroyed. Removal reduces the collection size, which means that accessing elements at the end by their original index would go out of bounds. Looping backward ensures that removing a rock affects only the indexes of objects we have already processed, so we don't need to worry about this situation.
The BulletObject class now contains all the code it needs to be initialized, to move, to collide with rocks, and to expire when it has traveled a set distance. All that is left is the code to create the bullet in response to the player tapping the screen.
This is handled in the SpaceshipObject.FireBullet function, shown in Listing 4-38. It tries to retrieve a bullet object using the GetBulletObject function (which we will examine in a moment); if one is obtained, it calls its InitializeBullet function as already detailed in Listing 4-36.
Example 4.38. Firing a bullet from the player's ship
private void FireBullet()
{
    BulletObject bulletObj;

    // Try to obtain a bullet object to shoot
    bulletObj = GetBulletObject();
    // Did we find one?
    if (bulletObj == null)
    {
        // No, so we can't shoot at the moment
        return;
    }
    // Initialize the bullet with our own position and angle
    bulletObj.InitializeBullet(Position, Angle);
}
GetBulletObject uses exactly the same approach that we saw for obtaining ParticleObject instances in the previous chapter: it looks for an existing bullet object whose IsActive value is false. If one is found, it is returned. If no such object is found, it creates a new object. However, to make the game a little more challenging, we allow the player to have only four bullets active at any time. If these bullets are already present, GetBulletObject returns null to prevent any further bullets from being fired. The code for this function is shown in Listing 4-39.
Example 4.39. Finding or creating a bullet object for the player to fire
private BulletObject GetBulletObject()
{
    int objectCount;
    int bulletCount = 0;
    GameObjectBase gameObj;
    BulletObject bulletObj = null;

    // Look for an inactive bullet
    objectCount = _game.GameObjects.Count;
    for (int i = 0; i < objectCount; i++)
    {
        // Get a reference to the object at this position
        gameObj = _game.GameObjects[i];
        // Is this object a bullet?
        if (gameObj is BulletObject)
        {
            // Count the number of bullets found
            bulletCount += 1;
            // Is it inactive?
            if (((BulletObject)gameObj).IsActive == false)
            {
                // Yes, so re-use this bullet
                return (BulletObject)gameObj;
            }
        }
    }

    // Did we find a bullet?
    if (bulletObj == null)
    {
        // No, do we have capacity to add a new bullet?
        if (bulletCount < MaxBullets)
        {
            // Yes, so create a new bullet
            bulletObj = new BulletObject(_game, _game.Textures["Bullet"]);
            _game.GameObjects.Add(bulletObj);
            return bulletObj;
        }
    }
    // No more bullets available
    return null;
}
When the player holds a point on the screen for a brief period, we will use that as the signal that the ship should fire its thrusters and move toward the point the user is touching. This gives a simple and intuitive mechanism for moving the ship around the screen.
We could have used the Hold gesture to initiate this, but it has a drawback in this situation: the time between initiating the hold and the Hold gesture triggering is too long (about a second), which is just too slow for a fast-moving game such as this one.
Instead, Cosmic Rocks uses the raw touch API to implement its own version of the hold gesture. The process is actually very simple:
When a touch point with a state of Pressed is received, it stores the current time in a class-level variable.
Each time the touch point returns a state of Moved, the current time is compared to the stored time. If sufficient time has elapsed between the two, the touch point is considered as being held and the thrust processing is executed.
The code from SpaceshipObject.Update required to perform this (which also includes the bullet-firing check from the previous section) is shown in Listing 4-40. The code waits for 300 milliseconds (0.3 seconds) before it starts thrusting the ship. This figure seems to work well; increasing it makes the thrust control unresponsive, whereas lowering it makes it possible for the user to accidentally thrust when they intended only to shoot.
Example 4.40. Detecting a held touch point using raw touch data
// Is the player tapping the screen?
TouchCollection tc = TouchPanel.GetState();
if (tc.Count == 1)
{
    // Has the first touch point just been touched?
    if (tc[0].State == TouchLocationState.Pressed)
    {
        // Yes, so rotate to this position and fire
        RotateToFacePoint(tc[0].Position);
        // Shoot a bullet in the current direction
        FireBullet();
        // Note the time so we can detect held contact
        _holdTime = DateTime.Now;
    }
    if (tc[0].State == TouchLocationState.Moved)
    {
        // Has sufficient time passed to start thrusting?
        if (DateTime.Now.Subtract(_holdTime).TotalMilliseconds > 300)
        {
            // Yes, so thrust towards this position
            RotateToFacePoint(tc[0].Position);
            Thrust();
        }
    }
}
Once the hold is established, the code calls into a function called Thrust to initiate the movement of the spaceship. This works in just the same way as initializing the bullet: the ship's Angle is used to determine a movement vector. This vector is then added to the ship's existing velocity so that its existing movement is taken into account, too. This means that continuing to thrust in the same direction will cause the ship to accelerate, whereas thrusting in the opposite direction to movement will cause it to slow down and eventually stop.
The Thrust code is shown in Listing 4-41. In the game it also adds some particle objects to represent the thruster exhaust, but this is omitted from the listing here for brevity.
Example 4.41. Thrusting—adding to the ship's velocity
private void Thrust()
{
    Vector2 shipFacing;

    // Calculate the vector towards which the ship is facing
    shipFacing = new Vector2((float)Math.Sin(Angle), -(float)Math.Cos(Angle));
    // Scale down and add to the velocity
    _velocity += shipFacing / 10;
}
The _velocity is applied in the SpaceshipObject.Update code, just as it has been for the bullets and rocks. Note that we don't apply any kind of friction to the spaceship; it is floating in a vacuum, after all, so once the player has started moving it is quite a challenge to stop again!
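Because there is no friction, the only way to shed speed is an opposing thrust. The effect can be checked numerically with a small sketch (Python here, with hypothetical names; the 0.1 impulse corresponds to the shipFacing / 10 scaling in Listing 4-41):

```python
import math

def thrust(velocity, angle, power=0.1):
    """Add an impulse in the facing direction (angle 0 = up, clockwise positive)."""
    return (velocity[0] + math.sin(angle) * power,
            velocity[1] - math.cos(angle) * power)

v = (0.0, 0.0)
v = thrust(v, math.pi / 2)     # thrust while facing right: ship speeds up
v = thrust(v, -math.pi / 2)    # equal thrust while facing left: back to a standstill
```

Repeated thrusts in the same direction keep adding to the velocity, which is why the ship accelerates without limit unless the player counter-thrusts.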
The final input control left to handle is hyperspace. Hyperspace can be used as a last-resort emergency measure when the player cannot escape from the approaching rocks. It makes the player's ship disappear for a few seconds before reappearing at a random location somewhere on the screen. This can be a life saver, but it can also cause the ship to reappear right on top of a rock, so it needs to be used with caution.
We use the Pinch gesture as a trigger for hyperspace. This is easily accessed by the player, but is not something that is likely to be triggered accidentally.
Because using multitouch on the device emulator presents a problem in most development environments, we will also provide a keyboard function to trigger the hyperspace. This is unlikely to be useful on a real device, but makes testing much easier. Pressing H (for hyperspace) will be the trigger used to test in the emulator.
The input processing code relating to hyperspace is shown in Listing 4-42, taken from SpaceshipObject.Update.
Example 4.42. Checking the user input for hyperspace
// Is the player pinching?
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    switch (gesture.GestureType)
    {
        case GestureType.Pinch:
            Hyperspace();
            break;
    }
}

// Did the player press 'H' on the keyboard?
// (Allows us to hyperspace on the emulator with no multitouch)
if (Keyboard.GetState().IsKeyDown(Keys.H))
    Hyperspace();
Hyperspacing is implemented using two class-level variables: _hyperspaceZoom and _hyperspaceZoomAdd. Normally these are both set to zero, but when hyperspace is active the _hyperspaceZoomAdd variable is set to a value that is added to _hyperspaceZoom each update. While _hyperspaceZoom is greater than zero, the ship is in hyperspace.
Once _hyperspaceZoom reaches a certain level, _hyperspaceZoomAdd is negated so that it starts reducing the value of _hyperspaceZoom back toward zero. Once it reaches zero, the hyperspace is finished: both variables are set back to zero, and the spaceship update process returns to its normal state.
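As a quick sanity check on this ramp-up/ramp-down scheme, the sequence can be simulated (an illustrative Python sketch with hypothetical names; the maximum zoom of 150 and step of 5 come from Listings 4-43 and 4-44, while the frame-rate figure in the comment is an assumption):

```python
def hyperspace_updates(max_zoom=150, zoom_add=5):
    """Count updates spent in hyperspace: zoom ramps up to max_zoom, then back to 0."""
    zoom, add, updates = 0, zoom_add, 0
    while True:
        zoom += add
        updates += 1
        if zoom >= max_zoom:
            add = -add          # negate to start zooming back out
        if zoom <= 0:
            return updates

# 30 updates up plus 30 updates down = 60 updates in total;
# at, say, 30 updates per second, that is about 2 seconds in hyperspace.
```

This confirms the duration is symmetric: the fade-out takes exactly as long as the fade-in.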
The Hyperspace function, shown in Listing 4-43, therefore simply sets the _hyperspaceZoomAdd variable to a value of 5.
Example 4.43. Initiating hyperspace
private void Hyperspace()
{
    // Initiate the hyperspace by setting the zoom add
    _hyperspaceZoomAdd = 5;
}
In Update, the code checks to see whether the _hyperspaceZoomAdd value is non-zero. If so, it is applied as has been described. When _hyperspaceZoom reaches its maximum level (defined in the listing as 150), the spaceship is put into a random new position, and _hyperspaceZoomAdd is negated. When _hyperspaceZoom reaches zero, the hyperspace variables are set to zero. The spaceship velocity is also cancelled, meaning that hyperspace is the one thing that can completely stop the ship from moving.
Note that the hyperspace processing code returns from the Update function at the end. This stops all other processing of the spaceship (movement, input controls, and so on) while hyperspacing is active.
Also note that all touch gestures are read and discarded before returning; this is important because without this they will queue up and all be processed together when the hyperspace has finished. If the gesture system reports a number of Pinch gestures in response to the user's initial input, it will result in the ship going into hyperspace over and over again until the whole queue is empty. Discarding the queued gestures ensures that these extra gesture reports will have no effect.
The relevant Update code is shown in Listing 4-44.
Example 4.44. Updating the hyperspace variables
// Are we hyperspacing?
if (_hyperspaceZoomAdd != 0)
{
    // Add to the zoom
    _hyperspaceZoom += _hyperspaceZoomAdd;

    // Have we reached maximum zoom?
    if (_hyperspaceZoom >= 150)
    {
        // Yes, so move to the new location
        // Start to zoom back out
        _hyperspaceZoomAdd = -_hyperspaceZoomAdd;
        // Set a random new position
        PositionX = GameHelper.RandomNext(0,
            _game.Window.ClientBounds.Width - SpriteTexture.Width)
            + SpriteTexture.Width / 2;
        PositionY = GameHelper.RandomNext(0,
            _game.Window.ClientBounds.Height - SpriteTexture.Height)
            + SpriteTexture.Height / 2;
    }

    // Have we finished hyperspacing?
    if (_hyperspaceZoom <= 0)
    {
        // Yes, so cancel the hyperspace variables
        _hyperspaceZoom = 0;
        _hyperspaceZoomAdd = 0;
        // Stop movement
        _velocity = Vector2.Zero;
    }

    // Discard any queued gestures and then return
    while (TouchPanel.IsGestureAvailable)
    {
        TouchPanel.ReadGesture();
    }

    // Don't allow any other updates while hyperspacing
    return;
}
Finally, to indicate visually that hyperspace is in effect, we increase the scale of the ship and fade out its alpha across the duration of the hyperspace processing. This is achieved by overriding the ScaleX, ScaleY, and SpriteColor properties. The scale properties increase the returned value based on the contents of the _hyperspaceZoom variable, whereas SpriteColor reduces the alpha level toward zero. Because the _hyperspaceZoom value is 0 when not hyperspacing, these property overrides have no effect unless hyperspace is active. The overrides are shown in Listing 4-45.
Example 4.45. Overriding the scale and color properties to indicate the progress through hyperspace
// If the player is hyperspacing, zoom the spaceship to indicate this
public override float ScaleX
{
    get { return base.ScaleX + (_hyperspaceZoom * 0.02f); }
}
// If the player is hyperspacing, zoom the spaceship to indicate this
public override float ScaleY
{
    get { return base.ScaleY + (_hyperspaceZoom * 0.02f); }
}
// If the player is hyperspacing, fade out the spaceship to indicate this
public override Color SpriteColor
{
    get
    {
        Color ret = base.SpriteColor;
        ret.A = (byte)MathHelper.Clamp(255 - _hyperspaceZoom * 2.5f, 0, 255);
        return ret;
    }
}
In this chapter, we have looked at all the different mechanisms that you have at your disposal when dealing with user interaction with your game.
The limited standard control mechanisms available to Windows Phone 7 devices mean that some thought might be needed to set up the input approach for your game. With no buttons or directional pad, some types of game can be a challenge to implement, whereas other game types will hugely benefit from the presence of the touch screen. The accelerometer is also useful as an input and can be used either in isolation or alongside the touch screen.
Before you start working on a game, you should have a clear idea of how you expect the user to interact with it. These ideas might change during development, of course, but considering them up front helps you avoid unexpected input problems after you have invested time in your game.
Cosmic Rocks is now looking and playing like a real game, and hopefully you are already finding it fun to play. It is, of course, still missing some important elements, such as moving to new levels, a score, and player lives. We could implement them with a little effort, but there is one other thing that it is missing that we haven't explored at all yet: sound. It will be the subject of the next chapter.