Chapter 7. Input: Let's Get Moving!

Up to this point we've done some basic moving around a scene, but nothing like what we'll need if we're going to build commercial games for iOS devices. One thing that is often overlooked when writing games for iOS is that while iPads and iPhones can perform many of the same functions as their desktop cousins, there are very specific techniques you need to be aware of when moving from the keyboard-and-mouse world to that of touchscreens and accelerometers. In this chapter we're going to spend some time getting into the nitty-gritty of those details.

In this chapter we shall:

  • Learn about the iOS touch screen interface
  • Learn about accelerometers and how they work
  • Create an interface on the touch screen for moving through an environment
  • Learn how to process gestures

This may not sound like a lot, but in iOS development there are many things you can get wrong that will lead to difficulties when working with Unity. Rather than assume that you'll get it all right, we're going to talk through it step by step to make sure that you can spend your time building games and not trying to decipher mysterious error messages.

So let's get on with it…

Input Capabilities

The iPhone brings together a wide variety of technologies that can detect input from the user. The two most important, from the perspective of a game developer, are the touch screen and the accelerometer. Nearly every game available to date has been built on these two input mechanisms, so we will look in depth at how they work and how we can use their capabilities to determine the intent of the user within our game.

The technology of touch

A touch screen is a display device that can detect the presence and location of one or more touches within the display area. While early touch devices relied on passive instruments, such as a stylus, to interact with the touch surface, modern touch devices detect direct physical contact with the device.

While it may not seem the case, a variety of technologies are used to drive touch interaction with devices. Which technology is chosen depends upon a multitude of factors such as cost, durability, scalability, and versatility. It is easy to suggest that one touch technology is superior to the others, but a technology that works well for one application may be entirely inappropriate for another. For example, the technology used in the iPhone requires a person to make physical contact with the surface for a touch to register. If you're building a kiosk, however, you may want users to be able to interact with the device with gloved hands. This seemingly innocent choice has radical implications for the technology chosen, as well as for the design of the device itself.

Several types of touch surface are common in devices today: resistive, capacitive, and infrared. While the mechanics of their implementations vary, they all follow the same basic recipe: when you place your finger or stylus on the screen, some change of state occurs on the surface, and that change is sent to a processor, which determines where the touch took place. It is how that change in state is measured that separates the technologies from one another.

While all of today's iOS devices use a single surface type, capacitive, it is foreseeable that Apple may change technologies at some point in the future as the platform expands to cover new types of devices. It is also important to understand the other types of surfaces you may encounter as you port your content to other platforms.

Resistive technology

A resistive screen is composed of layers of conductive and resistive material. When pressure from a finger or stylus is placed on the screen, the resistive and conductive layers come into contact, causing a change in the electrical field. Measuring the resistance on the circuits connected to the conductive material then reveals the location of the touch.

Given that any pressure can cause the contact to occur, a resistive screen works well when you want a passive implement, such as a stylus, as a possible touch instrument. In addition, you can keep your gloves on with this technology, as a gloved hand will work just as well as a bare one. As resistive technology has been around a lot longer, it tends to be cheaper to produce and is the technology most commonly found at the lower end of the cost spectrum.

Capacitive technology

A capacitive screen uses a layer of capacitive material that holds an electrical charge. When touched, this material registers a difference in the amount of charge at the point of contact. This information is then passed on to the processor, which can determine precisely where the touch took place. The iOS devices simplify this process by arranging the capacitors in a grid such that every point on the screen generates its own signal when touched, which has the added benefit of producing very high-resolution touch data.

As the capacitive approach relies on capacitive material in order to function, it requires that whatever performs the touch conducts electricity. Since the human body conducts electricity this works fine for fingers, but it rules out an ordinary stylus; more precisely, it requires that a special capacitive stylus be used.

Infrared technology

An infrared screen uses an array of infrared LED light beams projected beneath the protective glass or, more commonly, acrylic surface. A camera peers up at this grid of beams and looks for any interruption of the signal, similar to the grid approach used by iOS devices, just with an infrared camera and beams of light. This approach is refined and deployed in the Microsoft Surface and has some unexpected benefits. Since a camera is used to determine the touch location, that camera can also look at the object at that location; if that object is a marker, the system can extract information from the marker as well, something the Microsoft Surface uses to good effect.

The obvious downside to the infrared approach is that it requires a fair amount of space to work its magic. Due to the nature of the optics, the further the camera sits from the surface, the more resolution it can gain on that surface. This makes the technology impractical for the typical iPhone application.

Accelerometer

An accelerometer is a device that measures the acceleration of motion on a structure. In iOS devices the accelerometer is a 3-axis system, meaning it can measure acceleration along each axis of the device (x, y, z). At rest, an accelerometer measures only the force of gravity (1g). As the device moves, it measures the accelerations along each axis and from them can determine the device's new orientation. Without getting into the associated math, the only thing you really need to know is that no matter what orientation you put the device in, the device is aware of that orientation.
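To see this in practice, here is a minimal sketch of reading the accelerometer from a Unity script. Input.acceleration is Unity's built-in accessor for the device's acceleration vector; the script name and log format are our own.

using UnityEngine;

// Minimal sketch: log the device's acceleration each frame. At rest the
// vector's magnitude is roughly 1 (gravity), so the direction of
// Input.acceleration alone tells us how the device is oriented.
public class TiltLogger : MonoBehaviour
{
    void Update()
    {
        Vector3 accel = Input.acceleration;

        // Flat on a table the vector is roughly (0, 0, -1); held upright
        // in portrait it is roughly (0, -1, 0).
        Debug.Log(string.Format("x: {0:F2} y: {1:F2} z: {2:F2} magnitude: {3:F2}",
            accel.x, accel.y, accel.z, accel.magnitude));
    }
}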

Gyroscope

A gyroscope is a device for measuring the orientation of a device. Unlike with an accelerometer, the orientation can be derived without the device actually moving. Currently available on only a subset of the iOS devices, the gyroscope enables much more refined detection of movement. The 3-axis gyro in these devices works in tandem with the built-in accelerometer to produce complete 6-axis sensitivity for motion gestures. At the time of this writing there is no support within Unity for the gyroscope, so we will not focus on its use within the context of our game.

Touch screen

Our game design calls for a set of joysticks at the bottom of the screen that we can use to move around the world and manipulate the camera. The control scheme mirrors what a player familiar with an Xbox-style controller would expect.

We also need to perform actions with the right joystick. As with an Xbox controller, we want to be able to invoke an action by tapping down on the right joystick.
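As a rough sketch of how this control scheme maps onto Unity's touch API, the script below treats touches on the left half of the screen as the movement joystick and a new touch on the right half as the action tap. The class name, screen split, and joystick center are illustrative assumptions, not our final implementation.

using UnityEngine;

// Minimal sketch of reading raw touch data for a virtual joystick.
public class SimpleTouchControls : MonoBehaviour
{
    public Vector2 MoveDirection { get; private set; }

    void Update()
    {
        MoveDirection = Vector2.zero;

        foreach (Touch touch in Input.touches)
        {
            if (touch.position.x < Screen.width / 2)
            {
                // Left side: the offset from an assumed joystick center in
                // the lower-left quarter becomes a normalized move direction.
                Vector2 center = new Vector2(Screen.width * 0.25f, Screen.height * 0.25f);
                MoveDirection = (touch.position - center).normalized;
            }
            else if (touch.phase == TouchPhase.Began)
            {
                // Right side: a new touch acts like clicking in the right stick.
                Debug.Log("Action!");
            }
        }
    }
}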

The next feature we want to plan for is the ability to perform gestures on the surface, so that we can avoid filling our interface with extraneous buttons. There are several gestures that we want to support in our gameplay, listed in the following table; a sketch of detecting them follows the table.

Gesture             Meaning
Swipe Up            Throw Grenade
Swipe Left/Right    Dodge Left/Right
Swipe Down          Guard/Take Cover
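As a sketch of how these swipes can be classified, the script below compares where a touch began with where it ended and maps the dominant direction onto the actions in the table. The minimum swipe distance is an assumption to be tuned by testing, and the script tracks a single touch for simplicity.

using UnityEngine;

// Minimal sketch of swipe classification from raw touch phases.
public class SwipeDetector : MonoBehaviour
{
    const float MinSwipeDistance = 100f; // in pixels, illustrative value
    Vector2 startPos;

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            if (touch.phase == TouchPhase.Began)
            {
                startPos = touch.position;
            }
            else if (touch.phase == TouchPhase.Ended)
            {
                Vector2 delta = touch.position - startPos;
                if (delta.magnitude < MinSwipeDistance)
                    continue; // too short to count as a swipe

                // The larger axis of movement decides the gesture.
                if (Mathf.Abs(delta.y) > Mathf.Abs(delta.x))
                    Debug.Log(delta.y > 0 ? "Throw Grenade" : "Guard/Take Cover");
                else
                    Debug.Log(delta.x > 0 ? "Dodge Right" : "Dodge Left");
            }
        }
    }
}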

Accelerometer/Gyroscope

Our game design doesn't call for the use of the accelerometer, but for the sake of instruction we will use it as an additional mechanism for manipulating the camera, and we will provide a shake command to heal the character if they are ever knocked down. The motions are listed in the following table; a sketch of shake detection follows it.

Motion              Meaning
Shake               Heal
Turn Left/Right     Rotate Camera
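A shake can be approximated by watching for the acceleration's magnitude to jump well past the 1g resting value, as the sketch below shows. The threshold is an assumption to be tuned through play-testing.

using UnityEngine;

// Minimal sketch of a shake check using the accelerometer alone.
public class ShakeToHeal : MonoBehaviour
{
    const float ShakeThreshold = 2.0f; // in g, illustrative value

    void Update()
    {
        // sqrMagnitude avoids a square root; compare against the squared threshold.
        if (Input.acceleration.sqrMagnitude > ShakeThreshold * ShakeThreshold)
        {
            Debug.Log("Shake detected - heal the character");
        }
    }
}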
