Rather than asking the input system about the state of a certain peripheral in every single frame, we can let the input system know that we are interested in certain user-generated events and be notified whenever they occur. This is known as event listening; in the design patterns world, it is better known as the Observer pattern.
It is important to know the most common design patterns and when to apply them. To learn more about this topic, we recommend that you take a look at the book Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma et al., Addison-Wesley Professional, or the freely available Game Programming Patterns, Robert Nystrom (http://gameprogrammingpatterns.com).
In this recipe, we will tell Libgdx to notify our sample application of the input events and show them all as a list on the screen. Examples of input events are mouse movements, touch dragging, scrolling, touching/clicking, key presses and releases, and character typing.
Once again, make sure that the sample projects are available from your Eclipse workspace.
This time around, please direct your attention toward the InputListeningSample.java file, where the code for this recipe lives. We have a MESSAGE_MAX constant that defines the maximum number of input events we can show at once on the screen:
private static final int MESSAGE_MAX = 45;
The list of events is nothing more than an array of string objects, holding each message, as shown in the following code. As we did in the previous recipe, we need an orthographic camera to render these messages, a rectangle to represent our viewport, a sprite batch, and a bitmap font object:
private Array<String> messages;
The create() method details should be fairly easy to understand at this point. However, it is worth mentioning that, in order to tell Libgdx that the current object is interested in receiving input events, we use Gdx.input.setInputProcessor(), which takes an InputProcessor interface implementation.
Have a look at the following code:
public void create() { … Gdx.input.setInputProcessor(this); }
Next up is the render() method, where we iterate over all the messages and use the bitmap font to draw each one on the screen. We do so at a fixed location on the x axis, while we decrease the y value at each step so as to stack messages on top of each other.
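The stacking logic described above can be sketched in plain Java. This is only an illustration of the layout math, not the actual sample code; the names LINE_HEIGHT, START_X, and START_Y are assumptions, and the real sample draws with a BitmapFont and SpriteBatch:

```java
import java.util.ArrayList;
import java.util.List;

public class MessageLayout {
    // Hypothetical layout constants; the real sample picks its own values.
    static final float LINE_HEIGHT = 20.0f;
    static final float START_X = 10.0f;
    static final float START_Y = 460.0f;

    // Returns the y coordinate for each message: x stays fixed at START_X,
    // while y decreases per message so they stack on top of each other.
    public static List<Float> yPositions(int messageCount) {
        List<Float> ys = new ArrayList<>();
        float y = START_Y;
        for (int i = 0; i < messageCount; i++) {
            ys.add(y);
            y -= LINE_HEIGHT;   // next message is drawn one line lower
        }
        return ys;
    }

    public static void main(String[] args) {
        System.out.println(yPositions(3));   // [460.0, 440.0, 420.0]
    }
}
```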
Here is the complete InputProcessor interface, which InputListeningSample implements. For every method, we make a call to the addMessage() method, shown later on, passing in some string with the event information:
public interface InputProcessor {
    public boolean keyDown (int keycode);
    public boolean keyUp (int keycode);
    public boolean keyTyped (char character);
    public boolean touchDown (int screenX, int screenY, int pointer, int button);
    public boolean touchUp (int screenX, int screenY, int pointer, int button);
    public boolean touchDragged (int screenX, int screenY, int pointer);
    public boolean mouseMoved (int screenX, int screenY);
    public boolean scrolled (int amount);
}
The methods' names are quite self-explanatory. Nevertheless, please find a detailed list as follows:
- keyDown(): Whenever the user presses a hardware or on-screen key, this method receives a value from the Keys static class.
- keyUp(): Whenever the user releases a previously pressed key, this method also receives a value from the Keys static class.
- keyTyped(): This is fired under the same circumstances as keyUp(), only it takes the character that the user typed.
- touchDown(): This is called whenever the user touches the screen or clicks on it with the mouse/touchpad. It gets the screen coordinates, the pointer, and the button code from the Buttons static class.
- touchUp(): This is identical to touchDown(), but it fires whenever the user releases the finger or the mouse button.
- touchDragged(): This fires every time the mouse or finger position changes while clicking on or touching the screen. It receives the new screen coordinates and the pointer that caused the event.
- mouseMoved(): Every time the user moves the mouse, this method takes the new cursor coordinates in screen space. Note that this fires regardless of whether or not the user is clicking.
- scrolled(): This is fired whenever the user moves the mouse wheel.
Whenever addMessage() is called, we call add() on the messages array with the new string and a timestamp. We just need to make sure that the array's size does not exceed MESSAGE_MAX; if it does, we remove the first element, which is the oldest event in the collection. This is shown in the following code:
private void addMessage(String message) {
    messages.add(message + " time: " + System.currentTimeMillis());
    if (messages.size > MESSAGE_MAX) {
        messages.removeIndex(0);
    }
}
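The effect is a bounded, first-in-first-out event log. The same idea can be tried out in standard Java, using a plain List as a stand-in for Libgdx's Array (the small MESSAGE_MAX here is only for the demo):

```java
import java.util.ArrayList;
import java.util.List;

public class MessageLog {
    private static final int MESSAGE_MAX = 3;   // small cap for the demo
    private final List<String> messages = new ArrayList<>();

    public void addMessage(String message) {
        messages.add(message);
        if (messages.size() > MESSAGE_MAX) {
            messages.remove(0);   // drop the oldest event
        }
    }

    public List<String> getMessages() { return messages; }

    public static void main(String[] args) {
        MessageLog log = new MessageLog();
        for (String s : new String[] {"keyDown", "keyUp", "touchDown", "touchUp"}) {
            log.addMessage(s);
        }
        // "keyDown" was the oldest event and has been dropped
        System.out.println(log.getMessages());   // [keyUp, touchDown, touchUp]
    }
}
```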
It would be really good to experiment with events and see for yourself under which conditions they fire. This example will give you better insight into how input listening works in Libgdx.
Which approach is better? Polling or event listening? An excellent question, but unfortunately, there is no immediate answer. It really depends on the situation that you are facing. If you simply want to occasionally make a decision depending on the state of some input device, polling might be the best approach. On the other hand, if what you want is to react to individual actions, event listening could be the way to go as it tends to be more efficient. If you need to check the state of a key when the user clicks the mouse, then you can use a combination of the two.
With input polling, you might miss inputs. If the user presses a key and releases it during the same frame, those events will be lost.
At the end of the day, what matters is to write efficient code while maintaining readability and ease of modification.
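The missed-input problem can be demonstrated with a toy model. Everything here is illustrative (the class and method names are invented); real Libgdx input arrives from the platform backends, but the contrast is the same: a press and release that both happen between two polls is invisible to the poller, while a listener is notified immediately:

```java
import java.util.ArrayList;
import java.util.List;

public class PollingVsListening {
    interface KeyListener { void keyDown(int keycode); }

    private boolean keyPressed = false;            // the state a poller would read
    private final List<KeyListener> listeners = new ArrayList<>();

    void addListener(KeyListener l) { listeners.add(l); }

    // Simulates a press and release occurring within a single frame.
    void pressAndReleaseWithinFrame(int keycode) {
        keyPressed = true;
        for (KeyListener l : listeners) l.keyDown(keycode);   // listeners notified
        keyPressed = false;                                    // released before the next poll
    }

    boolean poll() { return keyPressed; }

    public static void main(String[] args) {
        PollingVsListening input = new PollingVsListening();
        List<Integer> seen = new ArrayList<>();
        input.addListener(seen::add);

        input.pressAndReleaseWithinFrame(62);   // 62: an arbitrary keycode
        System.out.println(input.poll());       // false -> the poller missed it
        System.out.println(seen);               // [62]  -> the listener saw it
    }
}
```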
Event listeners are typically implemented in a very similar way across engines and frameworks. Libgdx's Input backend implementations handle the specifics of when to call the InputProcessor event handlers while achieving uniform behavior across platforms.
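The common Observer wiring behind this can be sketched in plain Java. The names here (InputEventBus, TouchListener) are invented for illustration; Libgdx's real backends feed platform events into the single registered InputProcessor instead, and its boolean return plays the same "event handled" role:

```java
import java.util.ArrayList;
import java.util.List;

public class InputEventBus {
    interface TouchListener { boolean touchDown(int screenX, int screenY); }

    private final List<TouchListener> listeners = new ArrayList<>();

    public void register(TouchListener listener) { listeners.add(listener); }

    // Called by the (hypothetical) platform backend when a touch arrives.
    public void dispatchTouchDown(int screenX, int screenY) {
        for (TouchListener l : listeners) {
            if (l.touchDown(screenX, screenY)) {
                break;   // a listener returning true marks the event as handled
            }
        }
    }

    public static void main(String[] args) {
        InputEventBus bus = new InputEventBus();
        bus.register((x, y) -> { System.out.println("touch at " + x + "," + y); return true; });
        bus.dispatchTouchDown(120, 45);   // prints: touch at 120,45
    }
}
```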
Android devices have the Back and Menu keys with default behaviors. The Back key goes up one slot in the Activity stack, and the Menu key directly takes us to the launcher. Our Libgdx games may have different screens, but they are not implemented as different Activity objects, which means that both the Back and Menu keys will take us out of the application. Surely, we do not want this!
Using Libgdx, we can prevent this behavior and implement our own response. As an example, the Back key could take us from the game screen to the level selection screen, whereas the Menu key could show an exit confirmation dialog.
To intercept the Back and Menu keys, we need to use the following methods (the Android Home key behavior cannot be overridden for security reasons):
Gdx.input.setCatchBackKey(boolean catchBack);
Gdx.input.setCatchMenuKey(boolean catchMenu);
Later on, we can react to these keys using their corresponding codes inside the keyDown() event handler, as shown in the following snippet:
public boolean keyDown (int keycode) {
    if (keycode == Keys.MENU) {
        // for example, show an exit confirmation dialog
    } else if (keycode == Keys.BACK) {
        // for example, go back to the level selection screen
    }
    return true;
}
The following screenshot shows a game running on an Android phone with the Back, Menu, and Home keys visible: