Chapter 12. Drawing 2D and 3D Graphics

The Android menagerie of widgets and the tools for assembling them are convenient, powerful, and cover a broad variety of needs. What happens, though, when none of the existing widgets offer what you need? Maybe your application needs to represent playing cards, phases of the moon, or the power diverted to the main thrusters of a rocket ship. In that case, you’ll have to know how to roll your own.

This chapter is an overview of graphics and animation on Android. It’s directed at programmers with some background in graphics, and goes into quite a bit of depth about ways to twist and turn the display. You will definitely need to supplement this chapter with Android documentation, particularly because the more advanced interfaces are still undergoing changes. But the techniques here will help you dazzle your users.

Rolling Your Own Widgets

As mentioned earlier, “widget” is just a convenient term for a subclass of android.view.View, typically a leaf node in the view tree. Many views are just containers for other views and are used for layout; we don’t consider them widgets, because they don’t directly interact with the user, handle events, etc. So the term “widget,” although informal, is useful for discussing the workhorse parts of the user interface that have the information and the behavior users care about.

You can accomplish a lot without creating a new widget. Chapter 11 constructed applications consisting entirely of existing widgets or simple subclasses of existing widgets. The code in that chapter demonstrated building trees of views, laying them out in code or in layout resources in XML files.

Similarly, the MicroJobs application has a view that contains a list of names corresponding to locations on a map. As additional locations are added to the map, new name-displaying widgets are added dynamically to the list. Even this dynamically changing layout is just a use of pre-existing widgets; it does not create new ones. The techniques in MicroJobs are, figuratively, adding or removing boxes from a tree like the one illustrated in Figure 10-3 in Chapter 10.

In contrast, this chapter shows you how to roll your own widget, which involves looking under the View hood.

The simplest customizations start with TextView, Button, DatePicker, or one of the many widgets presented in the previous chapter. For more extensive customization, you will implement your own widget as a direct subclass of View.

A very complex widget, perhaps used as an interface tool implemented in several places (even by multiple applications), might even be an entire package of classes, only one of which is a descendant of View.

This chapter is about graphics, and therefore about the View part of the Model-View-Controller pattern. Widgets also contain Controller code, which is good design because it keeps together all of the code relevant to a behavior and its representation on the screen. This part of this chapter discusses only the implementation of the View; the implementation of the Controller was discussed in Chapter 10.

Concentrating on graphics, then, we can break the tasks of this chapter into two essential parts: finding space on the screen and drawing in that space. The first task is known as layout. A leaf widget can assert its space needs by defining an onMeasure method that the Android framework will call at the right time. The second task, actually rendering the widget, is handled by the widget’s onDraw method.

Layout

Most of the heavy lifting in the Android framework layout mechanism is implemented by container views. A container view is one that contains other views. It is an internal node in the view tree and a subclass of ViewGroup (which, in turn, subclasses View). The framework toolkit provides a variety of sophisticated container views that provide powerful and adaptable strategies for arranging a screen. AbsoluteLayout (see AbsoluteLayout), LinearLayout (see LinearLayout), and RelativeLayout (see RelativeLayout), to name some common ones, are container views that are both relatively easy to use and fairly hard to reimplement correctly. Fortunately, since they are already available, you are unlikely to have to implement the layout algorithms discussed here yourself. Understanding the big picture, though—how the framework manages the layout process—will help you build correct, robust widgets.

Example 12-1 shows a widget that is about as simple as it can be, while still working. If added to some Activity’s view tree, this widget will fill in the space allocated to it with the color cyan. Not very interesting, but before we move on to create anything more complex, let’s look carefully at how this example fulfills the two basic tasks of layout and drawing. We’ll start with the layout process; drawing will be described later in the section Canvas Drawing.

Example 12-1. A trivial widget
public class TrivialWidget extends View {

    public TrivialWidget(Context context) {
        super(context);
        setMinimumWidth(100);
        setMinimumHeight(20);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        setMeasuredDimension(
            getSuggestedMinimumWidth(),
            getSuggestedMinimumHeight());
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.CYAN);
    }
}

Dynamic layout is necessary because the space requirements for widgets change dynamically. Suppose, for instance, that a widget in a GPS-enabled application displays the name of the city in which you are currently driving. As you go from “Ely” to “Post Mills,” the widget receives notification of the change in location. When it prepares to redraw the city name, though, it notices that it doesn’t have enough room for the whole name of the new town. It needs to request that the screen be redrawn in a way that gives it more space, if that is possible.

Layout can be a surprisingly complex task and very difficult to get right. It is probably not very hard to make a particular leaf widget look right on a single device. On the other hand, it can be very tricky to get a widget that must arrange children to look right on multiple devices, or even on one device when the dimensions of the screen change.

Layout is initiated when the requestLayout method is invoked on some view in the view tree. Typically, a widget calls requestLayout on itself when it needs more space. The method could be invoked, though, from any place in an application, to indicate that some view in the current screen no longer has enough room to draw itself.
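For example, the hypothetical city-name widget described above might react to a location update along these lines. This is only a sketch; the class, field, and method names are invented for illustration:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;


public class CityNameWidget extends View {
    private final Paint paint = new Paint();
    private String cityName = "";

    public CityNameWidget(Context context) {
        super(context);
        paint.setTextSize(16);
    }

    /** Called by a location listener when the current city changes. */
    public void setCityName(String name) {
        cityName = name;
        requestLayout();   // the new name may need more (or less) space
        invalidate();      // and the widget must be repainted either way
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        // Ask for just enough room to show the current name.
        setMeasuredDimension(
            (int) paint.measureText(cityName) + getPaddingLeft() + getPaddingRight(),
            (int) (paint.descent() - paint.ascent()) + getPaddingTop() + getPaddingBottom());
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawText(cityName, getPaddingLeft(), getPaddingTop() - paint.ascent(), paint);
    }
}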

The requestLayout method causes the Android UI framework to enqueue an event on the UI event queue. When the event is processed in its turn, the framework gives every container view an opportunity to ask each of its child widgets how much space it would like for drawing. The process is separated into two phases: measuring the child views and then arranging them in their new positions. All views must implement the first phase, but the second is necessary only for container views that must manage the layout of their children.

Measurement

The goal of the measurement phase is to provide each view with an opportunity to dynamically request the space it would ideally like for drawing. The UI framework starts the process by invoking the measure method of the view at the root of the view tree. Starting there, each container view asks each of its children how much space it would prefer. The call is propagated to all descendants, depth first, so that every child gets a chance to compute its size before its parent. The parent computes its own size based on the sizes of its children and reports that to its parent, and so on, up the tree.

In Assembling a Graphical Interface, for instance, the topmost LinearLayout asks each of the nested LinearLayout widgets for its preferred dimensions. They in turn ask the Buttons or EditText views they contain for theirs. Each child reports its desired size to its parent. The parents then add up the sizes of the children, along with any padding they insert themselves, and report the total to the topmost LinearLayout.

Because the framework must guarantee certain behaviors for all Views during this process, the measure method is final and cannot be overridden. Instead, measure calls onMeasure, which widgets may override to claim their space. In other words, widgets cannot override measure, but they can override onMeasure.

The arguments to the onMeasure method describe the space the parent is willing to make available: a width specification and a height specification, measured in pixels. The framework assumes that no view will ever be smaller than 0 or bigger than 2^30 pixels in size and, therefore, it uses the high-order bits of each passed int parameter to encode the measurement specification mode. It is as if onMeasure were actually called with four arguments: the width specification mode, the width, the height specification mode, and the height. Do not be tempted to do your own bit-shifting to separate the pairs of arguments! Instead, use the static methods MeasureSpec.getMode and MeasureSpec.getSize.
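In practice, the first few lines of a typical onMeasure implementation simply unpack the two encoded arguments, something like this:

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    // Decode the parent's constraints; never shift the bits yourself.
    int widthMode = MeasureSpec.getMode(widthMeasureSpec);
    int widthSize = MeasureSpec.getSize(widthMeasureSpec);
    int heightMode = MeasureSpec.getMode(heightMeasureSpec);
    int heightSize = MeasureSpec.getSize(heightMeasureSpec);
    // ... use the modes and sizes to decide on measured dimensions ...
}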

The specification modes describe how the container view wants the child to interpret the associated size. There are three of them:

MeasureSpec.EXACTLY

The calling container view has already determined the exact size of the child view.

MeasureSpec.AT_MOST

The calling container view has set a maximum size for this dimension, but the child is free to request less.

MeasureSpec.UNSPECIFIED

The calling container view has not imposed any limits on the child, and so the child may request anything it chooses.

A widget is always responsible for telling its parent in the view tree how much space it needs. It does this by calling setMeasuredDimension to set the properties that then become available to the parent, through the methods getMeasuredHeight and getMeasuredWidth. If your implementation overrides onMeasure but does not call setMeasuredDimension, the measure method will throw IllegalStateException instead of completing normally.

The default implementation of onMeasure, inherited from View, calls setMeasuredDimension with one of two values, in each direction. If the parent specifies MeasureSpec.UNSPECIFIED, it uses the default size of the view: the value supplied by either getSuggestedMinimumWidth or getSuggestedMinimumHeight. If the parent specifies either of the other two modes, the default implementation uses the size that was offered by the parent. This is a very reasonable strategy and allows a typical widget implementation to handle the measurement phase completely, simply by establishing the values returned by getSuggestedMinimumWidth and getSuggestedMinimumHeight (either by overriding those methods or by calling setMinimumWidth and setMinimumHeight, as Example 12-1 does).
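If the default behavior is not quite what you want (suppose your widget would like its preferred size but should still respect an AT_MOST limit), a spec-aware onMeasure might look like the following sketch. The chooseSize helper is defined here for illustration; it is not a toolkit method:

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    setMeasuredDimension(
        chooseSize(widthMeasureSpec, getSuggestedMinimumWidth()),
        chooseSize(heightMeasureSpec, getSuggestedMinimumHeight()));
}

// Pick the preferred size unless the parent constrains it.
private int chooseSize(int measureSpec, int preferred) {
    int size = MeasureSpec.getSize(measureSpec);
    switch (MeasureSpec.getMode(measureSpec)) {
        case MeasureSpec.EXACTLY:
            return size;                      // no choice: the parent has decided
        case MeasureSpec.AT_MOST:
            return Math.min(preferred, size); // take what we want, up to the limit
        case MeasureSpec.UNSPECIFIED:
        default:
            return preferred;                 // no constraint at all
    }
}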

Your widget may not actually get the space it requests. Consider a view that is 100 pixels wide and has three children. It is probably obvious how the parent should arrange its children if the sum of the pixel widths requested by the children is 100 or less. If, however, each child requests 50 pixels, the parent container view is not going to be able to satisfy them all.

A container view has complete control of how it arranges its children. In the circumstance just described, it might decide to be “fair” and allocate 33 pixels to each child. Just as easily, it might decide to allocate 50 pixels to the leftmost child and 25 to each of the other two. In fact, it might decide to give one of the children the entire 100 pixels and nothing at all to the others. Whatever its method, though, in the end the parent determines a size and location for the bounding rectangle for each child.

Another example of a container view’s control of the space allocated to a widget comes from the example widget shown previously in Example 12-1. It always requests the amount of space it prefers, regardless of what it is offered (unlike the default implementation). This strategy is handy to remember for widgets that will be added to the toolkit containers, notably LinearLayout, that implement gravity. Gravity is a property that some views use to specify the alignment of their subelements. The first time you use one of these containers, you may be surprised to find that, by default, only the first of your custom widgets gets drawn! You can fix this either by using the setGravity method to change the property to Gravity.FILL or by making your widgets insistent about the amount of space they request.

It is also important to note that a container view may call a child’s measure method several times during a single measurement phase. As part of its implementation of onMeasure, a clever container view, attempting to lay out a horizontal row of widgets, might call each child widget’s measure method with mode MeasureSpec.UNSPECIFIED and a width of 0 to find out what size the widget would prefer. Once it has collected the preferred widths for each of its children, it could compare the sum to the actual width available (which was specified in its parent’s call to its measure method). Now it might call each child widget’s measure method again, this time with the mode MeasureSpec.AT_MOST and a width that is an appropriate proportion of the space actually available. Because measure may be called multiple times, an implementation of onMeasure must be idempotent and must not change the application state.
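A much-simplified sketch of that two-pass strategy, written inside a hypothetical ViewGroup subclass that lays its children out in one horizontal row (padding, margins, and the arrangement phase are all omitted), might look like this:

@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
    int available = MeasureSpec.getSize(widthMeasureSpec);
    int height = MeasureSpec.getSize(heightMeasureSpec);

    // First pass: ask each child what it would prefer, unconstrained.
    int totalPreferred = 0;
    for (int i = 0; i < getChildCount(); i++) {
        getChildAt(i).measure(
            MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
            MeasureSpec.makeMeasureSpec(height, MeasureSpec.AT_MOST));
        totalPreferred += getChildAt(i).getMeasuredWidth();
    }

    // Second pass: offer each child a proportional share of the real space.
    for (int i = 0; i < getChildCount(); i++) {
        View child = getChildAt(i);
        int share = (totalPreferred == 0)
            ? 0
            : (child.getMeasuredWidth() * available) / totalPreferred;
        child.measure(
            MeasureSpec.makeMeasureSpec(share, MeasureSpec.AT_MOST),
            MeasureSpec.makeMeasureSpec(height, MeasureSpec.AT_MOST));
    }

    setMeasuredDimension(available, height);
}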

A container view’s implementation of onMeasure is likely to be fairly complex. ViewGroup, the superclass of all container views, does not supply a default implementation. Each of the UI framework container views has its own. If you contemplate implementing a container view, you might consider basing it on one of them. If, instead, you implement measurement from scratch, you are still likely to need to call measure for each child and should consider using the ViewGroup helper methods: measureChild, measureChildren, and measureChildWithMargins. At the conclusion of the measurement phase, a container view, like any other widget, must report the space it needs by calling setMeasuredDimension.

Arrangement

Once all the container views in the view tree have had a chance to negotiate the sizes of each of their children, the framework begins the second phase of layout, which consists of arranging the children. Again, unless you implement your own container view, you probably will never have to implement your own arrangement code. This section describes the underlying process so that you can better understand how it might affect your widgets. The default method, implemented in View, will work for typical leaf widgets, as demonstrated previously by Example 12-1.

Because a view’s onMeasure method might be called several times, the framework must use a different method to signal that the measurement phase is complete and that container views must fix the final locations of their children. Like the measurement phase, the arrangement phase is implemented with two methods. The framework invokes a final method, layout, at the top of the view tree. The layout method performs processing common to all views and then delegates to onLayout, which custom widgets override to implement their own behaviors. A custom implementation of onLayout must at least calculate the bounding rectangle that it will supply to each child when it is drawn and, in turn, invoke the layout method for each child (because it might also be a parent to other widgets).
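For a container that arranges its children in a single horizontal row, a bare-bones onLayout might look something like this sketch (it simply honors each child’s measured size and lets each child lay out its own children):

@Override
protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
    int x = 0;
    for (int i = 0; i < getChildCount(); i++) {
        View child = getChildAt(i);
        int w = child.getMeasuredWidth();
        int h = child.getMeasuredHeight();
        // Fix this child's bounding rectangle, then delegate to it.
        child.layout(x, 0, x + w, h);
        x += w;
    }
}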

It is worth reiterating that a widget is not guaranteed to receive the space it requests. It must be prepared to draw itself in whatever space is actually allocated to it. If it attempts to draw outside the space allocated to it by its parent, the drawing will be clipped by the clip rectangle. To exert fine control—to fill exactly the space allocated to it, for instance—a widget must either implement onLayout and record the dimensions of the allocated space or inspect the clip rectangle of the Canvas that is the parameter to onDraw.

Canvas Drawing

Now that we’ve explored how widgets allocate the space on the screen in which they draw themselves, we can turn to coding some widgets that actually do some drawing.

The Android framework handles drawing in a way that should be familiar, now that you’ve read about measurement and arrangement. When some part of the application determines that the current screen drawing is stale because some state has changed, it calls the View method invalidate. This call causes a redraw event to be added to the event queue.

Eventually, when that event is processed, the framework calls the draw method at the top of the view tree. This time the call is propagated preorder, with each view drawing itself before it calls its children. This means that leaf views are drawn after their parents, which are, in turn, drawn after their parents. Views that are lower in the tree appear to be drawn on top of those nearer the root of the tree.

The draw method calls onDraw, which a subclass overrides to implement its custom rendering. When your widget’s onDraw method is called, it must render itself according to the current application state and return. It turns out, by the way, that neither View.draw nor ViewGroup.dispatchDraw (responsible for the traversal of the view tree) is final! Override them at your peril!

In order to prevent extra painting, the framework maintains some state information about the view, called the clip rectangle. A key concept in the UI framework, the clip rectangle is part of the state passed in calls to a component’s graphical rendering methods. It has a location and size that can be retrieved and adjusted through methods on the Canvas, and it acts like a stencil through which a component does all of its drawing. By correctly setting the size, shape, and location of the clip rectangle aperture, the framework can prevent a component from drawing outside its boundaries or redrawing regions that are already correctly drawn.

Before proceeding to the specifics of drawing, let’s again put the discussion in the context of Android’s single-threaded MVC design pattern. There are two essential rules:

  • Drawing code should be inside the onDraw method. Your widget should draw itself completely, reflecting the program’s current state, when onDraw is invoked.

  • A widget should draw itself as quickly as possible when onDraw is invoked. The middle of the call to onDraw is no time to run a complex database query or to determine the status of some distant networked service. All the state you need to draw should be cached and ready for use at drawing time. Long-running tasks should use a separate thread and the Handler mechanism described in Advanced Wiring: Focus and Threading; a brief sketch of the pattern follows this list. The model state cached in the view is sometimes called the view-model.
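As a minimal sketch of that second rule, suppose a hypothetical widget displays a stock quote fetched over the network; the fetchQuote method and the field names here are invented for illustration:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.os.Handler;
import android.view.View;


public class QuoteWidget extends View {
    private final Handler handler = new Handler(); // bound to the UI thread
    private String quote = "...";                  // the cached view-model

    public QuoteWidget(Context context) { super(context); }

    /** Called from the UI thread to kick off a refresh. */
    public void refresh() {
        new Thread(new Runnable() {
            @Override public void run() {
                final String latest = fetchQuote(); // slow work, off the UI thread
                handler.post(new Runnable() {
                    @Override public void run() {
                        quote = latest;             // update the view-model...
                        invalidate();               // ...and request a repaint
                    }
                });
            }
        }).start();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Fast: just render the cached state.
        canvas.drawText(quote, 10, 20, new Paint());
    }

    /** Hypothetical slow network call. */
    private String fetchQuote() { return "ACME 42.00"; }
}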

The Android UI framework uses four main classes in drawing. If you are going to implement custom widgets and do your own drawing, you will want to become very familiar with them:

Canvas (a subclass of android.graphics.Canvas)

The canvas has no complete analog in real-life materials. You might think of it as a complex easel that can orient, bend, and even crumple the paper on which you are drawing in interesting ways. It maintains the clip rectangle, the stencil through which you paint. It can also scale drawings as they are drawn, like a photographic enlarger. It can even perform other transformations for which material analogs are more difficult to find: mapping colors and drawing text along paths.

Paint (a subclass of android.graphics.Paint)

This is the medium with which you will draw. It controls the color, transparency, and brush size for objects painted on the canvas. It also controls font, size, and style when drawing text.

Bitmap (a subclass of android.graphics.Bitmap)

This is the paper you are drawing on. It holds the actual pixels that you draw.

Drawables (likely a subclass of android.graphics.drawable.Drawable)

This is the thing you want to draw: a rectangle or image. Although not all of the things that you draw are Drawables (text, for instance, is not), many, especially the more complex ones, are.

Example 12-1 used only the Canvas, passed as a parameter to onDraw, to do its drawing. In order to do anything more interesting, we will need Paint, at the very least. Paint provides control over the color and transparency (alpha) of the graphics drawn with it. Paint has many, many other capabilities, some of which are described in Bling. Example 12-2, however, is enough to get you started. Explore the class documentation for other useful attributes.

The graphic created by the code in the example is shown in Figure 12-1.

Example 12-2. Using Paint
@Override
protected void onDraw(Canvas canvas) {
    canvas.drawColor(Color.WHITE);

    Paint paint = new Paint();

    canvas.drawLine(33, 0, 33, 100, paint);

    paint.setColor(Color.RED);
    paint.setStrokeWidth(10);
    canvas.drawLine(56, 0, 56, 100, paint);

    paint.setColor(Color.GREEN);
    paint.setStrokeWidth(5);

    for (int y = 30, alpha = 255; alpha > 2; alpha >>= 1, y += 10) {
        paint.setAlpha(alpha);
        canvas.drawLine(0, y, 100, y, paint);
    }
}
Figure 12-1. Using Paint

With the addition of Paint, we are prepared to understand most of the other tools necessary to create a useful widget. Example 12-3, for instance, is the widget used previously in Example 10-7. While still not very complex, it demonstrates all the pieces of a fully functional widget. It handles layout and highlighting, and reflects the state of the model to which it is attached.

Example 12-3. Dot widget
package com.oreilly.android.intro.view;

import android.content.Context;

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Paint.Style;

import android.view.View;

import com.oreilly.android.intro.model.Dot;
import com.oreilly.android.intro.model.Dots;


public class DotView extends View {
    private final Dots dots;

    /**
     * @param context the rest of the application
     * @param dots the dots we draw
     */
    public DotView(Context context, Dots dots) {
        super(context);
        this.dots = dots;
        setMinimumWidth(180);
        setMinimumHeight(200);
        setFocusable(true);
    }

    /** @see android.view.View#onMeasure(int, int) */
    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        setMeasuredDimension(
            getSuggestedMinimumWidth(),
            getSuggestedMinimumHeight());
    }

    /** @see android.view.View#onDraw(android.graphics.Canvas) */
    @Override protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.WHITE);

        Paint paint = new Paint();
        paint.setStyle(Style.STROKE);
        paint.setColor(hasFocus() ? Color.BLUE : Color.GRAY);
        canvas.drawRect(0, 0, getWidth() - 1, getHeight() - 1, paint);

        paint.setStyle(Style.FILL);
        for (Dot dot : dots.getDots()) {
            paint.setColor(dot.getColor());
            canvas.drawCircle(
                dot.getX(),
                dot.getY(),
                dot.getDiameter(),
                paint);
        }
    }
}

As with Paint, we have only enough space to begin an exploration of Canvas methods. There are two groups of functionality, however, that are worth special notice.

Drawing text

The most important Canvas methods are those used to draw text. Although some Canvas functionality is duplicated in other places, text-rendering capabilities are not. In order to put text in your widget, you will have to use the Canvas (or, of course, subclass some other widget that uses it).

Canvas methods for rendering text come in pairs: three sets of two signatures. Example 12-4 shows one of the pairs.

Example 12-4. A pair of text drawing methods
public void drawText(String text, float x, float y, Paint paint)
public void drawText(char[] text, int index, int count, float x, float y, Paint paint)

There are several pairs of methods. In each pair, the first method takes a String, and the second describes the text with three parameters: an array of char, the index of the first character in that array to be drawn, and the number of characters to be rendered. In some cases, there are additional convenience methods.

Example 12-5 contains an onDraw method that demonstrates the use of the first style of each of the three pairs of text rendering methods. The output is shown in Figure 12-2.

Example 12-5. Three ways of drawing text
@Override
protected void onDraw(Canvas canvas) {
    canvas.drawColor(Color.WHITE);

    Paint paint = new Paint();

    paint.setColor(Color.RED);
    canvas.drawText("Android", 25, 30, paint);

    Path path = new Path();
    path.addArc(new RectF(10, 50, 90, 200), 240, 90);
    paint.setColor(Color.CYAN);
    canvas.drawTextOnPath("Android", path, 0, 0, paint);

    float[] pos = new float[] {
        20, 80,
        29, 83,
        36, 80,
        46, 83,
        52, 80,
        62, 83,
        68, 80
    };
    paint.setColor(Color.GREEN);
    canvas.drawPosText("Android", pos, paint);
}
Figure 12-2. Output from three ways of drawing text

As you can see, the most elementary of the pairs, drawText, simply draws text at the passed coordinates. With drawTextOnPath, on the other hand, you can draw text along any Path. The example path is just an arc. It could just as easily have been a line drawing or a Bezier curve.

For those occasions on which even drawTextOnPath is insufficient, Canvas offers drawPosText, which lets you specify the exact position of each character in the text. Note that the character positions are specified by alternating array elements: x1,y1,x2,y2, and so on.

Matrix transformations

The second interesting group of Canvas methods are the Matrix transformations and their related convenience methods: translate, rotate, scale, and skew. These methods transform what you draw in ways that will immediately be recognizable to those familiar with 3D graphics. They allow a single drawing to be rendered in ways that can make it appear as if the viewer were moving with respect to the objects in the drawing.

The small application in Example 12-6 demonstrates the Canvas’s coordinate transformation capabilities.

Example 12-6. Using a Canvas
import android.app.Activity;

import android.content.Context;

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;

import android.os.Bundle;

import android.view.View;

import android.widget.LinearLayout;


public class TransformationalActivity extends Activity {

    private interface Transformation {
        void transform(Canvas canvas);
        String describe();
    }

    private static class TransformedViewWidget extends View {          // (1)
        private final Transformation transformation;

        public TransformedViewWidget(Context context, Transformation xform) {
            super(context);

            transformation = xform;                                     // (2)

            setMinimumWidth(160);
            setMinimumHeight(105);
        }

        @Override
        protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
            setMeasuredDimension(
                getSuggestedMinimumWidth(),
                getSuggestedMinimumHeight());
        }

        @Override
        protected void onDraw(Canvas canvas) {                          // (3)
            canvas.drawColor(Color.WHITE);

            Paint paint = new Paint();

            canvas.save();                                               // (4)
            transformation.transform(canvas);                            // (5)

            paint.setTextSize(12);
            paint.setColor(Color.GREEN);
            canvas.drawText("Hello", 40, 55, paint);

            paint.setTextSize(16);
            paint.setColor(Color.RED);
            canvas.drawText("Android", 35, 65, paint);

            canvas.restore();                                            // (6)

            paint.setColor(Color.BLACK);
            paint.setStyle(Paint.Style.STROKE);
            Rect r = canvas.getClipBounds();
            canvas.drawRect(r, paint);

            paint.setTextSize(10);
            paint.setColor(Color.BLUE);
            canvas.drawText(transformation.describe(), 5, 100, paint);
        }

    }

    @Override
    public void onCreate(Bundle savedInstanceState) {                   // (7)
        super.onCreate(savedInstanceState);
        setContentView(R.layout.transformed);

        LinearLayout v1 = (LinearLayout) findViewById(R.id.v_left);     // (8)
        v1.addView(new TransformedViewWidget(                            // (9)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "identity"; }
                @Override public void transform(Canvas canvas) { }
            } ));
        v1.addView(new TransformedViewWidget(                            // (9)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "rotate(-30)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.rotate(-30.0F);
                } }));
        v1.addView(new TransformedViewWidget(                            // (9)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "scale(.5,.8)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.scale(0.5F, 0.8F);
                } }));
        v1.addView(new TransformedViewWidget(                            // (9)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "skew(.1,.3)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.skew(0.1F, 0.3F);
                } }));

        LinearLayout v2 = (LinearLayout) findViewById(R.id.v_right);    // (11)
        v2.addView(new TransformedViewWidget(                            // (12)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "translate(30,10)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.translate(30.0F, 10.0F);
                } }));
        v2.addView(new TransformedViewWidget(                            // (12)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe()
                  { return "translate(110,-20),rotate(85)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.translate(110.0F, -20.0F);
                    canvas.rotate(85.0F);
                } }));
        v2.addView(new TransformedViewWidget(                            // (12)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe()
                  { return "translate(-50,-20),scale(2,1.2)"; }
                @Override public void transform(Canvas canvas) {
                    canvas.translate(-50.0F, -20.0F);
                    canvas.scale(2F, 1.2F);
                } }));
        v2.addView(new TransformedViewWidget(                            // (12)
            this,
            new Transformation() {                                       // (10)
                @Override public String describe() { return "complex"; }
                @Override public void transform(Canvas canvas) {
                    canvas.translate(-100.0F, -100.0F);
                    canvas.scale(2.5F, 2F);
                    canvas.skew(0.1F, 0.3F);
                } }));
    }
}

The results of this protracted exercise are shown in Figure 12-3.

Here are some of the highlights of the code:

1. Definition of the new widget, TransformedViewWidget.

2. Gets the actual transformation to perform from the second argument of the constructor.

3. The onDraw method of TransformedViewWidget.

4. Pushes the current Canvas state onto the save stack, via save, before performing any transformation.

5. Performs the transformation passed in at item 2.

6. Restores the old state saved at item 4, now that the transformed drawing is finished.

7. The Activity’s onCreate method.

8. Looks up the first layout view, v1.

9. Instantiations of TransformedViewWidget, added to layout view v1.

10. Creates a transformation as part of the parameter list to the constructor of TransformedViewWidget.

11. Looks up the second layout view, v2.

12. Instantiations of TransformedViewWidget, added to layout view v2.

Figure 12-3. Transformed views

This small application introduces several new ideas and demonstrates the power of Android graphics for maintaining state and nesting changes.

The application defines a single widget, TransformedViewWidget, of which it creates eight instances. For layout, the application uses two LinearLayout views, v1 and v2, whose parameters come from resources. It then adds four instances of TransformedViewWidget to each LinearLayout view. This is an example of how applications combine resource-based and dynamic views. Note that both the lookup of the layout views and the creation of the new widgets take place within the Activity’s onCreate method.

This application also makes the new widget flexible through a sophisticated division of labor between the widget and its Transformation. Several simple objects are drawn directly within the definition of TransformedViewWidget, in its onDraw method:

  • A white background

  • The word “Hello” in 12-point green type

  • The word “Android” in 16-point red type

  • A black frame

  • A blue label

In the middle of this, the onDraw method performs a transformation specified at its creation. The application defines its own interface, called Transformation, and the constructor for TransformedViewWidget accepts a Transformation as a parameter. We’ll see in a moment how the caller actually codes a transformation.

It’s important to see first how the widget’s onDraw method keeps its own frame and label from being affected by the Transformation. In this example, we want to make sure that the frame and label are drawn last, so that they are drawn over anything else drawn by the widget, even if they might overlap. On the other hand, we do not want the transformation applied earlier to affect them.

Fortunately, the Canvas maintains an internal stack onto which we can record and recover the translation matrix, clip rectangle, and many other elements of mutable state in the Canvas. Taking advantage of this stack, onDraw calls save to preserve its state before the transformation, and restore afterward to recover the saved state.

The rest of the application controls the transformation used in each of the eight instances of TransformedViewWidget. Each new instance of the widget is created with its own anonymous instance of Transformation. The image in the area labeled “identity” has no transformation applied. The other seven areas are labeled with the transformations they demonstrate.

The base methods for Canvas transformation are setMatrix and concat. These two methods allow you to build any possible transformation. The getMatrix method allows you to recover a dynamically constructed matrix for later use. The methods introduced in the example—translate, rotate, scale, and skew—are convenience methods that compose specific, constrained matrices into the current Canvas state.
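For instance, the rotation used in one of the widgets above could just as well have been expressed with an explicit Matrix; a small sketch:

Matrix m = new Matrix();
m.setRotate(-30.0F);                  // build the transformation explicitly
canvas.concat(m);                     // compose it into the current Canvas state
// canvas.rotate(-30.0F);             // the equivalent convenience call

Matrix snapshot = canvas.getMatrix(); // recover the composed matrix for later use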

Although it may not be obvious at first, these transformation functions can be tremendously useful. They allow your application to appear to change its point of view with respect to a 3D object. It doesn’t take too much imagination, for instance, to see the scene in the square labeled “scale(.5,.8)” as the same as that seen in the square labeled “identity”, but viewed from farther away. With a bit more imagination, the image in the box labeled “skew(.1,.3)” again could be the untransformed image, but this time viewed from above and slightly to the side. Scaling or translating an object can make it appear to a user as if the object has moved. Skewing and rotating can make it appear that the object has turned. We will make good use of this technique in animation.

When you consider that these transformation functions apply to everything drawn on a canvas—lines, text, and even images—their importance in applications becomes even more apparent. A view that displays thumbnails of photos could be implemented trivially, though perhaps not optimally, as a view that scales everything it displays to 10% of its actual size. An application that simulates what you see as you look to your left while driving down the street might be implemented in part by scaling and skewing a small number of images.

Drawables

A Drawable is an object that knows how to render itself on a Canvas. Because a Drawable has complete control during rendering, even a very complex rendering process can be encapsulated in a way that makes it fairly easy to use.

Examples 12-7 and 12-8 show the changes necessary to implement the previous example, Figure 12-3, using a Drawable. The code that draws the red and green text has been refactored into a HelloAndroidTextDrawable class, used in rendering by the widget’s onDraw method.

Example 12-7. Using a TextDrawable
private static class HelloAndroidTextDrawable extends Drawable {
    private ColorFilter filter;
    private int opacity = 0xFF;    // fully opaque unless setAlpha is called

    public HelloAndroidTextDrawable() {}

    @Override
    public void draw(Canvas canvas) {
        Paint paint = new Paint();

        paint.setColorFilter(filter);

        paint.setTextSize(12);
        paint.setColor(Color.GREEN);
        paint.setAlpha(opacity);   // setColor resets alpha, so apply it afterward
        canvas.drawText("Hello", 40, 55, paint);

        paint.setTextSize(16);
        paint.setColor(Color.RED);
        paint.setAlpha(opacity);
        canvas.drawText("Android", 35, 65, paint);
    }

    @Override
    public int getOpacity() { return PixelFormat.TRANSLUCENT; }

    @Override
    public void setAlpha(int alpha) { opacity = alpha; }

    @Override
    public void setColorFilter(ColorFilter cf) { filter = cf; }
}

Using the new Drawable implementation requires only a few small changes to the onDraw method.

Example 12-8. Using a Drawable widget
package com.oreilly.android.intro.widget;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.graphics.drawable.Drawable;
import android.view.View;


/**A widget that renders a drawable with a transformation */
public class TransformedViewWidget extends View {

    /** A transformation */
    public interface Transformation {
        /** @param canvas */
        void transform(Canvas canvas);
        /** @return text description of the transform. */
        String describe();
    }

    private final Transformation transformation;
    private final Drawable drawable;

    /**
     * Render the passed drawable, transformed.
     *
     * @param context app context
     * @param draw the object to be drawn, in transform
     * @param xform the transformation
     */
    public TransformedViewWidget(
        Context context,
        Drawable draw,
        Transformation xform)
    {
        super(context);

        drawable = draw;
        transformation = xform;

        setMinimumWidth(160);
        setMinimumHeight(135);
    }

    /** @see android.view.View#onMeasure(int, int) */
    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        setMeasuredDimension(
            getSuggestedMinimumWidth(),
            getSuggestedMinimumHeight());
    }

    /** @see android.view.View#onDraw(android.graphics.Canvas) */
    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.WHITE);

        canvas.save();
        transformation.transform(canvas);
        drawable.draw(canvas);
        canvas.restore();

        Paint paint = new Paint();
        paint.setColor(Color.BLACK);
        paint.setStyle(Paint.Style.STROKE);
        Rect r = canvas.getClipBounds();
        canvas.drawRect(r, paint);

        paint.setTextSize(10);
        paint.setColor(Color.BLUE);
        canvas.drawText(
            transformation.describe(),
            5,
            getMeasuredHeight() - 5,
            paint);
    }
}

This code begins to demonstrate the power of using a Drawable. This implementation of TransformedViewWidget will transform any Drawable, no matter what it happens to draw. It is no longer tied to rotating and scaling our original, hardcoded text. It can be reused to transform both the text from the previous example and a photo captured from the camera, as Figure 12-4 demonstrates. It could even be used to transform a Drawable animation.

Figure 12-4. Transformed views with photos

The ability to encapsulate complex drawing tasks in a single object with a straightforward API is a valuable—and even necessary—tool in the Android toolkit. Drawables make complex graphical techniques such as nine-patches and animation tractable. In addition, since they wrap the rendering process completely, Drawables can be nested to decompose complex rendering into small reusable pieces.

Consider for a moment how we might extend the previous example to make each of the six images fade to white over a period of a minute. Certainly, we could just change the code in Example 12-8 to do the fade. A different—and very appealing—implementation involves writing one new Drawable.

This new Drawable, FaderDrawable, will take, in its constructor, a reference to its target, the Drawable that it will fade to white. In addition, it must have some notion of time, probably an integer—let’s call it t—that is incremented by a timer. Whenever the draw method of FaderDrawable is called, it first calls the draw method of its target. Next, it paints over exactly the same area with the color white, using the value of t to determine the transparency (alpha value) of the paint (as demonstrated in Example 12-2). As time passes, t gets larger, the white gets more and more opaque, and the target Drawable fades to white.
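A minimal sketch of such a FaderDrawable might look like the following. The field names and the 0 to 255 range for t are assumptions, and a real implementation would also need something to advance t and trigger redraws:

import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.ColorFilter;
import android.graphics.Paint;
import android.graphics.PixelFormat;
import android.graphics.drawable.Drawable;


public class FaderDrawable extends Drawable {
    private final Drawable target;  // the Drawable being faded to white
    private final Paint veil = new Paint();
    private int t;                  // advanced by a timer, 0..255

    public FaderDrawable(Drawable target) { this.target = target; }

    /** Called by whatever timer drives the fade. */
    public void setTime(int t) { this.t = Math.min(t, 255); }

    @Override
    public void draw(Canvas canvas) {
        target.setBounds(getBounds());
        target.draw(canvas);                 // first, draw the wrapped Drawable
        veil.setColor(Color.WHITE);
        veil.setAlpha(t);                    // more opaque as time passes
        canvas.drawRect(getBounds(), veil);  // then paint white over the same area
    }

    @Override public int getOpacity() { return PixelFormat.TRANSLUCENT; }
    @Override public void setAlpha(int alpha) { target.setAlpha(alpha); }
    @Override public void setColorFilter(ColorFilter cf) { target.setColorFilter(cf); }
}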

This hypothetical FaderDrawable demonstrates some of the important features of Drawables. First, note that FaderDrawable is nicely reusable: it will fade just about any Drawable. Also note that, since FaderDrawable extends Drawable, we can use it anywhere that we would have used its target, the Drawable that it fades to white. Any code that uses a Drawable in its rendering process can use a FaderDrawable without change.

Of course, a FaderDrawable could itself be wrapped. In fact, it seems possible to achieve very complex effects, simply by building a chain of Drawable wrappers. The Android toolkit provides Drawable wrappers that support this strategy, including ClipDrawable, RotateDrawable, and ScaleDrawable.

At this point you may be mentally redesigning your entire UI in terms of Drawables. Although they are a powerful tool, they are not a panacea. There are several issues to keep in mind when considering the use of Drawables.

You may well have noticed that they share a lot of the functionality of the View class: location, dimensions, visibility, etc. It’s not always easy to decide when a View should draw directly on the Canvas, when it should delegate to a subview, and when it should delegate to one or more Drawable objects. There is even a DrawableContainer class that allows grouping several child Drawables within a parent. It is possible to build trees of Drawables that parallel the trees of Views we’ve been using so far. In dealing with the Android framework, you just have to accept that sometimes there is more than one way to scale a cat.

One difference between the two choices is that Drawables do not implement the View measure/layout protocol, which allows a container view to negotiate the layout of its components in response to changing view size. When a renderable object needs to add, remove, or lay out internal components, it’s a pretty good indication that it should be a full-fledged View instead of a Drawable.

A second issue to consider is that, because Drawables completely wrap the drawing process, they are not drawn the way String or Rect objects are. There are, for instance, no Canvas methods that will render a Drawable at specific coordinates. You may find yourself deliberating over whether, in order to render a certain image twice, a View onDraw method should use two different, immutable Drawables or a single Drawable twice, resetting its coordinates.

Perhaps most important, though, is a more generic problem. The idea of a chain of Drawables works because the Drawable interface contains no information about the internal implementation of the Drawable. When your code is passed a Drawable, there is no way for it to know whether it is something that will render a simple image or a complex chain of effects that rotates, flashes, and bounces. Clearly this can be a big advantage. But it can also be a problem.

Quite a bit of the drawing process is stateful. You set up Paint and then draw with it. You set up Canvas clip regions and transformations and then draw through them. When cooperating in a chain, if Drawables change state, they must be very careful that those changes never collide. The problem is that, when constructing a Drawable chain, the possibility of collision cannot be explicit in the object’s type by definition (they are all just Drawables). A seemingly small change might have an effect that is not desirable and is difficult to debug.

To illustrate the problem, consider two Drawable wrapper classes, one that is meant to shrink its contents and another that is meant to rotate them by 90 degrees. If either is implemented by setting the transformation matrix to a specific value (instead of composing its transformation with any that already exist), composing the two Drawables may not have the desired effect. Worse, it might work perfectly if A wraps B, but not if B wraps A! Careful documentation of how a Drawable is implemented is essential.

Bitmaps

The Bitmap is the last member of the four essentials for drawing: something to draw (a String, Rect, etc.), Paint with which to draw, a Canvas on which to draw, and the Bitmap to hold the bits. Most of the time, you don’t have to deal directly with a Bitmap, because the Canvas provided as an argument to the onDraw method already has one behind it. However, there are circumstances under which you may want to use a Bitmap directly.

A common use for a Bitmap is as a way to cache a drawing that is time-consuming to draw but unlikely to change frequently. Consider, for example, a drawing program that allows the user to draw in multiple layers. The layers act as transparent overlays on a base image, and the user turns them off and on at will. It might be very expensive to actually draw each individual layer every time onDraw gets called. Instead, it might be faster to render the entire drawing with all visible layers once, and only update it when the user changes which are visible.

The implementation of such an application might look something like Example 12-9.

Example 12-9. Bitmap caching
private class CachingWidget extends View {
    private Bitmap cache;

    public CachingWidget(Context context) {
        super(context);
        setMinimumWidth(200);
        setMinimumHeight(200);
    }

    public void invalidateCache() {
        cache = null;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (null == cache) {
            cache = Bitmap.createBitmap(
                getMeasuredWidth(),
                getMeasuredHeight(),
                Bitmap.Config.ARGB_8888);

            drawCachedBitmap(new Canvas(cache));
        }

        canvas.drawBitmap(cache, 0, 0, new Paint());
    }


    // ... definition of drawCachedBitmap
}

This widget normally just copies the cached Bitmap, cache, to the Canvas passed to onDraw. Only when the cache has been marked stale (by a call to invalidateCache) is drawCachedBitmap called to actually re-render the widget.

The most common way to encounter a Bitmap is as the programmatic representation of a graphics resource. Resources.getDrawable returns a BitmapDrawable when the resource is an image.
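For instance, assuming an image resource named R.drawable.photo (a hypothetical name), the underlying Bitmap can be recovered like this:

BitmapDrawable bd = (BitmapDrawable) getResources().getDrawable(R.drawable.photo);
Bitmap photo = bd.getBitmap();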

Combining these two ideas—caching an image and wrapping it in a Drawable—opens yet another interesting window. It means that anything that can be drawn can also be postprocessed. An application that used all of the techniques demonstrated in this chapter could allow a user to draw furniture in a room (creating a bitmap) and then to walk around it (using the matrix transforms).

Bling

The Android UI framework is a lot more than just an intelligent, well-put-together GUI toolkit. When it takes off its glasses and shakes out its hair, it can be downright sexy! The tools mentioned here certainly do not make an exhaustive catalog. They might get you started, though, on the path to making your application Filthy Rich.

Warning

Several of the techniques discussed in this section are close to the edges of the Android landscape. As such, they are less well established: the documentation is not as thorough, some of the features are clearly in transition, and you may even find bugs. If you run into problems, the Google Group “Android Developers” is an invaluable resource. Questions about a particular aspect of the toolkit have sometimes been answered by the very person responsible for implementing that aspect.

Be careful about checking the dates on solutions you find by searching the Web. Some of these features are changing rapidly, and code that worked as recently as six months ago may not work now. A corollary, of course, is that any application that gets wide distribution is likely to be run on platforms that have differing implementations of the features discussed here. By using these techniques, you may limit the lifetime of your application and the number of devices that it will support.

The rest of this section considers a single application, much like the one used in Example 12-6: a couple of LinearLayouts that contain multiple instances of a single widget, each demonstrating a different graphics effect. Example 12-10 contains the key parts of the widget, with code discussed previously elided for brevity. The widget simply draws a few graphical objects and defines an interface through which various graphics effects can be applied to the rendering.

Example 12-10. Effects widget
public class EffectsWidget extends View {

    /** The effect to apply to the drawing */
    public interface PaintEffect { void setEffect(Paint paint); }

    // ...

    // PaintWidget's widget rendering method
    protected void onDraw(Canvas canvas) {
        Paint paint = new Paint();
        paint.setAntiAlias(true);

        effect.setEffect(paint);
        paint.setColor(Color.DKGRAY);

        paint.setStrokeWidth(5);
        canvas.drawLine(10, 10, 140, 20, paint);

        paint.setTextSize(26);
        canvas.drawText("Android", 40, 50, paint);

        paint = new Paint();
        paint.setColor(Color.BLACK);
        canvas.drawText(String.valueOf(id), 2.0F, 12.0F, paint);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(2);
        canvas.drawRect(canvas.getClipBounds(), paint);
    }
}

The application that uses this widget, shown in Example 12-11, should also feel familiar. It creates several copies of the EffectsWidget, each with its own effect. There are two special widgets: the bottom widget in the right column is animated, and the bottom widget in the left column uses OpenGL animation.

Example 12-11. Effects application
private void buildView() {
    setContentView(R.layout.main);

    LinearLayout view = (LinearLayout) findViewById(R.id.v_left);
    view.addView(new EffectsWidget(
        this,
        1,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) {
                paint.setShadowLayer(1, 3, 4, Color.BLUE);
            } }));
    view.addView(new EffectsWidget(
        this,
        3,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) {
                paint.setShader(
                    new LinearGradient(
                        0.0F,
                        0.0F,
                        160.0F,
                        80.0F,
                        new int[] { Color.BLACK, Color.RED, Color.YELLOW },
                        new float[] { 0.2F, 0.3F, 0.2F },
                        Shader.TileMode.REPEAT));
            } }));
    view.addView(new EffectsWidget(
        this,
        5,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) {
                paint.setMaskFilter(
                    new BlurMaskFilter(2, BlurMaskFilter.Blur.NORMAL));
            } }));

    // Not an EffectsWidget: this is the OpenGL animation widget.
    glWidget = new GLDemoWidget(this);
    view.addView(glWidget);


    view = (LinearLayout) findViewById(R.id.v_right);
    view.addView(new EffectsWidget(
        this,
        2,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) {
                paint.setShadowLayer(3, -8, 7, Color.GREEN);
            } }));
    view.addView(new EffectsWidget(
        this,
        4,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) {
                paint.setShader(
                    new LinearGradient(
                        0.0F,
                        40.0F,
                        15.0F,
                        40.0F,
                        Color.BLUE,
                        Color.GREEN,
                        Shader.TileMode.MIRROR));
            } }));

    // A widget with an animated background
    View w = new EffectsWidget(
        this,
        6,
        new EffectsWidget.PaintEffect() {
            @Override public void setEffect(Paint paint) { }
        });
    view.addView(w);
    w.setBackgroundResource(R.drawable.throbber);

    // This is, alas, necessary until Cupcake.
    w.setOnClickListener(new OnClickListener() {
        @Override public void onClick(View v) {
            ((AnimationDrawable) v.getBackground()).start();
        } });
}

Figure 12-5 shows what the code looks like when run. The bottom two widgets are animated: the green checkerboard moves from left to right across the widget, and the bottom-right widget has a throbbing red background.

Figure 12-5. Graphics effects

Shadows, Gradients, and Filters

PathEffect, MaskFilter, ColorFilter, Shader, and ShadowLayer are all attributes of Paint. Anything drawn with Paint can be drawn under the influence of one or more of these transformations. The top several widgets in Figure 12-5 give examples of some of these effects.

Widgets 1 and 2 demonstrate shadows. Shadows are currently controlled by the setShadowLayer method. The arguments, a blur radius and X and Y displacements, control the apparent distance and position of the light source that creates the shadow, with respect to the shadowed object. Although this is a very neat feature, the documentation explicitly warns that it is a temporary API. However, it seems unlikely that the setShadowLayer method will completely disappear or even that future implementations will be backward-incompatible.

The Android toolkit contains several prebuilt shaders. Widgets 3 and 4 demonstrate one of them, the LinearGradient shader. A gradient is a regular transition between colors that might be used, for example, to give a page background a bit more life, without resorting to expensive bitmap resources.

A LinearGradient is specified with a vector that determines the direction and rate of the color transition, an array of colors through which to transition, and a mode. The final argument, the mode, determines what happens when a single complete transition through the gradient is insufficient to cover the entire painted object. For instance, in widget 4, the transition is only 15 pixels long, whereas the drawing is more than 100 pixels wide. Using the mode Shader.TileMode.MIRROR causes the transition to repeat, alternating direction across the drawing. In the example, the gradient transitions from blue to green in 15 pixels, then from green to blue in the next 15, and so on across the canvas.

Animation

The Android UI toolkit offers several different animation tools. Transition animations—which the Google documentation calls tweened animations—are subclasses of android.view.animation.Animation: RotateAnimation, TranslateAnimation, ScaleAnimation, etc. These animations are used as transitions between pairs of views. A second type of animation, instances of android.graphics.drawable.AnimationDrawable, can be put into the background of any widget to provide a wide variety of effects. Finally, there is full-on, frame-by-frame animation on top of a SurfaceView, which gives you complete control to do your own seat-of-the-pants animation.

Because both of the first two types of animation, transition and background, are supported by View—the base class for all widgets—every widget, whether from the toolkit or custom-built, can potentially support them.

Transition animation

A transition animation is started by calling the View method startAnimation with an instance of Animation (or, of course, your own subclass). Once installed, the animation runs to completion: transition animations have no pause state.
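For example, a minimal transition using one of the stock Animation subclasses might look like this (someView stands in for any widget already attached to the layout):

// Slide the widget 100 pixels to the right over half a second.
Animation slide = new TranslateAnimation(0.0f, 100.0f, 0.0f, 0.0f);
slide.setDuration(500);
someView.startAnimation(slide);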

The heart of the animation is its applyTransformation method. This method is called to produce successive frames of the animation. Example 12-12 shows the implementation of one transformation. As you can see, it does not actually generate entire graphical frames for the animation. Instead, it generates successive transformations to be applied to a single image being animated. You will recall, from the section Matrix transformations, that matrix transformations can be used to make an object appear to move. Transition animations depend on exactly this trick.

Example 12-12. Transition animation
@Override
protected void applyTransformation(float t, Transformation xf) {
    Matrix xform = xf.getMatrix();

    float z = ((dir > 0) ? 0.0f : -Z_MAX) - (dir * t * Z_MAX);

    camera.save();
    camera.rotateZ(t * 360);
    camera.translate(0.0F, 0.0F, z);
    camera.getMatrix(xform);
    camera.restore();

    xform.preTranslate(-xCenter, -yCenter);
    xform.postTranslate(xCenter, yCenter);
}

This particular implementation makes its target appear to spin in the screen plane (the rotate method call), and at the same time, to shrink into the distance (the translate method call). The matrix that will be applied to the target image is obtained from the Transformation object passed in that call.

This implementation uses camera, an instance of the utility class Camera. The Camera class—not to be confused with the camera in the phone—is a utility that makes it possible to record rendering state. It is used here to compose the rotation and translation transformations into a single matrix, which is then stored as the animation transformation.

The first parameter to applyTransformation, named t, is effectively the frame number. It is passed as a floating-point number between 0.0 and 1.0, and might also be understood as the percent of the animation that is complete. This example uses t to increase the apparent distance along the Z-axis (a line perpendicular to the plane of the screen) of the image being animated, and to set the proportion of one complete rotation through which the image has passed. As t increases, the animated image appears to rotate further and further counter-clockwise and to move farther and farther away, along the Z-axis, into the distance.

The preTranslate and postTranslate operations are necessary to make the transformation happen around the image's center. By default, matrix operations transform their target around the origin; if we did not perform these bracketing translations, the target image would appear to rotate around its upper-left corner. preTranslate effectively moves the origin to the center of the animation target before the camera transformation is applied, and postTranslate moves it back afterward.

If you consider what a transition animation must do, you’ll realize that it is likely to compose two animations: the previous screen must be animated out and the next one animated in. Example 12-12 supports this using the remaining, unexplained variable dir. Its value is either 1 or –1, and it controls whether the animated image seems to shrink into the distance or grow into the foreground. We need only find a way to compose a shrink and a grow animation.

This is done using the familiar Listener pattern. The Animation class defines a listener named Animation.AnimationListener. Any instance of Animation that has a nonnull listener calls that listener once when it starts, once when it stops, and once for each iteration in between. Creating a listener that notices when the shrinking animation completes and spawns a new growing animation will create exactly the effect we desire. Example 12-13 shows the rest of the implementation of the animation.

Example 12-13. Transition animation composition
public void runAnimation() {
    animateOnce(new AccelerateInterpolator(), this);
}

@Override
public void onAnimationEnd(Animation animation) {
    root.post(new Runnable() {
        public void run() {
            curView.setVisibility(View.GONE);
            nextView.setVisibility(View.VISIBLE);
            nextView.requestFocus();
            new RotationTransitionAnimation(-1, root, nextView, null)
                .animateOnce(new DecelerateInterpolator(), null);
        } });
}

void animateOnce(
    Interpolator interpolator,
    Animation.AnimationListener listener)
{
    setDuration(700);
    setInterpolator(interpolator);
    setAnimationListener(listener);
    root.startAnimation(this);
}

The runAnimation method starts the transition. The overridden AnimationListener method, onAnimationEnd, spawns the second half. Called when the target image appears to be far in the distance, it hides the image being animated out (the curView) and replaces it with the newly visible image, nextView. It then creates a new animation that, running in reverse, spins and grows the new image into the foreground.

The Interpolator class represents a nifty attention to detail. The values for t, passed to applyTransformation, need not be linearly distributed over time. In this implementation the animation appears to speed up as it recedes, and then to slow again as the new image advances. This is accomplished by using two interpolators: AccelerateInterpolator for the first half of the animation and DecelerateInterpolator for the second. Without an interpolator, the difference between successive values of t passed to applyTransformation would be constant, making the animation appear to run at a constant speed. The AccelerateInterpolator converts those equally spaced values of t into values that are close together at the beginning of the animation and much further apart toward the end. This makes the animation appear to speed up. DecelerateInterpolator has exactly the opposite effect. Android also provides a CycleInterpolator and a LinearInterpolator, for use as appropriate.

Animation composition is actually built into the toolkit, using the (perhaps confusingly named) AnimationSet class. This class provides a convenient way to specify a list of animations to be played, in order (fortunately not a Set: it is ordered and may refer to a given animation more than once). In addition, the toolkit provides several standard transitions: AlphaAnimation, RotateAnimation, ScaleAnimation, and TranslateAnimation. Certainly, there is no need for these transitional animations to be symmetric, as they are in the previous implementation. A new image might alpha fade in as the old one shrinks into a corner or slide up from the bottom as the old one fades out. The possibilities are endless.
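A sketch of such an asymmetric composition, with the incoming view fading in while it slides up from the bottom (incomingView is a placeholder for whatever widget is being revealed), might look like this:

AnimationSet set = new AnimationSet(true);   // true: children share one interpolator
set.addAnimation(new AlphaAnimation(0.0f, 1.0f));
set.addAnimation(new TranslateAnimation(0.0f, 0.0f, 200.0f, 0.0f));
set.setDuration(700);
incomingView.startAnimation(set);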

Background animation

Frame-by-frame animation, as it is called in the Google documentation, is completely straightforward: a set of frames, played in order at regular intervals. This kind of animation is implemented by subclasses of AnimationDrawable.

As subclasses of Drawable, AnimationDrawable objects can be used in any context that any other Drawable is used. The mechanism that animates them, however, is not a part of the Drawable itself. In order to animate, an AnimationDrawable relies on an external service provider—an implementation of the Drawable.Callback interface—to animate it.

The View class implements this interface and can be used to animate an AnimationDrawable. Unfortunately, it will supply animation services only to the one Drawable object that is installed as its background with one of the two methods setBackgroundDrawable or setBackgroundResource.

The good news, however, is that this is probably sufficient. A background animation has access to the entire widget canvas. Everything it draws will appear to be behind anything drawn by the View.onDraw method, so it would be hard to use the background to implement full-fledged sprites (animation integrated into a static scene). Still, with clever use of the DrawableContainer class (which allows you to animate several different animations simultaneously) and because the background can be changed at any time, it is possible to accomplish quite a bit without resorting to implementing your own animation framework.

An AnimationDrawable in a view background is entirely sufficient to do anything from, say, indicating that some long-running activity is taking place—maybe winged packets flying across the screen from a phone to a tower—to simply making a button’s background pulse.

The pulsing button example is illustrative and surprisingly easy to implement. Examples 12-14 and 12-15 show all you need. The animation is defined as a resource, and code applies it to the button.

Example 12-14. Frame-by-frame animation (resource)
<animation-list
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:oneshot="false">
  <item android:drawable="@drawable/throbber_f0" android:duration="70" />
  <item android:drawable="@drawable/throbber_f1" android:duration="70" />
  <item android:drawable="@drawable/throbber_f2" android:duration="70" />
  <item android:drawable="@drawable/throbber_f3" android:duration="70" />
  <item android:drawable="@drawable/throbber_f4" android:duration="70" />
  <item android:drawable="@drawable/throbber_f5" android:duration="70" />
  <item android:drawable="@drawable/throbber_f6" android:duration="70" />
</animation-list>
          
Example 12-15. Frame-by-frame animation (code)
// button is a Button whose background will "throb"
button.setBackgroundResource(R.drawable.throbber);

//!!! This is necessary, but should not be so in Cupcake
button.setOnClickListener(new OnClickListener() {
    @Override public void onClick(View v) {
        AnimationDrawable animation
            = (AnimationDrawable) v.getBackground();
        if (animation.isRunning()) { animation.stop(); }
        else { animation.start(); }
        // button action.
    } });

There are several gotchas here, though. First of all, as of this writing, the animation-list example in the Google documentation does not quite work. There is a problem with the way it identifies the animation-list resource. To make it work, don’t define an android:id in that resource. Instead, simply refer to the object by its filename (R.drawable.throbber), as Example 12-15 demonstrates.

The second issue is that a bug in the V1_r2 release of the toolkit prevents a background animation from being started in the Activity.onCreate method. If your application’s background should be animated whenever it is visible, you’ll have to use trickery to start it. The example implementation uses an onClick handler. There are suggestions on the Web that the animation can also be started successfully from a thread that pauses briefly before calling AnimationDrawable.start. The Android development team has a fix for this problem, so the constraint should be relaxed with the release of Cupcake.

Finally, if you have worked with other UI frameworks, especially mobile UI frameworks, you may be accustomed to painting the view background in the first couple of lines of the onDraw method (or its equivalent). If you do that in Android, however, you will paint over your animation. It is, in general, a good idea to get into the habit of using the setBackground* methods (setBackgroundColor, setBackgroundResource, or setBackgroundDrawable) to control the View background, whether it is a solid color, a gradient, an image, or an animation.

Specifying an AnimationDrawable as a resource is very flexible: you can list any drawable resources you like as the frames of the animation. If the frames are not known until runtime, you can also build an AnimationDrawable dynamically in code and install it as a View background, as the sketch below shows.
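A minimal sketch, assuming frame0 and frame1 are Drawable objects obtained elsewhere (from resources, downloads, or drawing code) and widget is the view to animate:

AnimationDrawable anim = new AnimationDrawable();
anim.addFrame(frame0, 70);   // show each frame for 70 milliseconds
anim.addFrame(frame1, 70);
anim.setOneShot(false);      // loop indefinitely
widget.setBackgroundDrawable(anim);
// ... later, once the widget is attached and visible:
anim.start();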

Surface view animation

Full-on animation requires a SurfaceView. The SurfaceView provides a node in the view tree (and, therefore, space on the display) on which any process at all can draw. The SurfaceView node is laid out, sized, and receives clicks and updates, just like any other widget. Instead of drawing, however, it simply reserves space on the screen, preventing other widgets from affecting any of the pixels within its frame.

Drawing on a SurfaceView requires implementing the SurfaceHolder.Callback interface. The two methods surfaceCreated and surfaceDestroyed inform the implementor that the drawing surface is available for drawing and that it has become unavailable, respectively. The argument to both of the calls is an instance of yet a third class, SurfaceHolder. In the interval between these two calls, a drawing routine can call the SurfaceHolder methods lockCanvas and unlockCanvasAndPost to edit the pixels there.

If this seems complex, even alongside some of the elaborate animation discussed previously…well, it is. As usual, concurrency increases the likelihood of nasty, hard-to-find bugs. The client of a SurfaceView must be sure that access to any state shared across threads is properly synchronized, and also that it never touches the SurfaceView, Surface, or Canvas except in the interval between the calls to surfaceCreated and surfaceDestroyed. The toolkit could clearly benefit from more complete framework support for SurfaceView animation.
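A bare-bones sketch of the contract looks something like the following. The class name, the trivial drawing loop, and the shutdown handling are illustrative only; a real animation would need more careful thread management and synchronization:

public class ScratchSurface extends SurfaceView
    implements SurfaceHolder.Callback, Runnable
{
    private Thread drawer;
    private volatile boolean running;

    public ScratchSurface(Context context) {
        super(context);
        // Register so we hear about surfaceCreated/surfaceDestroyed.
        getHolder().addCallback(this);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        // The surface now exists: safe to start drawing from our own thread.
        running = true;
        drawer = new Thread(this);
        drawer.start();
    }

    @Override public void surfaceChanged(
        SurfaceHolder holder, int format, int width, int height) { }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        // Stop drawing before the surface goes away.
        running = false;
        try { drawer.join(); } catch (InterruptedException e) { }
    }

    @Override public void run() {
        while (running) {
            Canvas canvas = getHolder().lockCanvas();
            if (canvas == null) { continue; }
            try {
                canvas.drawColor(Color.BLACK);
                // ... draw the current frame here ...
            }
            finally { getHolder().unlockCanvasAndPost(canvas); }
        }
    }
}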

If you are considering SurfaceView animation, you are probably also considering OpenGL graphics. As we’ll see, there is an extension available for OpenGL animation on a SurfaceView. It will turn up in a somewhat out-of-the-way place, though.

OpenGL Graphics

The Android platform supports OpenGL graphics in roughly the same way that a silk hat supports rabbits. Although this is certainly among the most exciting technologies in Android, it is definitely at the edge of the map. It also appears that just before the final beta release, the interface underwent major changes. Much of the code and many of the suggestions found on the Web are obsolete and no longer work.

The API V1_r2 release is an implementation of OpenGL ES 1.0 and much of ES 1.1. It is, essentially, a domain-specific language embedded in Java. Someone who has been doing gaming UIs for a while is likely to be much more comfortable developing Android OpenGL programs than a Java programmer, even a programmer who is a Java UI expert.

Before discussing the OpenGL graphics library itself, we should take a minute to consider exactly how pixels drawn with OpenGL appear on the display. The rest of this chapter has discussed the intricate View framework that Android uses to organize and represent objects on the screen. OpenGL is a language in which an application describes an entire scene that will be rendered by an engine that is not only outside the JVM, but possibly running on another processor altogether (the Graphics Processing Unit, or GPU). Coordinating the two processors’ views of the screen is tricky.

The SurfaceView, discussed earlier, is nearly the right thing. Its purpose is to create a surface on which a thread other than the UI graphics thread can draw. The tool we’d like is an extension of SurfaceView that has a bit more support for concurrency, combined with support for OpenGL.

It turns out that there is exactly such a tool. All of the demo applications in the Android SDK distribution that do OpenGL animation depend on the utility class GLSurfaceView. Since the demo applications written by the creators of Android use this class, considering it for other applications seems advisable.
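As a sketch of the wiring involved (assuming GLSurfaceView is available to your project, either copied from the SDK demos or, in later releases, as android.opengl.GLSurfaceView, and that DemoRenderer is a placeholder name for a renderer such as the one in Example 12-16), an Activity might set it up like this:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // The GLSurfaceView fills the Activity and delegates drawing to the renderer.
    GLSurfaceView glView = new GLSurfaceView(this);
    glView.setRenderer(new DemoRenderer(this));
    setContentView(glView);
}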

GLSurfaceView defines an interface, GLSurfaceView.Renderer, which dramatically simplifies the otherwise overwhelming complexity of using OpenGL and GLSurfaceView. GLSurfaceView calls the renderer method getConfigSpec to get its OpenGL configuration information. Two other methods, sizeChanged and surfaceCreated, are called by the GLSurfaceView to inform the renderer that its size has changed or that it should prepare to draw, respectively. Finally, drawFrame, the heart of the interface, is called to render a new OpenGL frame.

Example 12-16 shows the important methods from the implementation of an OpenGL renderer.

Example 12-16. Frame-by-frame animation with OpenGL
// ... some state set up in the constructor

@Override
public void surfaceCreated(GL10 gl) {
    // set up the surface
    gl.glDisable(GL10.GL_DITHER);

    gl.glHint(
        GL10.GL_PERSPECTIVE_CORRECTION_HINT,
        GL10.GL_FASTEST);

    gl.glClearColor(0.4f, 0.2f, 0.2f, 0.5f);
    gl.glShadeModel(GL10.GL_SMOOTH);
    gl.glEnable(GL10.GL_DEPTH_TEST);

    // fetch the checker-board
    initImage(gl);
}

@Override
public void drawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);

    // apply the checker-board to the shape
    gl.glActiveTexture(GL10.GL_TEXTURE0);

    gl.glTexEnvx(
        GL10.GL_TEXTURE_ENV,
        GL10.GL_TEXTURE_ENV_MODE,
        GL10.GL_MODULATE);
    gl.glTexParameterx(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_WRAP_S,
        GL10.GL_REPEAT);
    gl.glTexParameterx(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_WRAP_T,
        GL10.GL_REPEAT);

    // animation
    int t = (int) (SystemClock.uptimeMillis() % (10 * 1000L));
    gl.glTranslatef(6.0f - (0.0013f * t), 0, 0);

    // draw
    gl.glFrontFace(GL10.GL_CCW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuf);
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuf);
    gl.glDrawElements(
        GL10.GL_TRIANGLE_STRIP,
        5,
        GL10.GL_UNSIGNED_SHORT, indexBuf);
}

private void initImage(GL10 gl) {
    int[] textures = new int[1];
    gl.glGenTextures(1, textures, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

    gl.glTexParameterf(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_MIN_FILTER,
        GL10.GL_NEAREST);
    gl.glTexParameterf(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_MAG_FILTER,
        GL10.GL_LINEAR);
    gl.glTexParameterf(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_WRAP_S,
        GL10.GL_CLAMP_TO_EDGE);
    gl.glTexParameterf(
        GL10.GL_TEXTURE_2D,
        GL10.GL_TEXTURE_WRAP_T,
        GL10.GL_CLAMP_TO_EDGE);
    gl.glTexEnvf(
        GL10.GL_TEXTURE_ENV,
        GL10.GL_TEXTURE_ENV_MODE,
        GL10.GL_REPLACE);

    InputStream in
        = context.getResources().openRawResource(R.drawable.cb);
    Bitmap image;
    try { image = BitmapFactory.decodeStream(in); }
    finally {
        try { in.close(); } catch(IOException e) {  }
    }

    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, image, 0);

    image.recycle();
}

The surfaceCreated method prepares the scene. It sets several OpenGL attributes that need to be initialized only when the widget gets a new drawing surface. In addition, it calls initImage, which reads in a bitmap resource and stores it as a 2D texture. Finally, when drawFrame is called, everything is ready for drawing. The texture is applied to a plane whose vertices were set up in vertexBuf by the constructor, the animation phase is chosen, and the scene is redrawn.
