Chapter 1. Object-Oriented Programming: What's It All About?

In This Chapter

  • Reviewing the basics of object-oriented programming

  • Getting a handle on abstraction and classification

  • Understanding why object-oriented programming is important

This chapter answers the two-pronged musical question: "What are the concepts behind object-oriented programming, and how do they differ from the procedural concepts covered in Book I?"

Object-Oriented Concept #1: Abstraction

Sometimes, when my son and I are watching football, I whip up a terribly unhealthy batch of nachos. I dump chips on a plate, throw on some beans and cheese and lots of jalapeños, and nuke the whole mess in the microwave oven for a few minutes.

To use my microwave, I open the door, throw in the plate of food, and punch a few buttons on the front. After a few minutes, the nachos are done. (I try not to stand in front of the microwave while it's working, lest my eyes start glowing in the dark.)

Now think for a minute about all the things I don't do in order to use my microwave. I don't

  • Rewire or change anything inside the microwave to get it to work. The microwave has an interface — the front panel with all the buttons and the little time display — that lets me do everything I need.

  • Reprogram the software used to drive the little processor inside the microwave, even if I cooked a different dish the last time I used the microwave.

  • Look inside the microwave's case.

Even if I were a microwave designer and knew all about the inner workings of a microwave, including its software, I still wouldn't think about all those concepts while using it to heat nachos.

Note

These observations aren't profound: You can deal with only so much stress in your life. To reduce the number of issues you deal with, you work at a certain level of detail. In object-oriented (OO) computerese, the level of detail at which you're working is the level of abstraction. To introduce another OO term while I have the chance, I abstract away the details of the microwave's innards.

Happily, computer scientists — and thousands of geeks — have invented object orientation and numerous other concepts that reduce the level of complexity at which programmers have to work. Using powerful abstractions makes the job simpler and far less error-prone than it used to be. In a sense, that's what the past half-century or so of computing progress has been about: managing ever more complex concepts and structures with ever fewer errors.

When I'm working on nachos, I view my microwave oven as a box. (While I'm trying to knock out a snack, I can't worry about the innards of the microwave oven and still follow the Dallas Cowboys on the tube.) As long as I use the microwave only by way of its interface (the keypad), nothing I can do should cause the microwave to enter an inconsistent state and crash or, worse, turn my nachos — or my house — into a blackened, flaming mass.
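The microwave-as-a-box idea maps directly onto a C# class. Here's a minimal sketch (all names are illustrative, not from any real library): the public methods are the keypad, and the private fields are the innards you never see.

```csharp
// A sketch of abstraction: the public members are the "front panel,"
// and everything else is hidden inside the case.
public class Microwave
{
    // Innards the user never touches
    private int _magnetronVoltage = 3000;
    private bool _doorOpen = false;

    public void OpenDoor()  { _doorOpen = true; }
    public void CloseDoor() { _doorOpen = false; }

    // The only way to cook: set a time and press Start
    public void Cook(int seconds)
    {
        if (_doorOpen)
        {
            System.Console.WriteLine("Close the door first!");
            return;
        }
        System.Console.WriteLine($"Cooking for {seconds} seconds...");
        // ...energize the magnetron, run the timer, sound the tone...
    }
}
```

A caller writes `new Microwave().Cook(300);` and never learns (or cares) what `_magnetronVoltage` is. That's abstraction at work.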

Preparing procedural nachos

Suppose that I ask my son to write an algorithm for how to make nachos. After he understands what I want, he can write, "Open a can of beans, grate some cheese, cut the jalapeños," and so on. When he reaches the part about microwaving the concoction, he might write (on a good day) something like this: "Cook in the microwave for five minutes."

That description is straightforward and complete. But it isn't the way a procedural programmer would code a program to make nachos. Procedural programmers live in a world devoid of objects such as microwave ovens and other appliances. They tend to worry about flowcharts with their myriad procedural paths. In a procedural solution to the nachos problem, the flow of control would pass through my finger to the front panel and then to the internals of the microwave. Soon, flow would wiggle through complex logic paths about how long to turn on the microwave tube and whether to sound the "come and get it" tone.

In that world of procedural programming, you can't easily think in terms of levels of abstraction. You have no objects and no abstractions behind which to hide inherent complexity.

Preparing object-oriented nachos

In an object-oriented approach to making nachos, you first identify the types of objects in the problem: chips, beans, cheese, jalapeños, and an oven. Then you begin the task of modeling those objects in software, without regard for the details of how they might be used in the final program. For example, you can model cheese as an object in isolation from the other objects and then combine it with the beans, the chips, the jalapeños, and the oven and make them interact. (And you might decide that some of these objects don't need to be objects in the software: cheese, for instance.)

While you do that, you're said to be working (and thinking) at the level of the basic objects. You need to think about making a useful oven, but you don't have to think about the logical process of making nachos — yet. After all, the microwave designers didn't think about the specific problem of you making a snack. Rather, they set about solving the problem of designing and building a useful microwave.

After you successfully code and test the objects you need, you can ratchet up to the next level of abstraction and start thinking at the nacho-making level rather than at the microwave-making level.

(And, at this point, I can translate my son's instructions directly into C# code.)
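As a hedged sketch of what that translation might look like (the `Plate` and `Microwave` classes here are illustrative stand-ins, assumed already coded and tested), the top-level program reads almost word for word like the recipe:

```csharp
// The nacho-making level of abstraction: the program reads
// like the recipe because the objects already exist.
public class Plate
{
    public void Add(string ingredient) =>
        System.Console.WriteLine($"Adding {ingredient}");
}

public class Microwave
{
    public void Cook(int seconds) =>
        System.Console.WriteLine($"Cooking for {seconds} seconds");
}

public class NachoMaker
{
    public static void Main()
    {
        var plate = new Plate();
        plate.Add("chips");
        plate.Add("beans");
        plate.Add("cheese");
        plate.Add("jalapenos");

        var oven = new Microwave();
        oven.Cook(300);   // "Cook in the microwave for five minutes"
    }
}
```

Notice that `Main()` never mentions magnetrons or voltages; it works entirely at the nacho-making level.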

Object-Oriented Concept #2: Classification

Critical to the concept of abstraction is that of classification. If I were to ask my son, "What's a microwave?" he might say, "It's an oven that. . . ." If I then ask, "What's an oven?" he might reply "It's a kitchen appliance that. . . ." If I then ask "What's a kitchen appliance?" he would probably say "Why are you asking so many stupid questions?"

The answers my son might give stem from his understanding of this particular microwave as an example of the type of item known as a microwave oven. In addition, he might see a microwave oven as just a special type of oven, which itself is just a special type of kitchen appliance.

Note

In object-oriented computerese, the microwave is an instance of the class microwave. The class microwave is a subclass of the class oven, and the class oven is a subclass of the class kitchen appliance.
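In C#, that chain of "is a special type of" relationships is written with inheritance. A quick sketch (class and method names are illustrative):

```csharp
// Each class is a subclass of the one above it.
public class KitchenAppliance
{
    public void PlugIn() => System.Console.WriteLine("Plugged in");
}

public class Oven : KitchenAppliance
{
    public void Heat() => System.Console.WriteLine("Heating");
}

public class Microwave : Oven
{
    public void SetTimer(int seconds) =>
        System.Console.WriteLine($"Timer set to {seconds} seconds");
}

public class Kitchen
{
    public static void Main()
    {
        // myOven is an *instance* of the class Microwave
        var myOven = new Microwave();
        myOven.PlugIn();       // inherited from KitchenAppliance
        myOven.Heat();         // inherited from Oven
        myOven.SetTimer(300);  // specific to Microwave
    }
}
```

Because `Microwave` is a subclass of `Oven`, it inherits everything an `Oven` can do and adds only what's unique to microwaves, exactly like the mental shortcut "an SUV is a car that. . . ."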

Humans classify. Everything about our world is ordered into taxonomies. We do this to reduce the number of items we have to remember. For example, the first time you saw an SUV, the advertisement probably referred to the SUV as "revolutionary, the likes of which have never been seen." But you and I know that it just isn't so. I like the looks of certain SUVs (the makers of others need to take another crack at it), but hey, an SUV is a car. As such, it shares all (or at least most of) the properties of other cars. It has a steering wheel, seats, a motor, and brakes, for example. I would bet that I could even drive one without reading the user's manual first.

I don't have to clutter the limited amount of storage space in my head with all the features that an SUV has in common with other cars. All I have to remember is "An SUV is a car that . . ." and tack on those few characteristics that are unique to an SUV (such as the price tag). I can go further. Cars are a subclass of wheeled vehicles along with other members, such as trucks and pickups. Maybe wheeled vehicles are a subclass of vehicles, which include boats and planes — and so on.

Why Classify?

Why should you classify? It sounds like a lot of trouble. Besides, people have been using the procedural approach for a long time — why change now?

Designing and building a microwave oven specifically for this problem may seem easier than building a separate, more generic oven object. Suppose that you want to build a microwave oven to cook only nachos. You wouldn't need to put a front panel on it, other than a Start button. You probably always cook nachos for the same length of time. You could dispense with all that Defrost and Temp Cook nonsense in the options. The oven needs to hold only one flat, little plate. Three cubic feet of space would be wasted on nachos.

For that matter, you can dispense with the concept of "microwave oven." All you need is the guts of the oven. Then, in the recipe, you put the instructions to make it work: "Put nachos in the box. Connect the red wire to the black wire. Bring the radar tube to about 3,000 volts. Notice a slight hum. Try not to stand too close if you intend to have children." Stuff like that.

But the procedural approach has these problems:

  • It's too complex. You don't want the details of oven-building mixed into the details of nacho-building. If you can't define the objects and pull them from the morass of details to deal with separately, you must deal with all the complexities of the problem at the same time.

  • It isn't flexible. Someday, you may need to replace the microwave oven with another type of oven. You should be able to do so as long as the two ovens have the same interface. Without being clearly delineated and developed separately, one object type can't be cleanly removed and replaced with another.

  • It isn't reusable. Ovens are used to make lots of different dishes. You don't want to create a new oven every time you encounter a new recipe. Having solved a problem once, you want to be able to reuse the solution in other places within your program. If you're lucky, you may be able to reuse it in future programs as well.
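The flexibility and reusability arguments can be sketched in C# with an interface (the names `IOven`, `MicrowaveOven`, and `ConvectionOven` are illustrative assumptions, not a real API): as long as both ovens honor the same interface, the recipe doesn't care which one it gets.

```csharp
// Any oven that implements IOven can be swapped in.
public interface IOven
{
    void Cook(int seconds);
}

public class MicrowaveOven : IOven
{
    public void Cook(int seconds) =>
        System.Console.WriteLine($"Microwaving for {seconds} seconds");
}

public class ConvectionOven : IOven
{
    public void Cook(int seconds) =>
        System.Console.WriteLine($"Baking for {seconds} seconds");
}

public class Recipe
{
    // The recipe is reusable with every oven, present and future.
    public static void MakeNachos(IOven oven) => oven.Cook(300);
}
```

Replacing the microwave with a convection oven is now a one-line change at the call site; `MakeNachos()` itself never has to be touched.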

Object-Oriented Concept #3: Usable Interfaces

An object must be able to project an external interface that is sufficient but as simple as possible. This concept is sort of the reverse of Concept #4 (described in the next section). If the device interface is insufficient, users may start ripping the top off the device, in direct violation of the laws of God and society — or at least the liability laws of the Great State of Texas. And believe me, you do not want to violate the laws of the Great State of Texas. On the flip side, if the device interface is too complex, no one will buy the device — or at least no one will use all its features.

People complain continually that their DVD players are too complex, though it's less of a problem with today's onscreen controls. These devices have too many buttons with too many different functions. Often, the same button has different functions, depending on the state of the machine. In addition, no two DVD players seem to have the same interface. For whatever reason, the DVD player projects an interface that's too difficult and too nonstandard for most people to use beyond the bare basics.

Compare the DVD player with an automobile. It would be difficult to argue that a car is less complicated than a DVD player. However, people don't seem to have much trouble driving cars.

Most automobiles offer more or less the same controls in more or less the same place. There are exceptions, though: My sister once had a car (need I say a French car?) that had the headlight control on the left side of the steering wheel, where the turn signal handle normally lives. You pushed the lever down to turn off the lights and raised it to turn them on. This difference may seem trivial, but I never did learn to turn left in that car at night without turning off the lights.

A well-designed auto doesn't use the same control to perform more than one operation, depending on the state of the car. I can think of only one exception to this rule: Some buttons on most cruise controls are overloaded with multiple functions.

Object-Oriented Concept #4: Access Control

A microwave oven must be built so that no combination of keystrokes that you can enter on the front keypad can cause the oven to hurt you. Certainly, some combinations don't do anything. However, no sequence of keystrokes should

  • Break the device: You may be able to put the device into a strange state in which it doesn't do anything until you reset it (say, by throwing an internal breaker). However, you shouldn't be able to break the device by using the front panel — unless, of course, you throw it to the ground in frustration. The manufacturer of this type of device would probably have to send out some type of fix for it.

  • Cause the device to catch fire and burn down the house: As bad as it may be for the device to break itself, catching fire is much worse. We live in a litigious society. The manufacturer's corporate officers would likely end up in jail, especially if I have anything to say about it.

However, to enforce these two rules, you have to take some responsibility. You can't make modifications to the inside of the device.

Almost all kitchen devices of any complexity, including microwave ovens, have a small seal to keep consumers from reaching inside them. If the seal is broken, indicating that the cover of the device has been removed, the manufacturer no longer bears responsibility. If you modify the internal workings of an oven, you're responsible if it subsequently catches fire and burns down the house.

Similarly, a class must be able to control access to its data members. No sequence of calls to class members should cause your program to crash. The class cannot possibly ensure control of this access if external elements have access to the internal state of the class. The class must be able to keep critical data members inaccessible to the outside world.
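In C#, the manufacturer's seal is the private keyword. A minimal sketch (the class and its members are illustrative):

```csharp
// Access control: no sequence of public calls can put the
// voltage into a dangerous state, because the field is
// inaccessible from outside the class.
public class SafeMicrowave
{
    private int _voltage = 0;   // hidden internal state

    public void Start()
    {
        _voltage = 3000;        // only the class itself sets this
        System.Console.WriteLine("Cooking safely");
    }

    public void Stop()
    {
        _voltage = 0;
        System.Console.WriteLine("Stopped");
    }

    // No public setter for _voltage exists, so callers can't
    // "break the seal" and reach inside the case.
}
```

A line such as `oven._voltage = 9000;` in some other class simply won't compile; the compiler enforces the seal so the class can guarantee its own consistency.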

How C# Supports Object-Oriented Concepts

Okay, how does C# implement object-oriented programming? In a sense, this is the wrong question. C# is an object-oriented language; however, it doesn't implement object-oriented programming — the programmer does. You can certainly write a non-object-oriented program in C# or any other language (by, for instance, writing all of Microsoft Word in Main()). Something like "you can lead a horse to water" comes to mind. But you can easily write an object-oriented program in C#.

These C# features are necessary for writing object-oriented programs:

  • Controlled access: C# controls the way in which class members can be accessed. C# keywords enable you to declare some members wide open to the public, whereas internal members are protected from view and some secrets are kept private. Notice the little hints. Access control secrets are revealed in Chapter 5 of this minibook.

  • Specialization: C# supports specialization through a mechanism known as class inheritance. One class inherits the members of another class. For example, you can create a Car class as a particular type of Vehicle. Chapter 6 in this minibook specializes in specialization.

  • Polymorphism: This feature enables an object to perform an operation the way it wants to. The Rocket type of Vehicle may implement the Start operation much differently from the way the Car type of Vehicle does. At least, I hope it does every time I turn the key in my car. (With my car, you never know.) But all Vehicles have a Start operation, and you can rely on that. Chapter 7 in this minibook finds its own way of describing polymorphism.

  • Indirection: Objects frequently use the services of other objects — by calling their public methods. But classes can "know too much" about the classes they use. The two classes are then said to be "too tightly coupled," which makes the using class too dependent on the used class. The design is too brittle — liable to break if you make changes. But change is inevitable in software, so you should find more indirect ways to connect the two classes. That's where the C# interface construct comes in. (You can get the scoop on interfaces in Chapter 8 of this minibook.)
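The specialization and polymorphism bullets above can be tied together in one short sketch. Assuming illustrative `Vehicle`, `Car`, and `Rocket` classes (these echo the examples in the list, not any real library), every `Vehicle` has a `Start` operation, but each subclass performs it the way it wants to:

```csharp
// Specialization plus polymorphism: one Start operation,
// many implementations.
public class Vehicle
{
    public virtual void Start() =>
        System.Console.WriteLine("Starting a generic vehicle");
}

public class Car : Vehicle
{
    public override void Start() =>
        System.Console.WriteLine("Turning the key");
}

public class Rocket : Vehicle
{
    public override void Start() =>
        System.Console.WriteLine("Igniting the main engines");
}

public class Demo
{
    public static void Main()
    {
        Vehicle[] fleet = { new Car(), new Rocket() };
        foreach (Vehicle v in fleet)
        {
            v.Start();   // each vehicle starts its own way
        }
    }
}
```

The `virtual` keyword marks `Start` as overridable, and `override` supplies each subclass's own behavior; the loop relies only on the promise that all `Vehicle`s can `Start`.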
