2
TWO PYTHON PARADIGMS


Now that we’ve explored some topics in the Python programming language, let’s learn about the two main paradigms we can use to write code. In this second chapter on Python, we’ll discuss the functional and object-oriented programming paradigms and the benefits each brings. We’ll wrap up with a brief look at type hints. Let’s get started.

Functional Programming

Functional programming is a programming paradigm, which means that it’s a style of writing code we can decide to adhere to. For us to say “we’re writing functional-style code” we have to follow some simple rules that define what functional programming is about.

The central elements of the functional programming paradigm are pure functions and the immutability of data. We’ll break these concepts down in the next sections.

Not all programming languages have good support for writing functional-style code. For example, languages like C have no good support for it. On the other hand, there are languages, like Haskell, that are purely functional, meaning you can only write functional-style code. By design, Python isn’t a functional language, but it does have support for the functional programming style.

Let’s learn about pure functions.

Pure Functions

Let’s quickly review the syntax for a Python function:

def function_name(parameters):
    <function body>

The definition of a function starts with the def keyword followed by the name of the function and the input parameters inside parentheses. A colon (:) marks the end of the function header. The code in the body of the function is indented one level.

A function, in the functional programming paradigm, is similar to the mathematical concept of a function: a mapping of some input to some output. We say a function is pure if

  • It consistently returns the same outputs for the same set of inputs.
  • It doesn’t have side effects.

A side effect happens when something outside the body of the function is mutated by the function. A side effect also occurs when the function’s inputs are modified by the function, because a pure function never modifies its inputs. For example, the following function is pure:

def make_vector_between(p, q):
    u = q['x'] - p['x']
    v = q['y'] - p['y']

    return {'u': u, 'v': v}

Given the same input points p and q, the output is always the same vector, and nothing outside the function’s body is modified. In contrast, the following code is an impure version of make_vector:

last_point = {'x': 10, 'y': 20}

def make_vector(q):
    # needed: the assignment below would otherwise make last_point local
    global last_point
    u = q['x'] - last_point['x']
    v = q['y'] - last_point['y']
    new_vector = {'u': u, 'v': v}
    last_point = q

    return new_vector

The previous snippet uses the shared state of last_point, which is mutated every time make_vector is called. This mutation is a side effect of the function. The returned vector depends on the last_point shared state, so the function doesn’t return the same vector consistently for the same input point.
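We can observe the impurity directly: calling the function twice with the same argument produces different results, because the shared state changes in between. Here's a self-contained sketch (note the global declaration, which Python requires for the function to rebind the module-level last_point):

```python
last_point = {'x': 10, 'y': 20}

def make_vector(q):
    # Python requires this declaration to rebind the module-level variable
    global last_point
    u = q['x'] - last_point['x']
    v = q['y'] - last_point['y']
    new_vector = {'u': u, 'v': v}
    last_point = q
    return new_vector

# Same input point, different outputs: the function isn't pure
print(make_vector({'x': 12, 'y': 25}))  # {'u': 2, 'v': 5}
print(make_vector({'x': 12, 'y': 25}))  # {'u': 0, 'v': 0}
```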

Immutability

As you saw in the previous example, one key aspect of functional programming is immutability. Something is immutable if it doesn’t change over time. If we decide to write code in the functional programming style, we make a firm decision to avoid data mutations and to model our programs using pure functions.

Let’s take a look at an example. Imagine we had defined a point and a vector in the plane using dictionaries:

point = {'x': 5, 'y': 2}
vector = {'u': 10, 'v': 20}

If we wanted to compute the point resulting from displacing the existing point by the vector, we could do it in a functional way by creating a new point using a function. Here’s an example:

def displaced_point(point, vector):
    x = point['x'] + vector['u']
    y = point['y'] + vector['v']

    return {'x': x, 'y': y}

This function is pure: given the same point and vector inputs, the resulting displaced point is consistently the same, and there is nothing that escapes the function’s body that is mutated in any sense, not even the function parameters.

If we run this function, passing in the point and vector defined earlier, we get the following:

>>> displaced_point(point, vector)
{'x': 15, 'y': 22}

# let's check the state of point (shouldn't have been mutated)
>>> point
{'x': 5, 'y': 2}

Conversely, a nonfunctional way of solving this case could involve mutating the original point using a function like the following:

def displace_point_in_place(point, vector):
    point['x'] += vector['u']
    point['y'] += vector['v']

This function mutates the point it receives as an argument, which violates one of the key rules of the functional style.

Note the use of in_place in the function name. This is a commonly used naming convention that implies that the changes will happen by mutating the original object. We’ll adhere to this naming convention throughout the book.

Now let’s see how we’d go about using this displace_point_in_place function:

>>> displace_point_in_place(point, vector)
# nothing gets returned from the function, so let's check the point

>>> point
{'x': 15, 'y': 22}
# the original point has been mutated!

As you can see, the function isn’t returning anything, which is a sign that the function isn’t pure, because to do some kind of useful operation it must have mutated something somewhere. In this case, that “something” is our point, whose coordinates have been updated.

An important benefit of the functional style is that by respecting the immutability of data structures, we avoid unintended side effects. When you mutate an object, you may not be aware of all the places in your code where that object is referenced. If there are other parts in the code relying on that object’s state, there may be side effects you are not aware of. So, after the object was mutated, your program may behave differently than expected. These kinds of errors are extremely hard to hunt down and can require hours of debugging.

If we minimize the number of mutations in our project, we make it more reliable and less error prone.
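To see how an unintended side effect can sneak in, consider a hypothetical sketch where two variables happen to reference the same dictionary; mutating it through one name silently changes what the other name sees:

```python
origin = {'x': 0, 'y': 0}
start = origin  # both names reference the SAME dictionary

def displace_point_in_place(point, vector):
    point['x'] += vector['u']
    point['y'] += vector['v']

displace_point_in_place(start, {'u': 3, 'v': 4})

# The mutation is visible through every reference to the object
print(origin)  # {'x': 3, 'y': 4} -- our "origin" is no longer at (0, 0)!
```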

Let’s now take a look at a special kind of function that has a central role in functional programming: the lambda function.

Lambdas

Back in the 1930s, a mathematician named Alonzo Church invented lambda calculus, a theory about functions and how they are applied to their arguments. Lambda calculus is the core of functional programming.

In Python, a lambda function, or lambda, is an anonymous, typically short function defined on a single line. We’ll find lambdas to be useful when passing functions as parameters to other functions, for instance.

We define a lambda function in Python using the lambda keyword followed by the arguments (separated by commas), a colon, and the function’s expression body:

    lambda <arg1>, <arg2>, ...: <expression body>

The expression’s result is the returned value.

A lambda function to sum two numbers can be written as follows (note that naming it sum shadows Python’s built-in sum function, so you’d want to avoid this name in real code):

>>> sum = lambda x, y: x + y
>>> sum(1, 2)
3

This is equivalent to the regular Python function:

>>> def sum(x, y):
...     return x + y
...
>>> sum(1, 2)
3
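Lambdas are also handy anywhere Python expects a function as an argument, such as the key parameter of the built-in sorted function. Here’s a small, hypothetical example that sorts words by their length:

```python
words = ['vector', 'point', 'segment', 'arc']
# sort by length, shortest first; the lambda maps each word to its sort key
by_length = sorted(words, key=lambda word: len(word))
print(by_length)  # ['arc', 'point', 'vector', 'segment']
```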

Lambdas are going to appear in the next sections; we’ll see there how they’re used in several contexts. The place we’ll be using lambdas the most is as arguments to the filter, map, and reduce functions, as we’ll discuss in “Filter, Map, and Reduce” on page 29.

Higher-Order Functions

A higher-order function is a function that either receives a function (or functions) as input parameters or returns a function as its result.

Let’s take a look at examples for both cases.

Functions As Function Arguments

Imagine we want to write a function that can run a function a given number of times. We could implement this as follows:

>>> def repeat_fn(fn, times):
...     for _ in range(times):
...         fn()
...

>>> def say_hi():
...     print('Hi there!')
...

>>> repeat_fn(say_hi, 5)
Hi there!
Hi there!
Hi there!
Hi there!
Hi there!

As you can see, the repeat_fn function’s first parameter is another function, which is executed as many times as the second argument times dictates. Then, we define another function to simply print the string "Hi there!" to the screen: say_hi. The result of calling the repeat_fn function and passing it say_hi is those five greetings.

We could rewrite the previous example using an anonymous lambda function:

>>> def repeat_fn(fn, times):
...     for _ in range(times):
...         fn()
...

>>> repeat_fn(lambda: print("Hello!"), 5)
Hello!
Hello!
Hello!
Hello!
Hello!

This spares us from having to define a named function to print the message.

Functions As Function Return Values

Let’s take a look at a function that returns another function. Imagine we want to define validation functions that validate if a given string contains a sequence of characters. We can write a function named make_contains_validator that takes a sequence and returns a function to validate strings that contain that sequence:

>>> def make_contains_validator(sequence):
...     return lambda string: sequence in string

We can use this function to generate validation functions, like the following one,

>>> validate_contains_at = make_contains_validator('@')

which can be used to check whether the passed-in strings contain the @ character:

>>> validate_contains_at('[email protected]')
True
>>> validate_contains_at('not this one')
False

Higher-order functions are a useful resource we’ll use throughout the book.

Functions Inside Other Functions

Another convenient technique we’ll use throughout this book is defining a function inside another function. There are two good reasons we may want to do this: for one, it gives the inner function access to everything inside the outer function, without needing to pass that information as parameters; and also, the inner function may define some logic that we don’t want to expose to the outside world.

A function can be defined inside another function using the regular syntax. Let’s take a look at an example:

def outer_fn(a, b):
    c = a + b

    def inner_fn():
        # we have access to a, b and c here
        print(a, b, c)

    inner_fn()

Here, the inner_fn function is defined inside the outer_fn function, and thus, it can’t be accessed from outside this host function, only from within its body. The inner_fn function has access to everything defined inside outer_fn, including the function parameters.

Defining subfunctions inside of functions is useful when a function’s logic grows complex and it can be broken down into smaller tasks. Of course, we could also split the function into smaller functions all defined at the same level. In this case, to signal that those subfunctions are not meant to be imported and consumed from outside the module, we’ll follow Python’s standard and name those functions starting with two underscores:

def public_fn():
    # this function can be imported
    pass

def __private_fn():
    # this function should only be accessed from inside the module
    pass

Note that Python has no access modifiers (public, private, . . .); thus, all the code written at the top level of a module, that is, a Python file, can be imported and used.

Remember that the two underscores are just a convention that we have to respect. Nothing really prevents us from importing and using that code. If we import a function that starts with two underscores, we have to understand that the function was not written by its authors to be used from the outside, and we may get unexpected results if we call that function. By defining our subfunctions within the functions that call them, we prevent this behavior.

Filter, Map, and Reduce

In functional programming, we never mutate a collection’s items, but instead always create a new collection to reflect the changes of an operation over that collection. There are three operations that form the cornerstone of functional programming and can accomplish every modification to a collection we can ever think of: filter, map, and reduce.

Filter

The filter operation takes a collection and creates a new collection where some items may have been left out. The items are filtered according to a predicate function, which is a function that accepts one argument and returns either True or False depending on whether that argument passes a given test.

Figure 2-1 illustrates the filter operation.


Figure 2-1: Filtering a collection

Figure 2-1 shows a source collection made of four elements: A, B, C, and D. Below the collection is a box representing the predicate function, which determines which elements to keep and which to discard. Each element in the collection is passed to the predicate, and only those that pass the test are included in the resulting collection.

There are two ways we can filter collections in Python: using the filter global function and, if the collection is a list, using list comprehensions. We’ll focus on the filter function here; we’ll cover list comprehensions in the next section. Python’s filter function receives a function (the predicate) and a collection as parameters:

    filter(<predicate_fn>, <collection>)

Let’s write a predicate lambda function to test whether a number is even:

lambda n: n % 2 == 0

Now let’s use our lambda function to filter a list of numbers and obtain a new collection with only even numbers:

>>> numbers = [1, 2, 3, 4, 5, 6, 7, 8]
>>> evens = filter(lambda n: n % 2 == 0, numbers)
>>> list(evens)
[2, 4, 6, 8]

One thing to note is that the filter function doesn’t return a list, but rather an iterator. Iterators allow for iteration over a collection of items, one at a time. If you want to know more about Python iterators and how they work under the hood, please refer to the documentation at https://docs.python.org/3/library/stdtypes.html#typeiter and https://docs.python.org/3/glossary.html#term-iterator.

We can consume all the iterator values and put them into a list using the list function we saw earlier. We can also consume the iterator with a for loop, but keep in mind that an iterator can be consumed only once: the list(evens) call above already exhausted it, so we need to build a fresh iterator before looping:

>>> evens = filter(lambda n: n % 2 == 0, numbers)
>>> for number in evens:
...     print(number)
...
2
4
6
8

Map

The map operation creates a new collection by taking each item in the source collection and running it through a function, storing the results in a new collection. The new collection is the same size as the source collection.

Figure 2-2 illustrates the map operation.


Figure 2-2: Mapping a collection

We run our source collection made of items A, B, C, and D through a mapping function, illustrated within a rectangle in Figure 2-2; the result of the mapping is stored in a new collection.

We can map a collection either using the global map function or, if we have a list, using list comprehensions. We’ll discuss list comprehensions in a moment; for now, let’s study how to map collections using the map function.

The map global function receives two parameters: a mapping function and a source collection:

    map(<mapping_fn>, <collection>)

This is how we would map a list of names to their length:

>>> names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
>>> lengths = map(lambda name: len(name), names)
>>> list(lengths)
[5, 6, 4, 4, 6]

As with the filter function, map returns an iterator that can be consumed into a list using the list function. In the previous example, the resulting list contains the number of letters in each of the names in the names list: five letters in Angel, six letters in Alvaro, and so on. We’ve mapped each name into a number representing its length.
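As an aside, the lambda in this example does nothing but forward its argument to len. Since len is already a function taking a single argument, we could pass it to map directly:

```python
names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
# len is itself a function of one argument, so no wrapping lambda is needed
lengths = list(map(len, names))
print(lengths)  # [5, 6, 4, 4, 6]
```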

Reduce

The reduce operation is the most complex, but at the same time, it’s the most versatile of the three. It creates a new collection that can have fewer items than, more items than, or the same number of items as the original. To construct this new collection, it first applies a reducer function to the first and second elements. It then applies the reducer function to the third element and the result of the first application. It then applies the reducer function to the fourth element and the result of the second application. In this way, the results accumulate. A figure will help here. Take a look at Figure 2-3.


Figure 2-3: Reducing a collection

The reduction function in this example concatenates every element in the collection (A, B, C, and D) into a single element: ABCD.

The reducer function takes two parameters: the accumulated result and an item in the collection:

    reducer_fn(<accumulated_result>, <item>)

The function is expected to return the accumulated result after the new item has been processed.

There’s no global reduce function provided by Python, but the standard library includes a module named functools with some useful operations for working with higher-order functions, including a reduce function. This function doesn’t return an iterator, but rather the resulting value (a collection or a single item) directly. The function’s signature looks like this:

    reduce(<reducer_fn>, <collection>)

Let’s work with an example:

>>> from functools import reduce

>>> letters = ['A', 'B', 'C', 'D']

>>> reduce(lambda result, letter: result + letter, letters)
'ABCD'

In this example, the reduce function returned a single item: 'ABCD', the result of concatenating each letter in the collection. To start the reduction process, the reduce function takes the first two letters, A and B, and concatenates them into AB. For this first step, Python uses the initial item of the collection (A) as the accumulated result and applies the reducer to it and the second item. Then, it moves to the third letter, C, and concatenates it with the current accumulated result AB, thus producing the new result: ABC. The last step does the same with the D letter to produce the result ABCD.

What happens when the accumulated result and the items of the collection have different types? In that case, we can’t take the first item as the accumulated result, and thus the reduce function expects us to provide a third argument to use as the starting accumulated result:

    reduce(<reducer_fn>, <collection>, <start_result>)

For example, imagine that we have the collection of names from earlier and we want to reduce it to obtain the total sum of the lengths of those names. In this case, the accumulated result is numeric, whereas the items in the collection are strings; we can’t use the first item as the accumulated length. If we forget to provide reduce with the start result, Python is nice enough to remind us by raising an error:

>>> reduce(lambda total_length, name: total_length + len(name), names)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "<input>", line 1, in <lambda>
TypeError: can only concatenate str (not "int") to str

For this case, we should pass 0 as the initial accumulated length:

>>> reduce(lambda total_length, name: total_length + len(name), names, 0)
25

One interesting note here is that if the accumulated result and the items of the collection have different types, you can always chain a map with a reduce to obtain the same result. For example, in the previous exercise we could have also done the following:

>>> from functools import reduce

>>> names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
>>> lengths = map(lambda name: len(name), names)
>>> reduce(lambda total_length, length: total_length + length, lengths)
25

In this code we first map the names list into a list of the name lengths: lengths. Then, we reduce the lengths list to sum all the values, with no starting value necessary.

When reducing items using a common operation—like a sum of two numbers or a concatenation of two strings—we don’t need to write a lambda function ourselves; we can simply pass the reduce function an existing Python function. For example, Python provides a useful module named operator that defines functions mirroring the language’s operators, including the arithmetic ones. Using this module, we can simplify our previous example to the following:

>>> from functools import reduce
>>> import operator

>>> names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
>>> lengths = map(lambda name: len(name), names)
>>> reduce(operator.add, lengths)
25

This code is shorter and more readable, so we’ll prefer this form throughout the book.

The operator.add function is defined by Python as follows:

def add(a, b):
    "Same as a + b."
    return a + b

As you can see, this function is equivalent to the lambda function we defined to sum two numbers. We’ll see more examples of functions defined by Python that can be used with reduce throughout the book.

So far, all of our examples have reduced collections to a single value, but the reduce operation can do much more. In fact, both the filter and map operations are specializations of the reduce operation. We can filter and map a collection using only a reduce operation. But this isn’t something we’ll stop to analyze here; try to figure it out on your own if you feel motivated.

Let’s see an example where we want to create a new collection based on the names list, where every item is the concatenation of all the previous names with the current name separated by the hyphen character (-). The result we’re looking for should be something like this:

['Angel', 'Angel-Alvaro', 'Angel-Alvaro-Mery', ...]

We can do this using the following code:

>>> from functools import reduce

>>> names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
>>> def compute_next_name(names, name):
...     if len(names) < 1:
...         return name
...     return names[-1] + '-' + name
...
>>> reduce(
...    lambda result, name: result + [compute_next_name(result, name)],
...    names,
...    [])
['Angel', 'Angel-Alvaro', 'Angel-Alvaro-Mery', 'Angel-Alvaro-Mery-Paul', ...]

Here, we use compute_next_name to determine the next item in the sequence. The lambda used inside reduce concatenates the accumulated result, which is the list of stitched-together names, with a new list consisting of the new item. The initial result, an empty list, needs to be provided, since once again the type of each item in the list (string) differs from the type of the result (list of strings).

As you can see, the reduce operation is very versatile.

List Comprehensions

As mentioned earlier, we can filter and map lists in Python using list comprehensions. This form is typically preferred over the filter and map functions when dealing with lists, as its syntax is more concise and readable.

A list comprehension to map items has the following structure:

    [<expression> for <item> in <list>]

There are two parts to it:

  • for <item> in <list> is the for loop that iterates over the items in <list>.
  • <expression> is a mapping expression to map <item> into something else.

Let’s repeat the exercise we did earlier where we mapped a list of names to a list of the lengths of each name, this time using a list comprehension:

>>> names = ['Angel', 'Alvaro', 'Mery', 'Paul', 'Isabel']
>>> [len(name) for name in names]
[5, 6, 4, 4, 6]

I hope you see why Python programmers favor list comprehensions over the map function; the example almost reads like plain English: “length of name for (each) name in names.” In the example, for name in names iterates over the names in the original list and then uses the length of each name (len(name)) as the result.

To filter a list using a list comprehension we can add an if clause at the end of the comprehension:

    [<expression> for <item> in <list> if <condition>]

If we wanted to, for example, filter a list of names, this time keeping only those that start with A, we could write the following list comprehension:

>>> [name for name in names if name.startswith('A')]
['Angel', 'Alvaro']

Note two things from this example: the mapping expression is the name itself (an identity mapping, which is the same as no mapping), and the filter uses the string startswith method. This method returns True only if the string has the given argument as a prefix.

We can filter and map in the same list comprehension. For example, let’s say we want to take our list of names and filter out those that have more than five letters and then construct a new list whose elements are a tuple of the original name and its length. We could do this easily:

>>> [(name, len(name)) for name in names if len(name) < 6]
[('Angel', 5), ('Mery', 4), ('Paul', 4)]

For comparison’s sake, let’s see what this would look like if we decided to use the filter and map functions:

>>> names_with_length = map(lambda name: (name, len(name)), names)
>>> result = filter(lambda name_length: name_length[1] < 6, names_with_length)
>>> list(result)
[('Angel', 5), ('Mery', 4), ('Paul', 4)]

As you can see, the result is the same, but the list comprehension version is simpler and more readable. What’s easier to read is easier to maintain, so list comprehensions are going to be our preferred way of filtering and mapping lists.

Let’s now turn our attention to the second paradigm we’ll be exploring in this chapter: object-oriented programming.

Object-Oriented Programming

In the previous section, we talked about functional programming and some functional patterns. Now we’ll learn about another paradigm: the object-oriented paradigm. As the function is to functional programming, the object is to object-oriented programming. So, first things first: What’s an object?

There are several ways we could describe what an object is. I’m going to deviate from the standard academic definition of an object in object-oriented programming theory and try a rather unconventional explanation.

From a practical standpoint, we can think of objects as experts on a given subject. We can ask them questions, and they will give us information; or we can request that they do things for us, and they will do them. Our questions or requests may require complex operations, but these experts hide the complexity from us so that we don’t need to worry about the details—we just care about getting the job done.

For example, think of a dentist. When you go to the dentist, you don’t need to know anything about dentistry yourself. You rely on the dentist’s expertise to get your cavities fixed. You can also ask the dentist questions about your teeth, and the dentist will respond using a language that you can understand, hiding the real complexity of the subject. In this example, the dentist would be an object you’d rely on for odontology-related tasks or queries.

To request things from an object, we call one of the object’s methods. Methods are functions that belong to a given object and have access to the object’s internals. The object itself has some memory that contains data that is typically hidden to the outside world, although the object may decide to expose some of this data in the form of properties.

NOTE

A method is a function that belongs to a class: it’s part of the class definition. It needs to be called (executed) on an instance of the class where it’s defined. By contrast, a function doesn’t belong to any class; it works on its own.

In Python’s parlance, any function or variable in an object is called an attribute. Both properties and methods are attributes. We’ll be using these equivalent terms throughout this chapter and the rest of the book.

Let’s now get practical and see how we can define and work with objects in Python.

Classes

A class defines how objects are constructed and what characteristics and knowledge they have. Some people like to compare classes to blueprints; they are general descriptions of what information the object holds and what it can do. Objects and classes are related but distinct; if the class is the blueprint, the object is the finished building.

We define a new class in Python using the reserved class keyword. By convention, class names start with an uppercase letter and use an uppercase letter at the start of every new word (this case is commonly known as Pascal case). Let’s create a class that models a coffee machine:

class CoffeeMachine:
    def __init__(self):
        self.__coffees_brewed = 0

In this listing we define a new class representing a coffee machine. We can use this class to generate new coffee machine objects, in a process referred to as instantiation. When we instantiate a class, we create a new object of that class. A class is instantiated by calling its name as if it were a function that’s returning the instantiated object:

>>> machine = CoffeeMachine()

Now we have the machine object whose functionality is defined by the CoffeeMachine class (which so far defines only an initializer, but we’ll complete it in the following sections). When a class is instantiated, its __init__ function is called. Inside this __init__ function, we can perform one-time initialization tasks. For example, here we add a count of the number of brewed coffees and set it to zero:

def __init__(self):
    self.__coffees_brewed = 0

Notice the two underscores at the beginning of __coffees_brewed. If you remember from our discussion on access levels earlier, in Python, by default, everything is visible to the outside. The double underscore naming pattern signifies that something is private and no one is expected to access it directly. Python even discourages such access for us: it mangles the name of any attribute starting with two underscores, storing it as _CoffeeMachine__coffees_brewed, so a direct lookup fails:

>>> machine.__coffees_brewed
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: 'CoffeeMachine' object has no attribute '__coffees_brewed'

This mangling is a deterrent, though, not real protection. We don’t want the outside world to reach __coffees_brewed, even through its mangled name; they could change the coffees brewed count at will!

# Don't do this!
>>> machine._CoffeeMachine__coffees_brewed = 5469
>>> machine._CoffeeMachine__coffees_brewed
5469
So if we can’t access __coffees_brewed, how do we know how many coffees our machine has brewed? The answer is properties. Properties are a class’s read-only attributes. Before we can discuss properties, however, we have some syntax to cover.

self

If you look at the previous example, you’ll see that we make frequent use of a variable named self. We could use any other name for this variable, but self is used by convention. As you saw earlier, we pass it to the definition of every function inside the class, including the initializer. Thanks to this first parameter, self, we gain access to whatever is defined in the class. In the __init__ function, for example, we append the __coffees_brewed variable to self; from that point on, this variable exists in the object.

The variable self needs to appear as the first parameter in the definition of every function inside the class, but it doesn’t need to be passed as the first argument when we call those functions on instances of the class. For example, to instantiate the CoffeeMachine class, we wrote the following:

>>> machine = CoffeeMachine()

The initializer was called without parameters (no self here). If you think about it, how could we possibly pass the object as self in this case, when it hasn’t been created yet? As it turns out, Python takes care of that for us: we’ll never need to pass self to the initializer or any of the object’s methods or properties.

The self reference is how different attributes of a class have access to the other definitions in the class. For example, in the brew_coffee method we’ll write later, we use self to access the __coffees_brewed count:

def brew_coffee(self):
    # we need 'self' here to access the class' __coffees_brewed count
    self.__coffees_brewed += 1

With an understanding of self, we can move on to properties.

Class Properties

An object’s property is a read-only attribute that returns some data. A property of an object is accessed using dot notation: object.property. Following our coffee machine example, we could add a coffees_brewed property (the number of coffees brewed by the machine), like so:

class CoffeeMachine:
    def __init__(self):
        self.__coffees_brewed = 0

    @property
    def coffees_brewed(self):
        return self.__coffees_brewed

Then we could access it:

>>> machine = CoffeeMachine()
>>> machine.coffees_brewed
0

Properties are defined as functions using the @property decorator:

@property
def coffees_brewed(self):
    return self.__coffees_brewed

Properties shouldn’t accept any parameter (except for the customary self), and they should return something. A property that doesn’t return anything or expects parameters is conceptually wrong: properties should just be read-only data we request the object to give us.

We mentioned that @property is an example of a decorator. Python decorators allow us to modify a function’s behavior. The @property decorator modifies a function of a class so that it can be consumed as if it were an attribute of the class. We won’t use any other decorators in this book, so we won’t cover them here, but I encourage you to read up on them if you’re interested.
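As a quick taste of what decorators do in general, here’s a sketch of our own (the names are made up for illustration) of a hand-rolled decorator that announces each call to the function it wraps:

```python
def announce(func):
    # a decorator is a function that takes a function
    # and returns a modified version of it
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@announce
def add(a, b):
    return a + b

result = add(2, 3)  # prints "calling add", returns 5
```

The @announce line is just shorthand for add = announce(add); @property works the same way, except that the modified function it returns is accessed without parentheses.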

Properties get us information about an object. For instance, if we wanted to know whether a given instance of a CoffeeMachine has brewed at least one coffee, we could include a property like the following:

class CoffeeMachine:
    def __init__(self):
        self.__coffees_brewed = 0

    @property
    def has_brewed(self):
        return self.__coffees_brewed > 0

    --snip--

We can now ask instances of the CoffeeMachine class whether they’ve brewed at all:

>>> machine.has_brewed
False

This machine hasn’t prepared any coffee yet, so how can we ask a CoffeeMachine instance to brew a coffee for us? We use methods.

Class Methods

Properties allow us to know something about an object: they answer our queries. To request an object to perform some task for us, we use methods. A method is nothing more than a function that belongs to a class and has access to the attributes defined in that class. In our CoffeeMachine class example, let’s write a method to request it to brew some coffee:

class CoffeeMachine:
    def __init__(self):
        self.__coffees_brewed = 0

    @property
    def coffees_brewed(self):
        return self.__coffees_brewed

    @property
    def has_brewed(self):
        return self.__coffees_brewed > 0

    def brew_coffee(self):
        self.__coffees_brewed += 1

Methods get self as their first parameter, which gives them access to everything defined inside the class. As we discussed earlier, when calling a method on an object, we never pass self ourselves; Python does it for us.

NOTE

Note that properties are just methods decorated with @property. Both properties and methods expect self as their first argument. When calling a method, we use parentheses and optionally pass it arguments, but properties are accessed without parentheses.

We can call the brew_coffee method on an instance of the class:

>>> machine = CoffeeMachine()
>>> machine.brew_coffee()

Now that we’ve brewed our first coffee, we can ask the instance this:

>>> machine.coffees_brewed
1
>>> machine.has_brewed
True

As you can see, methods have to be called on a particular instance of a class (an object). This object will be the one responding to the request. So, whereas functions are called without a particular receiver, like

a_function()

methods have to be called on an object, like

machine.brew_coffee()

Objects can only respond to the methods defined in the class that created them. If a method (or any attribute, for that matter) is accessed on an object but wasn’t defined in the class, an AttributeError is raised. Let’s try this by ordering our coffee machine to brew tea, even though we never gave it instructions on how to do so:

>>> machine.brew_tea()
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: 'CoffeeMachine' object has no attribute 'brew_tea'

Okay, our object complained: we never told it we expected it to know how to prepare tea. Here’s the key to its complaint:

    'CoffeeMachine' object has no attribute 'brew_tea'

Lesson learned: don’t ever request an object to do something it wasn’t taught; it’ll just freak out and make your program fail.
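If we aren’t sure whether an object supports a given method, Python’s built-in hasattr function lets us check before calling, instead of letting the AttributeError blow up our program. A small sketch with our coffee machine:

```python
class CoffeeMachine:
    def __init__(self):
        self.__coffees_brewed = 0

    def brew_coffee(self):
        self.__coffees_brewed += 1

machine = CoffeeMachine()

# ask whether the object has the attribute, instead of
# calling it and risking an AttributeError
can_brew_tea = hasattr(machine, 'brew_tea')        # False
can_brew_coffee = hasattr(machine, 'brew_coffee')  # True
```

This is handy when the object could come from several different classes, which is exactly the situation we’ll see in a moment with dynamic dispatch.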

Methods can accept any number of parameters, which have to be defined after the mandatory first parameter, self. For example, let’s add a method to our CoffeeMachine class that allows us to fill it with a given amount of water:

class CoffeeMachine:

    def __init__(self):
        self.__coffees_brewed = 0
        self.__liters_of_water = 0

    def fill_water_tank(self, liters):
        self.__liters_of_water += liters

We can fill the coffee machine instance by calling our new method:

>>> machine = CoffeeMachine()
>>> machine.fill_water_tank(5)

One last thing to know about methods before we move on is the power of their dynamic dispatch. When a method is called on an object, Python checks whether the object responds to that method, but, and here’s the key, Python doesn’t care about the object’s class as long as that class has the requested method defined.

We can use this feature to define different objects that respond to the same method (by same method we mean same name and arguments) and use them interchangeably. For instance, we could define a new, more modern coffee-producer entity:

class CoffeeHipster:
    def __init__(self, skill_level):
        self.__skill_level = skill_level

    def brew_coffee(self):
        # depending on the __skill_level, this method
        # may take a long time to complete.
        # But apparently the result will be worth it?
        --snip--

Now we can write a function that expects a coffee producer (any object whose class defines a brew_coffee() method) and does something with it:

def keep_programmer_awake(programmer, coffee_producer):
    while programmer.wants_to_sleep:
        # give the coder some wakey juice
        coffee_producer.brew_coffee()
        --snip--

This function works with both an instance of CoffeeMachine and CoffeeHipster:

>>> machine = CoffeeMachine()
>>> hipster = CoffeeHipster(skill_level=10)
>>> programmer = SleepyProgrammer('Angel')

# works!
>>> keep_programmer_awake(programmer, machine)

# also works!
>>> keep_programmer_awake(programmer, hipster)

For this technique to work, we need to make sure that the methods have the same signature, that is, they have the same name and expect exactly the same parameters.
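Here’s a self-contained sketch of the same idea with minimal stand-in classes (the coffees_served counters are our own addition so the example is runnable end to end):

```python
class CoffeeMachine:
    def __init__(self):
        self.coffees_served = 0

    def brew_coffee(self):
        self.coffees_served += 1

class CoffeeHipster:
    def __init__(self, skill_level):
        self.__skill_level = skill_level
        self.coffees_served = 0

    def brew_coffee(self):
        # a slow, artisanal brew
        self.coffees_served += 1

def serve_three_coffees(coffee_producer):
    # we only care that the object has a brew_coffee method;
    # its class is irrelevant
    for _ in range(3):
        coffee_producer.brew_coffee()

machine = CoffeeMachine()
hipster = CoffeeHipster(skill_level=10)

serve_three_coffees(machine)  # works
serve_three_coffees(hipster)  # also works
```

This style, where the set of methods an object responds to matters more than its class, is often called duck typing.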

Magic Methods

There are some special methods our classes may define that are known as magic methods or dunder methods (short for double underscore). These methods aren’t typically called by us directly, but Python uses them under the hood, as we’ll see in the following examples.

We’ve already used one such method: __init__, which we used as the initializer when instantiating objects. This __init__ method defines the code that’s executed when a new instance of a class is created.

One prominent use case for magic methods (which we’ll use a lot throughout the book) is overloading operators. Let’s see this through an example. Imagine we implement a class to represent complex numbers:

class ComplexNum:
    def __init__(self, re, im):
        self.__re = re
        self.__im = im

    @property
    def real(self):
        return self.__re

    @property
    def imaginary(self):
        return self.__im

How would we go about implementing the addition operation on ComplexNum instances? A first option could be including a method called plus:

class ComplexNum:

    --snip--

    def plus(self, addend):
        return ComplexNum(
            self.__re + addend.__re,
            self.__im + addend.__im
        )

which we could use like so:

>>> c1 = ComplexNum(2, 3)
>>> c2 = ComplexNum(5, 7)

>>> c1.plus(c2)
# the result is: 7 + 10i

This is okay, but it would be nicer if we could instead use the + operator like we do with any other number:

>>> c1 + c2

Python includes a magic method, __add__; if we implement that method, then we can use the + operator as shown earlier, and Python will call this __add__ method under the hood. So if we rename our plus method to __add__, we can automatically add ComplexNums using the + operator:

class ComplexNum:

    --snip--

    def __add__(self, addend):
        return ComplexNum(
            self.__re + addend.__re,
            self.__im + addend.__im
        )

There are more magic methods we can implement in our classes to perform subtraction, division, comparisons, and more. You can take a brief look at Table 4-1 on page 70 for a reference of the operations we can implement with magic methods. For example, subtracting two of our complex numbers using the - operator would be as simple as implementing the __sub__ method:

class ComplexNum:

    --snip--

    def __sub__(self, subtrahend):
        return ComplexNum(
            self.__re - subtrahend.__re,
            self.__im - subtrahend.__im
        )

Now we can use the - operator:

>>> c1 - c2
# yields: -3 - 4i
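Incidentally, to get the REPL to display results like 7 + 10i directly, we can implement one more magic method, __repr__, which Python calls to obtain an object’s printable representation. This addition is ours, not part of the book’s class:

```python
class ComplexNum:
    def __init__(self, re, im):
        self.__re = re
        self.__im = im

    def __add__(self, addend):
        return ComplexNum(
            self.__re + addend.__re,
            self.__im + addend.__im
        )

    def __repr__(self):
        # Python calls this to display the object in the REPL
        # and when printing it
        sign = '+' if self.__im >= 0 else '-'
        return f'{self.__re} {sign} {abs(self.__im)}i'

c1 = ComplexNum(2, 3)
c2 = ComplexNum(5, 7)
print(c1 + c2)  # prints: 7 + 10i
```

Without __repr__, the REPL would show something opaque like <ComplexNum object at 0x...> instead.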

What about comparing two instances for equality using the == operator? Simply implement the __eq__ magic method:

class ComplexNum:

    --snip--

    def __eq__(self, other):
        return (self.__re == other.__re) and (self.__im == other.__im)

Now we can easily compare complex numbers:

>>> c1 == c2
False

We’ll be using some magic methods throughout the book; they really improve the readability of the code.

Let’s now change topics and learn about type hints.

Type Hints

Python type hints are a small help we can use when writing code to make sure we don’t mistype the name of a method or property of a class.

For example, let’s use the implementation of a complex number from the previous section:

class ComplexNum:

    def __init__(self, re, im):
        self.__re = re
        self.__im = im

    @property
    def real(self):
        return self.__re

    @property
    def imaginary(self):
        return self.__im

Now say that we write a function that takes an instance of ComplexNum as an argument, and we want to extract the imaginary part of the number, but we’re a bit sleepy and mistakenly write the following:

def defrangulate(complex):
    --snip--
    im = complex.imaginry

Did you spot the typo? Well, since we know nothing about the complex argument, there’s no visual clue our IDE can give us. As far as the IDE knows, imaginry is a perfectly valid attribute name, and it won’t be until we run the program and pass a complex number that we get an error.

Python is a dynamically typed language: it uses type information at runtime. For example, it checks at runtime whether an object responds to a method, and if it doesn’t, an error is raised:

    AttributeError: 'ComplexNum' object has no attribute 'imaginry'

A bit unfortunate, isn’t it? In this case, we know that this function only expects instances of the ComplexNum class, so it would be nice if our IDE warned us about that property being mistyped. And in fact, we can do this using type hints.

In a function or method definition, a type hint goes after the argument name, separated by a colon:

def defrangulate(complex: ComplexNum):
    --snip--
    im = complex.imaginry
    -------------^-------
    'ComplexNum' object has no attribute 'imaginry'

As you can see, the IDE has signaled to us that ComplexNum has no attribute named imaginry.

In addition to the types we define using classes, we can use Python’s built-in types as type hints. For instance, the complex-number initializer expecting two floating-point numbers could be written like so:

class ComplexNum:
    def __init__(self, re: float, im: float):
        self.__re = re
        self.__im = im

And now our IDE would warn us if we tried to instantiate the class with the wrong parameter types:

i = ComplexNum('one', 'two')
---------------^------------
Expected type 'float', got 'str' instead.

We can use float for floating-point numbers, int for integers, and str for strings.
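Type hints can also annotate a function’s return value, using an arrow (->) after the parameter list. Here’s a sketch with our complex-number class (the conjugate function is our own example, not from the book):

```python
class ComplexNum:
    def __init__(self, re: float, im: float):
        self.__re = re
        self.__im = im

    @property
    def real(self) -> float:
        return self.__re

    @property
    def imaginary(self) -> float:
        return self.__im

def conjugate(number: ComplexNum) -> ComplexNum:
    # the '-> ComplexNum' hint tells the IDE what this function returns
    return ComplexNum(number.real, -number.imaginary)

c = conjugate(ComplexNum(2.0, 3.0))  # c is hinted as a ComplexNum
```

With the return type in place, the IDE can keep checking attribute names on whatever we do with the result of conjugate.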

These type hints help us during development but have no effect at runtime. We’ll be using type hints in many places throughout the book: it takes no time to add them, and we get a bit of extra safety.

Summary

We discussed two programming paradigms in this chapter: functional programming and object-oriented programming. Of course, both of these are huge topics, and whole books could be, and have been, written about them. We only scratched the surface.

We also talked about magic methods and type hints, two techniques we’ll use extensively throughout the book.

In the next chapter, we’ll discuss the command line. After that, we’ll start writing code.
