Variable number of arguments

Python, like other languages, has built-in functions and constructions that can take a variable number of arguments. Consider, for example, string interpolation (whether through the % operator or the format method of strings), which follows a structure similar to C's printf function: a first positional parameter with the format string, followed by any number of arguments to be placed on the markers of that format string.

Besides taking advantage of the functions that Python already provides, we can also create our own, which will work in a similar fashion. In this section, we will cover the basic principles of functions with a variable number of arguments, along with some recommendations, so that in the next section we can explore how to use these features to our advantage when dealing with the common problems and constraints of functions that take too many arguments.

For a variable number of positional arguments, the star symbol (*) is used, preceding the name of the variable that is packing those arguments. This works through the packing mechanism of Python.
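As a minimal sketch (the function name here is purely illustrative), a parameter declared as *args packs all the extra positional arguments it receives into a tuple:

>>> def variadic(*args):
...     print(args)
...
>>> variadic(1, 2, 3)
(1, 2, 3)
>>> variadic()
()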

Let's say there is a function that takes three positional arguments. In one part of the code, we conveniently happen to have the arguments we want to pass to the function in a list, in the same order as the function expects them. Instead of passing them one by one by position (that is, list[0] to the first parameter, list[1] to the second, and so on), which would be really un-Pythonic, we can use the packing mechanism and pass them all together in a single instruction:

>>> def f(first, second, third):
...     print(first)
...     print(second)
...     print(third)
...
>>> l = [1, 2, 3]
>>> f(*l)
1
2
3

The nice thing about the packing mechanism is that it also works the other way around. If we want to extract the values of a list to variables, by their respective position, we can assign them like this:

>>> a, b, c = [1, 2, 3]
>>> a
1
>>> b
2
>>> c
3

Partial unpacking is also possible. Let's say we are just interested in the first values of a sequence (this can be a list, a tuple, or something else), and after some point we just want the rest to be kept together. We can assign the variables we need and leave the rest in a packed list. The order in which we unpack is not limited. If there is nothing to place in one of the unpacked subsections, the result will be an empty list. The reader is encouraged to try examples such as those presented in the following listing on a Python terminal, and also to explore that unpacking works with generators as well (a short sketch follows the listing):

>>> def show(e, rest):
... print("Element: {0} - Rest: {1}".format(e, rest))
...
>>> first, *rest = [1, 2, 3, 4, 5]
>>> show(first, rest)
Element: 1 - Rest: [2, 3, 4, 5]
>>> *rest, last = range(6)
>>> show(last, rest)
Element: 5 - Rest: [0, 1, 2, 3, 4]
>>> first, *middle, last = range(6)
>>> first
0
>>> middle
[1, 2, 3, 4]
>>> last
5
>>> first, last, *empty = (1, 2)
>>> first
1
>>> last
2
>>> empty
[]
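
As mentioned above, the same unpacking syntax also works when the right-hand side is a generator (or any other iterable) instead of a list; here is a short sketch, with the generator expression chosen just for illustration:

>>> first, *rest = (x ** 2 for x in range(5))
>>> first
0
>>> rest
[1, 4, 9, 16]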

One of the best uses for unpacking variables can be found in iteration. When we have to iterate over a sequence of elements, and each element is, in turn, a sequence, it is a good idea to unpack each element at the same time it is being iterated over. To see an example of this in action, let's pretend that we have a function that receives a list of database rows and is in charge of creating users out of that data. The first implementation takes the values needed to construct the user from the position of each column in the row, which is not idiomatic at all. The second implementation uses unpacking while iterating:

USERS = [(i, f"first_name_{i}", f"last_name_{i}") for i in range(1_000)]


class User:
    def __init__(self, user_id, first_name, last_name):
        self.user_id = user_id
        self.first_name = first_name
        self.last_name = last_name


def bad_users_from_rows(dbrows) -> list:
    """A bad case (non-pythonic) of creating ``User``s from DB rows."""
    return [User(row[0], row[1], row[2]) for row in dbrows]


def users_from_rows(dbrows) -> list:
    """Create ``User``s from DB rows."""
    return [
        User(user_id, first_name, last_name)
        for (user_id, first_name, last_name) in dbrows
    ]

Notice that the second version is much easier to read. In the first version of the function (bad_users_from_rows), we have data expressed in the form row[0], row[1], and row[2], which doesn't tell us anything about what they are. On the other hand, variables such as user_id, first_name, and last_name speak for themselves.

We can leverage this kind of functionality to our advantage when designing our own functions.

An example of this from the standard library is the max function, which is defined as follows:

max(...)
    max(iterable, *[, default=obj, key=func]) -> value
    max(arg1, arg2, *args, *[, key=func]) -> value

    With a single iterable argument, return its biggest item. The
    default keyword-only argument specifies an object to return if
    the provided iterable is empty.
    With two or more arguments, return the largest argument.
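
Following that same idea, here is a minimal sketch of how we could define a function of our own that accepts a variable number of positional arguments (the maximum function below is only illustrative, a simplified version of what the built-in already does):

>>> def maximum(first, *args):
...     result = first
...     for value in args:
...         if value > result:
...             result = value
...     return result
...
>>> maximum(3, 1, 7, 2)
7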

There is a similar notation, with two stars (**), for keyword arguments. If we have a dictionary and pass it to a function with a double star, it will use the keys as the parameter names and pass the value for each key as the value for that parameter in the function.

For instance, check this out:

function(**{"key": "value"})

It is the same as the following:

function(key="value")

Conversely, if we define a function with a parameter whose name is preceded by two stars, the opposite will happen: the keyword-provided arguments will be packed into a dictionary:

>>> def function(**kwargs):
...     print(kwargs)
...
>>> function(key="value")
{'key': 'value'}
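
Both forms can be combined in a single signature, which is handy when we just want to forward whatever arguments we receive to another callable. The following is a minimal sketch (the traced wrapper is purely illustrative, not part of the standard library):

>>> def traced(function, *args, **kwargs):
...     print("calling", function.__name__, args, kwargs)
...     return function(*args, **kwargs)
...
>>> traced(max, 1, 5, 3)
calling max (1, 5, 3) {}
5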
