5
DISTRIBUTING YOUR SOFTWARE


It’s safe to say that at some point, you will want to distribute your software. As tempted as you might be to just zip your code and upload it to the internet, Python provides tools to make it easier for your end users to get your software to work. You should already be familiar with using setup.py to install Python applications and libraries, but you have probably never delved into how it works behind the scenes or how to make a setup.py of your own.

In this chapter, you’ll learn the history of setup.py, how the file works, and how to create your own custom setup.py. We’ll also take a look at some of the less well-known capabilities of the package installation tool pip and how to make your software downloadable via pip. Finally, we’ll see how to use Python’s entry points to make functions easy to find between programs. With these skills, you can make your published software accessible for end users.

A Bit of setup.py History

The distutils library, originally created by software developer Greg Ward, has been part of the standard Python library since 1998. Ward sought to create an easy way for developers to automate the installation process for their end users. Packages provide the setup.py file as the standard Python script for their installation, and they can use distutils to install themselves, as shown in Listing 5-1.

#!/usr/bin/python
from distutils.core import setup

setup(name="rebuildd",
      description="Debian packages rebuild tool",
      author="Julien Danjou",
      author_email="[email protected]",
      url="http://julien.danjou.info/software/rebuildd.html",
      packages=['rebuildd'])

Listing 5-1: Building a setup.py using distutils

With the setup.py file as the root of a project, all users have to do to build or install your software is run that file with the appropriate command as its argument. Even if your distribution includes C modules in addition to native Python ones, distutils can handle them automatically.

Development of distutils was abandoned in 2000; since then, other developers have picked up where it left off. One of the notable successors is the packaging library known as setuptools, which offers more frequent updates and advanced features, such as automatic dependency handling, the Egg distribution format, and the easy_install command. Because distutils, as part of the Python Standard Library, was still the accepted means of packaging software at the time, setuptools provided a degree of backward compatibility with it. Listing 5-2 shows how you’d use setuptools to build the same installation package as in Listing 5-1.

#!/usr/bin/env python
import setuptools

setuptools.setup(
    name="rebuildd",
    version="0.2",
    author="Julien Danjou",
    author_email="[email protected]",
    description="Debian packages rebuild tool",
    license="GPL",
    url="http://julien.danjou.info/software/rebuildd/",
    packages=['rebuildd'],
    classifiers=[
        "Development Status :: 2 - Pre-Alpha",
        "Intended Audience :: Developers",
        "Intended Audience :: Information Technology",
        "License :: OSI Approved :: GNU General Public License (GPL)",
        "Operating System :: OS Independent",
        "Programming Language :: Python"
    ],
)

Listing 5-2: Building a setup.py using setuptools

Eventually, development on setuptools slowed down too, but it wasn’t long before another group of developers forked it to create a new library called distribute, which offered several advantages over setuptools, including fewer bugs and Python 3 support.

All the best stories have a twist ending, though: in March 2013, the teams behind setuptools and distribute decided to merge their codebases under the aegis of the original setuptools project. So distribute is now deprecated, and setuptools is once more the canonical way to handle advanced Python installations.

While all this was happening, another project, known as distutils2, was developed with the intention of completely replacing distutils in the Python Standard Library. Unlike both distutils and setuptools, it stored package metadata in a plaintext file, setup.cfg, which was easier both for developers to write and for external tools to read. However, distutils2 retained some of the failings of distutils, such as its obtuse command-based design, and lacked support for entry points and native script execution on Windows—both features provided by setuptools. For these and other reasons, plans to include distutils2, renamed as packaging, in the Python 3.3 Standard Library fell through, and the project was abandoned in 2012.

There is still a chance for packaging to rise from the ashes through distlib, an up-and-coming effort to replace distutils. Before release, it was rumored that the distlib package would become part of the Standard Library in Python 3.4, but that never came to be. Including the best features from packaging, distlib implements the basic groundwork described in the packaging-related PEPs.

So, to recap:

  • distutils is part of the Python Standard Library and can handle simple package installations.

  • setuptools, the standard for advanced package installations, was at first deprecated but is now back in active development and the de facto standard.

  • distribute has been merged back into setuptools as of version 0.7; distutils2 (aka packaging) has been abandoned.

  • distlib might replace distutils in the future.

There are other packaging libraries out there, but these are the five you’ll encounter the most. Be careful when researching these libraries on the internet: plenty of documentation is outdated due to the complicated history outlined above. The official documentation is up-to-date, however.

In short, setuptools is the distribution library to use for the time being, but keep an eye out for distlib in the future.

Packaging with setup.cfg

You’ve probably already tried to write a setup.py for a package at some point, either by copying one from another project or by skimming through the documentation and building it yourself. Building a setup.py is not an intuitive task. Choosing the right tool to use is just the first challenge. In this section, I want to introduce you to one of the recent improvements to setuptools: the setup.cfg file support.

This is what a setup.py using a setup.cfg file looks like:

import setuptools

setuptools.setup()

Two lines of code—it is that simple. The actual metadata the setup requires is stored in setup.cfg, as in Listing 5-3.

[metadata]
name = foobar
author = Dave Null
author-email = [email protected]
license = MIT
long_description = file: README.rst
url = http://pypi.python.org/pypi/foobar
requires-python = >=2.6
classifiers =
    Development Status :: 4 - Beta
    Environment :: Console
    Intended Audience :: Developers
    Intended Audience :: Information Technology
    License :: OSI Approved :: Apache Software License
    Operating System :: OS Independent
    Programming Language :: Python

Listing 5-3: The setup.cfg metadata

As you can see, setup.cfg uses a format that’s easy to write and read, directly inspired by distutils2. Many other tools, such as Sphinx or Wheel, also read configuration from this setup.cfg file—that alone is a good argument to start using it.

In Listing 5-3, the description of the project is read from the README.rst file. It’s good practice to always have a README file—preferably in the RST format—so users can quickly understand what the project is about. With just these basic setup.py and setup.cfg files, your package is ready to be published and used by other developers and applications. The setuptools documentation provides more details if needed, for example, if you have some extra steps in your installation process or want to include extra files.
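That readability claim is easy to verify: setup.cfg is plain INI syntax, so even Python's standard configparser module can consume it. Here's a minimal sketch using a made-up metadata snippet (not a real project's file):

```python
import configparser

# setup.cfg is plain INI syntax, so configparser can read it directly.
# The contents below are a hypothetical example for illustration.
config = configparser.ConfigParser()
config.read_string("""
[metadata]
name = foobar
author = Dave Null
license = MIT
""")

print(config["metadata"]["name"])     # foobar
print(config["metadata"]["license"])  # MIT
```

This is the same property that lets tools like Sphinx and Wheel pick up their own sections from the file without any setuptools-specific machinery.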

Another useful packaging tool is pbr, short for Python Build Reasonableness. The project was started in OpenStack as an extension of setuptools to facilitate installation and deployment of packages. The pbr packaging tool, used alongside setuptools, implements features absent from setuptools, including these:

  • Automatic generation of Sphinx documentation

  • Automatic generation of AUTHORS and ChangeLog files based on git history

  • Automatic creation of file lists for git

  • Version management based on git tags using semantic versioning

And all this with little to no effort on your part. To use pbr, you just need to enable it, as shown in Listing 5-4.

import setuptools

setuptools.setup(setup_requires=['pbr'], pbr=True)

Listing 5-4: setup.py using pbr

The setup_requires parameter indicates to setuptools that pbr must be installed prior to using setuptools. The pbr=True argument makes sure that the pbr extension for setuptools is loaded and called.

Once enabled, the python setup.py command is enhanced with pbr's features. For example, calling python setup.py --version will return the version number of the project based on existing git tags, and running python setup.py sdist will create a source tarball with automatically generated ChangeLog and AUTHORS files.

The Wheel Format Distribution Standard

For most of Python’s existence, there’s been no official standard distribution format. While different distribution tools generally use some common archive format—even the Egg format introduced by setuptools is just a zip file with a different extension—their metadata and package structures are incompatible with each other. This problem was compounded when an official installation standard was finally defined in PEP 376 that was also incompatible with existing formats.

To solve these problems, PEP 427 was written to define a new standard for Python distribution packages called Wheel. The reference implementation of this format is available as a tool, also called Wheel.

Wheel is supported by pip starting with version 1.4. If you’re using setuptools and have the Wheel package installed, it automatically integrates itself as a setuptools command named bdist_wheel. If you don’t have Wheel installed, you can install it using the command pip install wheel. Listing 5-5 shows some of the output when calling bdist_wheel, abridged for print.

   $ python setup.py bdist_wheel
   running bdist_wheel
   running build
   running build_py
   creating build/lib
   creating build/lib/daiquiri
   creating build/lib/daiquiri/tests
   copying daiquiri/tests/__init__.py -> build/lib/daiquiri/tests
   --snip--
   running egg_info
   writing requirements to daiquiri.egg-info/requires.txt
   writing daiquiri.egg-info/PKG-INFO
   writing top-level names to daiquiri.egg-info/top_level.txt
   writing dependency_links to daiquiri.egg-info/dependency_links.txt
   writing pbr to daiquiri.egg-info/pbr.json
   writing manifest file 'daiquiri.egg-info/SOURCES.txt'
   installing to build/bdist.macosx-10.12-x86_64/wheel
   running install
   running install_lib
   --snip--

   running install_scripts
   creating build/bdist.macosx-10.12-x86_64/wheel/daiquiri-1.3.0.dist-info/WHEEL
   creating '/Users/jd/Source/daiquiri/dist/daiquiri-1.3.0-py2.py3-none-any.whl'
   and adding '.' to it
   adding 'daiquiri/__init__.py'
   adding 'daiquiri/formatter.py'
   adding 'daiquiri/handlers.py'

   --snip--

Listing 5-5: Calling setup.py bdist_wheel

The bdist_wheel command creates a .whl file in the dist directory. As with the Egg format, a Wheel archive is just a zip file with a different extension. However, Wheel archives do not require installation—you can load and run your code just by adding a slash followed by the name of your module:

$ python wheel-0.21.0-py2.py3-none-any.whl/wheel -h
usage: wheel [-h]
             {keygen,sign,unsign,verify,unpack,install,install-scripts,convert,help}
             --snip--

positional arguments:
--snip--

You might be surprised to learn this is not a feature introduced by the Wheel format itself. Python can also run regular zip files, just like with Java’s .jar files:

python foobar.zip

This is equivalent to:

PYTHONPATH=foobar.zip python -m __main__

In other words, the __main__ module for your program will be automatically imported from __main__.py. You can also import __main__ from a module you specify by appending a slash followed by the module name, just as with Wheel:

python foobar.zip/mymod

This is equivalent to:

PYTHONPATH=foobar.zip python -m mymod.__main__
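You can reproduce this behavior yourself with nothing but the standard library. The following sketch (file and module names are invented for illustration) builds a zip archive containing a mymod/__main__.py and then executes it with the slash syntax described above:

```python
import os
import subprocess
import sys
import tempfile
import zipfile

# Build a zip archive containing a mymod module with a __main__ submodule.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "foobar.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("mymod/__init__.py", "")
    zf.writestr("mymod/__main__.py", "print('running from the zip')")

# Equivalent to: PYTHONPATH=foobar.zip python -m mymod.__main__
result = subprocess.run(
    [sys.executable, archive + "/mymod"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```

Python's zipimport machinery accepts a path inside a zip file as a sys.path entry, which is what makes the archive/mymod form work without unpacking anything.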

One of the advantages of Wheel is that its naming conventions allow you to specify whether your distribution is intended for a specific architecture and/or Python implementation (CPython, PyPy, Jython, and so on). This is particularly useful if you need to distribute modules written in C.

By default, Wheel packages are tied to the major version of Python that you used to build them. When called with python2 setup.py bdist_wheel, the pattern of a Wheel filename will be something like library-version-py2-none-any.whl.

If your code is compatible with all major Python versions (that is, Python 2 and Python 3), you can build a universal Wheel:

python setup.py bdist_wheel --universal

The resulting filename will be different and contains both Python major versions—something like library-version-py2.py3-none-any.whl. Building a universal Wheel avoids ending up with two different Wheels when only one would cover both Python major versions.

If you don’t want to pass the --universal flag each time you are building a Wheel, you can just add this to your setup.cfg file:

[wheel]
universal=1

If the Wheel you build contains binary programs or libraries (such as a Python extension written in C), the binary Wheel might not be as portable as you imagine. It will work by default on some platforms, such as Darwin (macOS) or Microsoft Windows, but it might not work on all Linux distributions. PEP 513 (https://www.python.org/dev/peps/pep-0513) targets this Linux problem by defining a new platform tag named manylinux1 and a minimal set of libraries that are guaranteed to be available on that platform.

Wheel is a great format for distributing ready-to-install libraries and applications, so you are encouraged to build and upload them to PyPI as well.

Sharing Your Work with the World

Once you have a proper setup.py file, it is easy to build a source tarball that can be distributed. The sdist setuptools command does just that, as demonstrated in Listing 5-6.

$ python setup.py sdist
running sdist

[pbr] Generating AUTHORS
running egg_info
writing requirements to ceilometer.egg-info/requires.txt
writing ceilometer.egg-info/PKG-INFO
writing top-level names to ceilometer.egg-info/top_level.txt
writing dependency_links to ceilometer.egg-info/dependency_links.txt
writing entry points to ceilometer.egg-info/entry_points.txt
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no previously-included files matching '*.pyc' found anywhere in
distribution
writing manifest file 'ceilometer.egg-info/SOURCES.txt'
running check
copying setup.cfg -> ceilometer-2014.1.a6-g772e1a7
Writing ceilometer-2014.1.a6-g772e1a7/setup.cfg

--snip--

Creating tar archive
removing 'ceilometer-2014.1.a6.g772e1a7' (and everything under it)

Listing 5-6: Using setup.py sdist to build a source tarball

The sdist command creates a tarball under the dist directory of the source tree. The tarball contains all the Python modules that are part of the source tree. As seen in the previous section, you can also build Wheel archives using the bdist_wheel command. Wheel archives are a bit faster to install as they’re already in the correct format for installation.

The final step to make that code accessible is to export your package somewhere users can install it via pip. That means publishing your project to PyPI.

If it’s your first time exporting to PyPI, it pays to test out the publishing process in a safe sandbox rather than on the production server. You can use the PyPI staging server for this purpose; it replicates all the functionality of the main index but is solely for testing purposes.

The first step is to register your project on the test server. Start by opening your ~/.pypirc file and adding these lines:

[distutils]
index-servers =
    testpypi
[testpypi]
username = <your username>
password = <your password>
repository = https://testpypi.python.org/pypi

Save the file, and now you can register your project in the index:

$ python setup.py register -r testpypi
running register
running egg_info
writing requirements to ceilometer.egg-info/requires.txt
writing ceilometer.egg-info/PKG-INFO
writing top-level names to ceilometer.egg-info/top_level.txt
writing dependency_links to ceilometer.egg-info/dependency_links.txt
writing entry points to ceilometer.egg-info/entry_points.txt
[pbr] Reusing existing SOURCES.txt
running check
Registering ceilometer to https://testpypi.python.org/pypi
Server response (200): OK

This connects to the test PyPI server instance and creates a new entry. Don’t forget to use the -r option; otherwise, the real production PyPI instance would be used!

Obviously, if a project with the same name is already registered there, the process will fail. Retry with a new name, and once you get your program registered and receive the OK response, you can upload a source distribution tarball, as shown in Listing 5-7.

$ python setup.py sdist upload -r testpypi
running sdist
[pbr] Writing ChangeLog
[pbr] Generating AUTHORS
running egg_info
writing requirements to ceilometer.egg-info/requires.txt
writing ceilometer.egg-info/PKG-INFO
writing top-level names to ceilometer.egg-info/top_level.txt
writing dependency_links to ceilometer.egg-info/dependency_links.txt
writing entry points to ceilometer.egg-info/entry_points.txt
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no previously-included files matching '*.pyc' found anywhere in
distribution
writing manifest file 'ceilometer.egg-info/SOURCES.txt'
running check
creating ceilometer-2014.1.a6.g772e1a7

--snip--

copying setup.cfg -> ceilometer-2014.1.a6.g772e1a7
Writing ceilometer-2014.1.a6.g772e1a7/setup.cfg
Creating tar archive
removing 'ceilometer-2014.1.a6.g772e1a7' (and everything under it)
running upload
Submitting dist/ceilometer-2014.1.a6.g772e1a7.tar.gz to https://testpypi.python.org/pypi
Server response (200): OK

Listing 5-7: Uploading your tarball to PyPI

Alternatively, you could upload a Wheel archive, as in Listing 5-8.

$ python setup.py bdist_wheel upload -r testpypi
running bdist_wheel
running build
running build_py
running egg_info
writing requirements to ceilometer.egg-info/requires.txt
writing ceilometer.egg-info/PKG-INFO
writing top-level names to ceilometer.egg-info/top_level.txt
writing dependency_links to ceilometer.egg-info/dependency_links.txt
writing entry points to ceilometer.egg-info/entry_points.txt
[pbr] Reusing existing SOURCES.txt
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel

--snip--

creating build/bdist.linux-x86_64/wheel/ceilometer-2014.1.a6.g772e1a7
.dist-info/WHEEL
running upload
Submitting /home/jd/Source/ceilometer/dist/ceilometer-2014.1.a6.g772e1a7-py27-none-any.whl to https://testpypi.python.org/pypi
Server response (200): OK

Listing 5-8: Uploading a Wheel archive to PyPI

Once those operations are finished, you and other users can search for the uploaded packages on the PyPI staging server, and even install those packages using pip, by specifying the test server using the -i option:

$ pip install -i https://testpypi.python.org/pypi ceilometer

If everything checks out, you can upload your project to the main PyPI server. Just make sure to add your credentials and the details for the server to your ~/.pypirc file first, like so:

[distutils]
index-servers =
    pypi
    testpypi

[pypi]
username = <your username>
password = <your password>
[testpypi]
repository = https://testpypi.python.org/pypi
username = <your username>
password = <your password>

Now if you run register and upload with the -r pypi switch, your package should be uploaded to PyPI.

NOTE

PyPI can keep several versions of your software in its index, allowing you to install specific and older versions, if you ever need to. Just pass the version number to the pip install command; for example, pip install foobar==1.0.2.

This process is straightforward to use and allows for any number of uploads. You can release your software as often as you want, and your users can install and update as often as they need.

Entry Points

You may have already used setuptools entry points without knowing anything about them. Software distributed using setuptools includes important metadata describing features such as its required dependencies and—more relevantly to this topic—a list of entry points. Entry points are methods by which other Python programs can discover the dynamic features a package provides.

The following example shows how to provide an entry point named rebuildd in the console_scripts entry point group:

#!/usr/bin/python
from distutils.core import setup

setup(name="rebuildd",
    description="Debian packages rebuild tool",
    author="Julien Danjou",
    author_email="[email protected]",
    url="http://julien.danjou.info/software/rebuildd.html",
    entry_points={
        'console_scripts': [
            'rebuildd = rebuildd:main',
        ],
    },
    packages=['rebuildd'])

Any Python package can register entry points. Entry points are organized in groups: each group is made of a list of key and value pairs. Those pairs use the format path.to.module:variable_name. In the previous example, the key is rebuildd, and the value is rebuildd:main.
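To get a feel for this format, you can parse such a pair yourself with pkg_resources, which setuptools uses under the hood. In this sketch, the target collections:OrderedDict is just an arbitrary stand-in for path.to.module:variable_name:

```python
import pkg_resources

# Parse an entry point definition of the form
# "key = path.to.module:variable_name".
ep = pkg_resources.EntryPoint.parse("hello = collections:OrderedDict")

print(ep.name)         # hello
print(ep.module_name)  # collections
print(ep.attrs)        # ('OrderedDict',)

# resolve() imports the module and returns the named attribute.
cls = ep.resolve()
print(cls.__name__)    # OrderedDict
```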

The list of entry points can be manipulated using various tools, from setuptools to epi, as I’ll show here. In the following sections, we discuss how we can use entry points to add extensibility to our software.

Visualizing Entry Points

The easiest way to visualize the entry points available in a package is to use a package called entry point inspector. You can install it by running pip install entry-point-inspector. When installed, it provides the command epi that you can run from your terminal to interactively discover the entry points provided by installed packages. Listing 5-9 shows an example of running epi group list on my system.

$ epi group list
----------------------------
| Name                     |
----------------------------
| console_scripts          |
| distutils.commands       |
| distutils.setup_keywords |
| egg_info.writers         |
| epi.commands             |
| flake8.extension         |
| setuptools.file_finders  |
| setuptools.installation  |
----------------------------

Listing 5-9: Getting a list of entry point groups

The output from epi group list in Listing 5-9 shows the different packages on a system that provide entry points. Each item in this table is the name of an entry point group. Note that this list includes console_scripts, which we’ll discuss shortly. We can use the epi group show command to display details of a particular entry point group, as in Listing 5-10.

$ epi group show console_scripts
-------------------------------------------------------
| Name     | Module   | Member | Distribution | Error |
-------------------------------------------------------
| coverage | coverage | main   | coverage 3.4 |       |

Listing 5-10: Showing details of an entry point group

We can see that in the group console_scripts, an entry point named coverage refers to the member main of the module coverage. This entry point in particular, provided by the package coverage 3.4, indicates which Python function to call when the command line script coverage is executed. Here, the function coverage.main is to be called.

The epi tool is just a thin layer on top of the complete Python library pkg_resources. This module allows us to discover entry points for any Python library or program. Entry points are valuable for various things, including console scripts and dynamic code discovery, as you’ll see in the next few sections.
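As a sketch of what epi does internally, the following loop uses pkg_resources to list every entry point registered in the console_scripts group on your system (the exact output depends on which packages you have installed):

```python
import pkg_resources

# Enumerate the console_scripts group, much like
# `epi group show console_scripts` does.
for entry_point in pkg_resources.iter_entry_points("console_scripts"):
    print(entry_point.name, "->", entry_point.module_name)
```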

Using Console Scripts

When writing a Python application, you almost always have to provide a launchable program—a Python script that the end user can run—that needs to be installed inside a directory somewhere in the system path.

Most projects have a launchable program similar to this:

#!/usr/bin/python
import sys
import mysoftware

mysoftware.SomeClass(sys.argv).run()

This kind of script is a best-case scenario: many projects have a much longer script installed in the system path. However, such scripts pose some major issues:

  • There’s no way the user can know where the Python interpreter is or which version it uses.

  • The script’s code can’t be imported by other software or by unit tests.

  • There’s no easy way to define where to install this script.

  • It’s not obvious how to install this in a portable way (for example, on both Unix and Windows).

Helping us circumvent these problems, setuptools offers the console_scripts feature. This entry point can be used to make setuptools install a tiny program in the system path that calls a specific function in one of your modules. With setuptools, you can specify a function call to start your program by setting up a key/value pair in the console_scripts entry point group: the key is the script name that will be installed, and the value is the Python path to your function (something like my_module.main).

Let’s imagine a foobar program that consists of a client and a server. Each part is written in its own module—foobar.client and foobar.server, respectively. The client lives in foobar/client.py:

def main():
    print("Client started")

And in foobar/server.py:

def main():
    print("Server started")

Of course, this program doesn’t do much of anything—our client and server don’t even talk to each other. For our example, though, they just need to print a message letting us know they have started successfully.

We can now write the following setup.py file in the root directory with entry points defined in setup.py.

from setuptools import setup

setup(
    name="foobar",
    version="1",
    description="Foo!",
    author="Julien Danjou",
    author_email="[email protected]",
    packages=["foobar"],
    entry_points={
        "console_scripts": [
            "foobard = foobar.server:main",
            "foobar = foobar.client:main",
        ],
    },
)

We define entry points using the format module.submodule:function. You can see here that we’ve defined one entry point each for the client and the server.

When python setup.py install is run, setuptools will create a script that will look like the one in Listing 5-11.

#!/usr/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'foobar==1','console_scripts','foobar'
__requires__ = 'foobar==1'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
    sys.exit(
        load_entry_point('foobar==1', 'console_scripts', 'foobar')()
    )

Listing 5-11: A console script generated by setuptools

This code scans the entry points of the foobar package and retrieves the foobar key from the console_scripts group, which is used to locate and run the corresponding function. The return value of load_entry_point is a reference to the function foobar.client.main, which is called without any arguments and whose return value is used as an exit code.

Notice that this code uses pkg_resources to discover and load entry point files from within your Python programs.

NOTE

If you’re using pbr on top of setuptools, the generated script is simpler (and therefore faster) than the default one built by setuptools, as it will call the function you wrote in the entry point without having to search the entry point list dynamically at runtime.

Using console scripts is a technique that removes the burden of writing portable scripts, while ensuring that your code stays in your Python package and can be imported (and tested) by other programs.

Using Plugins and Drivers

Entry points make it easy to discover and dynamically load code deployed by other packages, but this is not their only use. Any application can propose and register entry points and groups and then use them as it wishes.

In this section, we’re going to create a cron-style daemon, pycrond, that will allow any Python program to register a command to be run once every few seconds by registering an entry point in the group pytimed. The attribute indicated by such an entry point should be a callable object that, when called, returns a pair of values: the number of seconds to wait between runs and the callable to run.

Here’s our implementation of pycrond using pkg_resources to discover entry points, in a program I’ve named pytimed.py:

import pkg_resources
import time

def main():
    seconds_passed = 0
    while True:
        for entry_point in pkg_resources.iter_entry_points('pytimed'):
            try:
                seconds, callable = entry_point.load()()
            except Exception:
                # Ignore failure
                pass
            else:
                if seconds_passed % seconds == 0:
                    callable()
        time.sleep(1)
        seconds_passed += 1

This program consists of an infinite loop that iterates over each entry point of the pytimed group. Each entry point is loaded with the load() method, which returns the registered function. The program then calls that function, which must return the number of seconds to wait between invocations along with the callable to invoke.

The program in pytimed.py is a very simplistic and naive implementation, but it is sufficient for our example. Now we can write another Python program, named hello.py, that needs one of its functions called on a periodic basis:

def print_hello():
    print("Hello, world!")

def say_hello():
    return 2, print_hello

Once we have that function defined, we register it using the appropriate entry points in setup.py.

from setuptools import setup

setup(
    name="hello",
    version="1",
    packages=["hello"],
    entry_points={
        "pytimed": [
            "hello = hello:say_hello",
        ],
    },
)

The setup.py script registers an entry point in the group pytimed with the key hello and the value pointing to the function hello.say_hello. Once that package is installed using that setup.py—for example, using pip install—the pytimed script can detect the newly added entry point.

At startup, pytimed will scan the group pytimed and find the key hello. It will then call the hello.say_hello function, getting two values: the number of seconds to wait between each call and the function to call, 2 seconds and print_hello in this case. By running the program, as we do in Listing 5-12, you can see “Hello, world!” printed on the screen every 2 seconds.

>>> import pytimed
>>> pytimed.main()
Hello, world!
Hello, world!
Hello, world!

Listing 5-12: Running pytimed

The possibilities this mechanism offers are immense: you can build driver systems, hook systems, and extensions easily and generically. Implementing this mechanism by hand in every program you make would be tedious, but fortunately, there’s a Python library that can take care of the boring parts for us.

The stevedore library provides support for dynamic plugins based on the same mechanism demonstrated in our previous examples. The use case in this example is already simplistic, but we can still simplify it further in this script, pytimed_stevedore.py:

from stevedore.extension import ExtensionManager
import time

def main():
    seconds_passed = 0
    extensions = ExtensionManager('pytimed', invoke_on_load=True)
    while True:
        for extension in extensions:
            try:
                seconds, callable = extension.obj
            except Exception:
                # Ignore failure
                pass
            else:
                if seconds_passed % seconds == 0:
                    callable()
        time.sleep(1)
        seconds_passed += 1

The ExtensionManager class of stevedore provides a simple way to load all extensions of an entry point group. The name is passed as a first argument. The argument invoke_on_load=True makes sure that each function of the group is called once discovered. This makes the results accessible directly from the obj attribute of the extension.

If you look through the stevedore documentation, you will see that ExtensionManager has a variety of subclasses that handle different situations, such as loading specific extensions based on their names or on the result of a function call. These are all commonly used patterns that you can apply directly to your own programs.

For example, we might want to load and run only one extension from our entry point group. Leveraging the stevedore.driver.DriverManager class allows us to do that, as Listing 5-13 shows.

from stevedore.driver import DriverManager
import time

def main(name):
    seconds_passed = 0
    seconds, callable = DriverManager('pytimed', name,
                                      invoke_on_load=True).driver
    while True:
        if seconds_passed % seconds == 0:
            callable()
        time.sleep(1)
        seconds_passed += 1

main("hello")

Listing 5-13: Using stevedore to run a single extension from an entry point

In this case, only one extension is loaded, selected by name. This lets us quickly build a driver system in which a program loads and uses a single extension.

Summary

The packaging ecosystem in Python has had a bumpy history; however, the situation is now settling down. The setuptools library provides a complete solution for packaging: not only for shipping your code in different formats and uploading it to PyPI, but also for connecting it with other software and libraries via entry points.

Nick Coghlan on Packaging

Nick is a Python core developer working at Red Hat. He has written several PEP proposals, including PEP 426 (Metadata for Python Software Packages 2.0), and he serves as a delegate for Python's Benevolent Dictator for Life, Guido van Rossum, the author of Python.

The number of packaging solutions (distutils, setuptools, distutils2, distlib, bento, pbr, and so on) for Python is quite extensive. In your opinion, what are the reasons for such fragmentation and divergence?

The short answer is that software publication, distribution, and integration is a complex problem with plenty of room for multiple solutions tailored for different use cases. In my recent talks on this, I have noted that the problem is mainly one of age, with the different packaging tools being born into different eras of software distribution.

PEP 426, which defines a new metadata format for Python packages, is still fairly recent and not yet approved. How do you think it will tackle current packaging problems?

PEP 426 originally started as part of the Wheel format definition, but Daniel Holth realized that Wheel could work with the existing metadata format defined by setuptools. PEP 426 is thus a consolidation of the existing setuptools metadata with some of the ideas from distutils2 and other packaging systems (such as RPM and npm). It addresses some of the frustrations encountered with existing tools (for example, with cleanly separating different kinds of dependencies).

The main gains will be a REST API on PyPI offering full metadata access, as well as (hopefully) the ability to automatically generate distribution policy–compliant packages from upstream metadata.

The Wheel format is somewhat recent and not widely used yet, but it seems promising. Why is it not part of the Standard Library?

It turns out the Standard Library is not really a suitable place for packaging standards: it evolves too slowly, and an addition to a later version of the Standard Library cannot be used with earlier versions of Python. So, at the Python language summit earlier this year, we tweaked the PEP process to allow distutils-sig to manage the full approval cycle for packaging-related PEPs, and python-dev will only be involved for proposals that involve changing CPython directly (such as pip bootstrapping).

What is the future for Wheel packages?

We still have some tweaks to make before Wheel is suitable for use on Linux. However, pip is adopting Wheel as an alternative to the Egg format, allowing local caching of builds for fast virtual environment creation, and PyPI allows uploads of Wheel archives for Windows and macOS.
