Chapter 38

Using Programming Tools

If you’re looking to learn C, C++, or Java programming, this part of the book isn’t the right place to start. Unlike with Perl, Python, PHP, or even C#, becoming productive in languages like C, C++, and Java takes more than a little dabbling. This chapter focuses primarily on the tools Ubuntu offers you as a programmer.

Whether you’re looking to compile your own code or someone else’s, the GNU Compiler Collection (gcc) is there to help. It understands C, C++, Objective-C, Fortran, Ada, Go, and several other languages, which means you can try your hand at whatever interests you. Ubuntu also ships with hundreds of libraries you can link against, from the GUI toolkits behind GNOME and KDE to XML parsing and game development. Some libraries use C, others C++, and still others offer support for both, meaning you can choose what you’re most comfortable with.

Why Use C or C++?

Every language has benefits and shortcomings. Some languages make life easier for the programmer but at the expense of runtime speed. Languages such as Perl, Python, and even Java make it hard to guarantee that memory is fetched sequentially or that data fits in cache, due to overhead such as bounds checks on each access. They are useful languages, but they run more slowly than languages that are harder on the programmer but faster at runtime, such as C or Fortran.

For some programs, such as short shell scripts or quick one-liners in Perl to search text in a file, the difference in runtime speed is negligible. On a desktop computer, it might not matter that your music player is written in Python, and if it seems slow, buying a newer, faster desktop system might be an acceptable solution.

There are some applications, however, where the time needed to run your program can make a big difference. For example, using a slow-to-run language to perform calculations on scientific data, especially if you are doing it on high-performance computing (HPC) resources like a supercomputing cluster, is foolish; to take advantage of the platform, it is both time and cost effective to use the fastest language available to you, like C.

This idea was reinforced in a 2011 conversation between Matthew Helmke and Dan Stanzione, then deputy director of the Texas Advanced Computing Center at the University of Texas at Austin. Stanzione said that HPC resources are expensive, so it is often wiser to spend grant money to hire a good C programmer for a year than it is to run a bioinformatics program written in Perl or Python on an HPC system. As he put it, “If your computer costs $2,000, the programmer’s time is the dominant cost, and that is what drives software development. If your computer costs $100 million or more, then having a programmer spend an extra month, or year, or decade working on software optimization is well worth it. Toughen up and write in C.”

Programming in C with Linux

C is the programming language most frequently associated with UNIX-like operating systems such as Linux and BSD. Since the 1970s, the bulk of the UNIX operating system and its applications have been written in C. Because the C language doesn’t directly rely on any specific hardware architecture, UNIX was one of the first portable operating systems. In other words, the majority of the code that makes up UNIX doesn’t know and doesn’t care which computer it is actually running on. Machine-specific features are isolated in a few modules within the UNIX kernel, which makes it easy for you to modify them when you are porting to different hardware architectures.

Because C is so important to UNIX and Linux, we use it in the examples in this section. Much of what is discussed here also applies to other languages, perhaps with slight variations for language-specific features.

C is a compiled language, which means that your C source code is first analyzed by the preprocessor and then translated into assembly language before it’s translated into machine instructions that are appropriate to the target CPU. An assembler then creates a binary, or object, file from the machine instructions. Finally, the object file is linked to any required external software support by the linker. A C program is stored in a text file that ends with a .c extension and always contains at least one routine, or function, such as main(), unless the file is an include file (with a .h extension, also known as a header file) containing shared variable definitions or other data or declarations. Functions are the commands that perform each step of the task that the C program was written to accomplish.

Note

The Linux kernel is mostly written in C, which is why Linux works with so many different CPUs. To learn more about building the Linux kernel from source, see Chapter 22, “Kernel and Module Management.”

C++ is an object-oriented extension to C. Because C++ is very nearly a superset of C, most C programs compile correctly under a C++ compiler, and writing non-object-oriented code in C++ is possible. The reverse is not true: C compilers cannot compile C++ code.

C++ extends the capabilities of C by providing the necessary features for object-oriented design and code. C++ also provides some features, such as the capability to associate functions with data structures, that do not require the use of class-based object-oriented techniques. For these reasons, the C++ language enables existing UNIX programs to migrate toward object orientation over time.

Support for C++ programming is provided by gcc, which you run with the name g++ when you are compiling C++ code.

Using the C Programming Project Management Tools Provided with Ubuntu

Ubuntu is replete with tools that make your life as a C/C++ programmer easier. There are tools to create programs (editors), compile programs (gcc), create libraries (ar), control the source (Git, Subversion, Bazaar), automate builds (make), debug programs (gdb and ddd), and determine where inefficiencies lie (gprof).

The following sections introduce some of the programming and project management tools included with Ubuntu. If you have some previous UNIX experience, you will be familiar with most of these programs because they are traditional complements to a programmer’s suite of software.

Building Programs with make

You use the make command to automatically build and install a C program, and for that simple use it is an easy tool. If you want to create your own automated builds, however, you need to learn the special syntax that make uses; the following sections walk you through a basic make setup.

Using Makefiles

The make command automatically builds and updates applications by using a makefile. A makefile is a text file that contains instructions about which options to pass on to the compiler preprocessor, the compiler, the assembler, and the linker. The makefile also specifies, among other things, which source code files have to be compiled (and the compiler command line) for a particular code module and which code modules are needed to build the program—a mechanism called dependency checking.

The beauty of the make command is its flexibility. You can use make with a simple makefile, or you can write complex makefiles that contain numerous macros, rules, or commands that work in a single directory or traverse your file system recursively to build programs, update your system, and even function as document management systems. The make command works with nearly any program, including text processing systems such as TeX.

You could use make to compile, build, and install a software package, using a simple command like this:

matthew@seymour:~$ sudo make install

You can use the default makefile (usually called Makefile, with a capital M), or you can use make’s -f option to specify any makefile, such as MyMakeFile, like this:

matthew@seymour:~$ sudo make -f MyMakeFile

Other options might be available, depending on the contents of your makefile. For a simple program, you might not need a makefile at all: if you have a source file named hi.c, running make hi causes make to use its built-in rules to build the final executable. You can view those built-in rules with make -p.

Using Macros and Makefile Targets

Using make with macros can make a program portable. Macros allow users of other operating systems to easily configure a program build by specifying local values, such as the names and locations, or pathnames, of any required software tools. In the following example, macros define the name of the compiler (CC), the installer program (INS), where the program should be installed (INSDIR), where the linker should look for required libraries (LIBDIR), the names of required libraries (LIBS), a source code file (SRC), the intermediate object code file (OBJS), and the name of the final program (PROG):

# a sample makefile for a skeleton program
CC= gcc
INS= install
INSDIR = /usr/local/bin
LIBDIR= -L/usr/X11R6/lib
LIBS= -lXm -lSM -lICE -lXt -lX11
SRC= skel.c
OBJS= skel.o
PROG= skel

skel:  ${OBJS}
        ${CC} -o ${PROG} ${SRC} ${LIBDIR} ${LIBS}

install: ${PROG}
        ${INS} -g root -o root ${PROG} ${INSDIR}

Note

The indented lines in the previous example are indented with tabs, not spaces. This is important to remember! It is difficult for a person to see the difference, but make can tell. If make reports confusing errors when you first start building programs under Linux, check your project’s makefile for the use of tabs and other proper formatting.

Using the makefile from the preceding example, you can build a program like this:

matthew@seymour:~$ sudo make

To build a specified component of a makefile, you can use a target definition on the command line. To build just the program, you use make with the skel target, like this:

matthew@seymour:~$ sudo make skel

If you make any changes to any element of a target object, such as a source code file, make rebuilds the target automatically. This feature is part of the convenience of using make to manage a development project. To build and install a program in one step, you can specify the target of install like this:

matthew@seymour:~$ sudo make install

Larger software projects might have a number of traditional targets in the makefile, such as the following:

test—To run specific tests on the final software

man—To process an include or a troff document with the man macros

clean—To delete any remaining object files

archive—To clean up, archive, and compress the entire source code tree

bugreport—To automatically collect and then mail a copy of the build or error logs

Large applications can require hundreds of source code files. Compiling and linking these applications can be a complex and error-prone task. The make utility helps you organize the process of building the executable form of a complex application from many source files.

Using the autoconf Utility to Configure Code

The make command is only one of several programming automation utilities included with Ubuntu. Others include pmake (which performs parallel builds); imake (a dependency-driven makefile generator used for building X11 clients); automake; and one of the newer tools, autoconf, which builds shell scripts that can be used to configure program source code packages.

Building many software packages for Linux that are distributed in source form requires the use of GNU’s autoconf utility. This program builds an executable shell script named configure that, when executed, automatically examines and tailors a client’s build from source according to software resources, or dependencies (such as programming tools, libraries, and associated utilities) that are installed on the target host (your Linux system).

Many Linux commands and graphical clients for X downloaded in source code form include configure scripts. To configure the source package, build the software, and then install the new program, the root user might use the script like this (after uncompressing the source and navigating into the resulting build directory):

matthew@seymour:~$ ./configure ; make ; sudo make install

The autoconf program uses a file named configure.in (called configure.ac in newer autoconf releases) that contains a basic set of rules, or macros. You can generate a starting point for this file with the autoscan command, which produces a configure.scan file for you to review and rename. Building a properly executing configure script also requires a template for the makefile, named Makefile.in. Although you can create the dependency-checking configure script manually, you can sidestep much of the complexity by using a graphical project development tool such as KDE’s KDevelop or GNOME’s Glade. (See the “Graphical Development Tools” section, later in this chapter, for more information.)

Debugging Tools

Debugging is both a science and an art. Sometimes, the simplest tool—the code listing—is the best debugging tool. At other times, however, you need to use other debugging tools, such as splint, gprof, and gdb.

Using splint to Check Source Code

The splint command is similar to the traditional UNIX lint command: It statically examines source code for possible problems, and it also has many additional features. Even if your C code meets the standards for C and compiles cleanly, it might still contain errors. splint performs many types of checks and can provide extensive error information. For example, this simple program might compile cleanly and may even run:

matthew@seymour:~$ gcc -o tux tux.c
matthew@seymour:~$ ./tux

But the splint command might point out some serious problems with the source:

matthew@seymour:~$ splint tux.c
Splint 3.1.2 -- 29 Apr 2009

tux.c: (in function main)
tux.c:2:19: Return value (type int) ignored: putchar(t[++j] -...
  Result returned by function call is not used. If this is intended, can cast
  result to (void) to eliminate message. (Use -retvalint to inhibit warning)
Finished checking -- 1 code warning

You can use the splint command’s -strict option, like this, to get a more verbose report:

matthew@seymour:~$ splint -strict tux.c

gcc also supports diagnostics through the use of extensive warnings (through the -Wall and -pedantic options):

matthew@seymour:~$ gcc -Wall tux.c
tux.c:1: warning: return type defaults to 'int'
tux.c: In function 'main':
tux.c:2: warning: implicit declaration of function 'putchar'

Using gprof to Track Function Time

You use the gprof (profile) command to study how a program spends its time. If a program is compiled and linked with the -pg flag, a gmon.out file is created when it executes, with data on how often each function is called and how much time is spent in each function. gprof parses and displays this data. An analysis of the output generated by gprof helps you determine where performance bottlenecks occur. Using an optimizing compiler can speed up a program, but taking the time to use gprof’s analysis and revising bottleneck functions significantly improves program performance.

Doing Symbolic Debugging with gdb

The gdb tool is a symbolic debugger. When you compile a program with the -g flag, the symbol tables are retained, and you can use a symbolic debugger to track program bugs. The basic technique is to invoke gdb after a core dump (which involves taking a snapshot of the memory used by a program that has crashed) and get a stack trace. The stack trace indicates the source line where the core dump occurred and the functions that were called to reach that line. Often, this is enough to identify a problem. It isn’t the limit of gdb, though.

gdb also provides an environment for debugging programs interactively. Invoking gdb with a program enables you to set breakpoints, examine the values of variables, and monitor variables. If you suspect a problem near a line of code, you can set a breakpoint at that line and run gdb. When the line is reached, execution is interrupted. You can check variable values, examine the stack trace, and observe the program’s environment. You can single-step through the program to check values. You can resume execution at any point. By using breakpoints, you can discover many bugs in code.

A graphical X Window interface to gdb is called the Data Display Debugger, or ddd.

Using the GNU C Compiler

If you elected to install the development tools package when you installed Ubuntu (or perhaps later on, using synaptic), you should have the GNU C compiler (gcc). Many different options are available for the GNU C compiler, and many of them are similar to those of the C and C++ compilers that are available on other UNIX systems. Look at the man page or information file for gcc for a full list of options and descriptions.

Note

The GNU C compiler is a part of the GNU Compiler Collection, which also includes compilers for several other languages.

When you build a C program using gcc, the compilation process takes place in several steps:

  1. First, the C preprocessor parses the file. To do so, it sequentially reads the lines, includes header files, and performs macro replacement.

  2. The compiler parses the modified code to determine whether the correct syntax is used. In the process, it builds a symbol table and creates an intermediate object format. Most symbols have specific memory addresses assigned, although symbols defined in other modules, such as external variables, do not.

  3. In the last compilation stage, linking, the GNU C compiler ties together different files and libraries and then links the files by resolving the symbols that had not previously been resolved.

Note

Most C programs compile with a C++ compiler if you follow strict ANSI rules. For example, you can compile the standard hello.c program (everyone’s first program) with the GNU C++ compiler. Typically, you name the file something like hello.cc, hello.C, hello.c++, or hello.cxx. The GNU C++ compiler accepts any of these names.

Programming in Java with Linux

The Java programming language was originally developed by Sun Microsystems in the 1990s. The goals were to implement a virtual machine and a language with a familiar C-like syntax, but simpler, along with the promise of “write once, run anywhere,” backed by a free Java runtime for all popular platforms. They succeeded.

In May 2007, Sun Microsystems released almost all of its Java technologies with an open source GNU GPL license. Development continued at Sun and with a community of outside contributors, mostly sponsored by companies with an interest in helping their customers also use Java.

In 2010, Oracle bought Sun, including the few remaining proprietary bits of Java. As a result, multiple implementations of the Java virtual machine (JVM) have appeared. There is an implementation from Oracle, unsurprisingly called Oracle Java, which has some proprietary enhancements. The problem is that any code written using these enhancements will only run on the Oracle Java JVM that it was written for, violating the idea of “write once, run everywhere.” There is also a completely open source implementation called OpenJDK (Open Java Development Kit) that only includes code written or added using the same GNU GPL license. OpenJDK is sponsored by a community that includes Red Hat, IBM, Apple, and others. It is now considered the reference Java and is the one you should install and use unless an employer tells you otherwise because they use something else.

Java is a class-based, object-oriented language whose programs are compiled to bytecode that can run on any Java runtime on any platform (Linux, Windows, macOS, and others). Java uses an automatic garbage collector to manage memory, sparing programmers the burden of manual memory management. The syntax is similar to that of C++, which largely influenced it.

There are two Java packages in the Ubuntu repositories, both of which are OpenJDK related. One is the Java Runtime Environment (JRE), which is all you need to run Java programs; to use it, install the default-jre package. The other is the Java Development Kit (JDK), which is needed to write Java programs; to use it, install the default-jdk package. If you install the JDK, the JRE features are included. The JDK also includes development and debugging tools and libraries.

There is also Kotlin, a JVM-based language that has become Google’s preferred language for Android development. See “Beginning Mobile Development for Android,” later in this chapter, for how to develop for Android. To install just the Kotlin compiler, install the Snap package with sudo snap install kotlin --classic.

Graphical Development Tools

This section branches out into information that more obviously applies to many languages. For example, Java is in widespread use, and you can develop in Java from Ubuntu along with any of the popular programming languages listed in Chapter 39, “Using Popular Programming Languages.”

Ubuntu has a number of graphical prototyping and development environments available. If you want to program in Java, for example, using your favorite integrated development environment (IDE) or a language with a standard software development kit (SDK), you can do that. If you want to build client software for KDE or GNOME, you might find the KDevelop and Glade programs extremely helpful. You can use each of these programs to build graphical frameworks for interactive windowing clients, and you can use each of them to automatically generate the necessary skeleton of code needed to support a custom interface for your program.

IDEs and SDKs

IDEs and SDKs have become extremely popular. Although some programmers still prefer to write and edit software using a standard text editor, such as nano or vi (covered in Chapter 12, “Command-Line Master Class, Part 2”), many prefer using a tool that is more powerful. One commonly used tool, emacs, started out as a text editor, but as more and more features were added, it evolved into something more (see Chapter 12). By adding tools and features to make the programmer’s life easier, emacs unintentionally became the template for modern IDEs.

Some IDEs support multiple languages, like emacs does. Others focus on only one language. Most include not only programming language–specific features like code highlighting to help you read and browse code more quickly and efficiently, but also contain a compiler and debugger and even build automation tools. If you read through the details earlier in this chapter of using make with C, you can understand the value added.

So, what is the downside? Well, you can’t run a typical IDE on a server because you need a graphical interface, so if you are working on code that will run on a server that only has a command-line or text interface available to you, you need to make sure you are comfortable with traditional methods. This doesn’t mean you can’t use a local desktop machine for development using an IDE and then push your code out to the server, but it means you should cover your bases—just in case.

The most commonly used IDEs seem to also be used most frequently by Java developers. We discuss several of them in this section. You should download these IDEs directly from the providers to ensure that you install the most current and standard versions.

Eclipse was originally created by IBM but has been spun off to a foundation created just for it. The nonprofit Eclipse Foundation coordinates efforts of volunteers and companies that contribute time, money, and code to this open source project. Eclipse is very widely used and popular. It supports multiple languages, and many plug-ins are available to extend its capabilities.

NetBeans is an extremely popular IDE that works with multiple languages. It is now owned by Oracle but was started by student programmers who were looking to create more useful tools for their needs. Others asked to contribute code, and soon NetBeans developed into a commercial program with plug-ins to extend its capabilities, many contributed by a large supporting community. Sun Microsystems, which developed and owned Java, bought NetBeans and released it under an open source license. When Oracle acquired Sun, it also acquired NetBeans; due to its popularity, it is worth a look. You can learn about and download NetBeans from https://netbeans.org.

Visual Studio Code is built on open source, which means it combines open source code with proprietary code. The website provides a .deb download for installation on Ubuntu. It is pretty and comes highly recommended by developers who use it. You can learn more and download it from https://code.visualstudio.com/.

Oracle, which owns Java, provides an IDE for Java called Oracle JDeveloper. It is most commonly used in enterprise settings, where a team of developers work together using a standard tool. It is the least popular of the options mentioned here. You can learn more about it at www.oracle.com/technetwork/developer-tools/jdev/overview/index.html.

An SDK is a set of software development tools that are focused not on one language but on something narrower, such as one software package or framework (for example, the Android development SDK, described in the later section, “Beginning Mobile Development for Android”). A company may provide an SDK when it wants to encourage outsiders to write programs that run on the company's product, such as its platform (like a game system from Nintendo or Sega) or operating system (like Android or iOS). Many open source enthusiasts will not participate in writing code for these platforms, so SDKs are less popular in this environment than they are on Windows and other platforms. Also, depending on the software license used to release the SDK, the potential uses of the code produced using the SDK can be limited, and not everyone is comfortable with those limitations. However, many SDKs are in use, and if you want to write code for a project that releases an SDK, it is likely to contain useful code examples, tools, and documentation to make the task much easier. Do your homework and make a choice that you are comfortable with.

Using the KDevelop Client

You can launch the KDevelop client from the applications menu or from the command line of a terminal window, like this:

matthew@seymour:~$ kdevelop &

After you press Enter, the KDevelop Setup Wizard runs, and you are taken through several short wizard dialogs that set up and ensure a stable build environment. You must then run kdevelop again (either from the command line or by clicking its menu item under the desktop panel’s Programming menu). You then see the main KDevelop window and can start your project by selecting KDevelop’s Project menu and clicking the New menu item.

You can begin building your project by stepping through the wizard dialogs. When you click the Create button, KDevelop automatically generates all the files that are normally found in a KDE client source directory (including the configure script, which checks dependencies and builds the client’s makefile). To test your client, you can either first click the Build menu’s Make menu item (or press F8) or just click the Execute menu item (or press F9), and the client is built automatically. You can use KDevelop to create KDE clients, plug-ins for the Konqueror browser, KDE kicker panel applets, KDE desktop themes, Qt library-based clients, and even programs for GNOME.

The Glade Client for Developing in GNOME

If you prefer to use GNOME and its development tools, the Glade GTK+ GUI builder can help you save time and effort when building a basic skeleton for a program. You launch Glade from the desktop panel’s Programming menu.

When you launch Glade, a directory named Projects is created in your home directory, and you see a main window. You can use Glade’s File menu to save the blank project and then start building your client by clicking and adding user interface elements from the Palette window. For example, you can first click the Palette window’s Gnome button and then click to create your new client’s main window. A window with a menu and a toolbar appears—the basic framework for a new GNOME client.

Beginning Mobile Development for Android

Many Linux users have embraced not only smart phones but specifically those based on Android. Android, owned by Google and based on the Linux kernel, is one of the best-selling platforms for smart phones and tablet computers. The Android platform includes the operating system, middleware, and several key applications. Middleware and application examples include an integrated web browser based on WebKit, optimized graphics libraries, media support for most formats, and structured data storage with SQLite. It also includes software for hardware-dependent functions such as GSM, Bluetooth, 3G, Wi-Fi, camera, GPS, and more.

Most of the Android source code is freely available and licensed using the Apache License. Google operates an online app store called Google Play, where users of Android phones or tablet computers can download free and for-payment applications to extend the functionality of their devices. Other third-party sites exist for the same purpose, thereby creating many paths for making software available to Android users.

This section helps you get started writing software for Android on your Ubuntu machine by describing how to find and set up the development tools you need. It discusses the basic setup details for developing Android software.

Before we get further into the details of developing software for Android, a more detailed introduction to the Android architecture is appropriate. Our description starts with the hardware and builds layer upon layer from that foundation.

Hardware

Although it has been proved possible to run Android on other platforms, the main target platform is ARM. ARM processors are 32-bit or 64-bit reduced instruction set computer (RISC) processors. Like other RISC processors, they are designed for speed, with the idea that a simpler set of processor instructions creates greater efficiency and throughput. ARM processors are also designed for low power usage, making them ideal for mobile and embedded devices. Indeed, ARM is the dominant processor in these markets.

Linux Kernel

The first layer of software to run in the Android stack is a customized Linux kernel. Most of the customizations take the form of feature enhancements or optimizations to help Android and Linux work together more efficiently. Originally, Google made a point of contributing code it developed, but some of the features were rejected by the mainline Linux kernel developers for inclusion in the standard Linux kernel. This meant that to keep its desired code customizations, Google had to create a fork of the Linux kernel, which is permissible due to the license under which the kernel is released. Chapter 22, “Kernel and Module Management,” provides an introduction to the Linux kernel.

Libraries

Software libraries run on top of the kernel. These libraries are used by the higher-level components of Android and are made available to developers to use when writing Android applications using the Android software development kit (SDK), which is discussed later in this chapter. These libraries include a version of the standard C library (libc), libraries for recording and playback of many popular media formats, graphics and web browser engines, font rendering, and more.

Android Runtime

Some of the higher-level components of Android in the Application layer (described next) interact directly with the libraries just described. Other parts of the Application layer interact with the libraries via the Android Runtime. Android software is primarily written in Java, using Google-developed Java libraries. That software runs on the Android Runtime, which is composed of some additional core libraries running on top of a special virtual machine called Dalvik. The core libraries provide most of the functionality of Java. Dalvik performs just-in-time (JIT) compilation and is optimized for mobile devices.

Application Framework

The Application Framework is a set of useful systems and services that top-level applications can call. These systems and services provide standardized means of accessing system information, using device hardware, creating notifications, and so on. They are the same set used by the core applications included in Android, so end user–created applications can have the same look, feel, and interaction style as those provided by Android.

Applications

Android comes with a set of core applications, including a web browser, programs for text messaging, managing contacts, a calendar, an email client, and more. As noted earlier, Android software is written in Java.

Installing Android Studio

Android provides Android Studio, an integrated development environment (IDE) bundled with the software development kit (SDK), a set of tools that enable the creation of applications to run on Android. Android Studio has versions available for Linux, macOS, and Windows.

Download the latest version of Android Studio from the Android Developers website at https://developer.android.com/studio/index.html. For Ubuntu, you need the Linux version, which is made available as a .zip file. Unpack the file in the location where you want the development kit to reside (for example, /home/matthew). Doing so creates a new directory called android-studio. Note where you put this directory; you will need the information later.

Navigate to the android-studio/bin/ directory and run studio.sh:

matthew@seymour:~$ ./studio.sh

The first time you run Android Studio, a wizard walks you through the initial setup procedure and then downloads and installs any basic components you need.

Creating Your First Android Application

After you have installed Android Studio and all the necessary SDK packages, you are ready to begin. Click Start a new Android Studio Project and use the wizard to enter the basic details of your new application.

Version Control Systems

Deciding whether to include information on version control systems in this chapter was difficult. On one hand, someone who only wants to scratch an itch quickly may not be interested in setting up a version control system. On the other hand, these systems are not difficult to set up, especially when used with the assistance of a code hosting site like the ones discussed in this chapter, and they are immensely valuable if code is to have a life outside your system.

Although you can use make to manage a small software project (see Chapter 38, “Using Programming Tools”), larger software projects require document management, source code controls, security, and revision tracking as the source code goes through a series of changes during its development. Version control systems provide utilities for this kind of large software project management. Changes to files placed in version control are tracked. Files can be checked out by one developer, changed in their local environment, and tested before those changes are saved in the version control system. Changes that are later discovered to be unwanted can be found and removed from the tracked files. Various version control systems manage projects differently; some use a central repository, others a distributed format where any and every copy could become the master copy.

The next few sections introduce the most commonly used version control systems at the moment: Git, Bazaar, Subversion, and Mercurial. You have certainly heard of others, and new ones crop up every few years. Each has strengths and benefits. At the end of the chapter, in the “References” section, you can find a list of resources for learning more about these version control systems to further your knowledge after you peruse this chapter’s short and basic introduction to each one.

Note

Subversion and Mercurial are still in heavy use, but most developers today have switched to Git and Bazaar for new projects. Keep this in mind as you read the next few sections.

Managing Software Projects with Git

Git, initially created by Linux kernel creator Linus Torvalds, was first released in 2005 to host all development files for the Linux kernel. It is now actively developed by a large team of developers led by Junio Hamano and is widely used by many other open source projects.

Git works without a central repository and approaches version control from a different perspective than other systems while accomplishing the same goals. Every directory that is tracked by Git acts as an individual repository, with full history and source changes for whatever is contained in it. There is no need for central tracking. Source code control is done from the command line, as shown in the following examples. You need to install Git from the Ubuntu software repositories, where it is called git.

To create a new repository, access the top-level directory for the project and enter the following:

matthew@seymour:~$ git init

To check out code from an existing central repository, you must first tell Git where that repository is:

matthew@seymour:~$ git remote add origin git://path_to_repository/directory/proj.git

Then you can pull the code from that repository to your local one:

matthew@seymour:~$ git pull git://path_to_repository/directory/proj.git

To add new files to the repository, use the following:

matthew@seymour:~$ git add file_or_dir_name

To delete files from the repository, use this:

matthew@seymour:~$ git rm file_or_dir_name

To check in code after you have made changes, you must first set your name and email address, which Git records with every commit. Use git config --global user.name and git config --global user.email to do this; the values are stored in your ~/.gitconfig file.
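
A minimal identity setup looks like the following; the name and address are placeholders:

```shell
# Record your identity once; Git stores it in ~/.gitconfig
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
```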

Then, use the -m flag to add a note, which is a good idea to help others understand what the commit contains:

matthew@seymour:~$ git commit -m 'This fixes bug 204982.'

In Git, a commit does not change the remote files but only commits the change to your local copy. If you want others to see your changes, you must push the changes to them:

matthew@seymour:~$ git push git://path_to_repository/directory/proj.git
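
Putting these commands together, a complete local session might look like the following sketch; the directory, file name, and commit message are only examples:

```shell
cd "$(mktemp -d)"                   # work in a throwaway directory
git init -q demo && cd demo         # create a new local repository
git config user.name "Your Name"    # identity is required before committing
git config user.email "you@example.com"
echo "hello" > README               # create a file to track
git add README                      # stage it
git commit -q -m 'Add README.'      # record the change locally
git log --oneline                   # the new commit is now listed
```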

Many open source projects that use Git host their code using GitHub. You can find it at https://github.com.

Managing Software Projects with Bazaar

Bazaar was created by Canonical and first released in 2007 to host all development files for Ubuntu and other projects. It is actively developed and used by Canonical and Ubuntu developers and also by other open source projects. Launchpad, covered later in this chapter, uses Bazaar.

Bazaar supports working with or without a central repository. Changes are tracked over any and all files you check out, including multiple versions of files. Source code control is done from the command line, as shown in the following examples. You need to install Bazaar from the Ubuntu software repositories, where it is called bzr.

There are two ways to create a new repository. If you are starting with an empty directory, use the following:

matthew@seymour:~$ bzr init your_project_name

If you are creating a repository for an existing project, go to the top-level directory for the project and enter the following:

matthew@seymour:~$ bzr init
matthew@seymour:~$ bzr add .

To check out code from an existing central repository, use this:

matthew@seymour:~$ bzr checkout your_project_name

To check your changes before you check them in, you can use bzr diff or bzr cdiff. They do the same thing, but bzr cdiff does so with colored output:

matthew@seymour:~$ bzr cdiff

To check in code after you have made changes, use the -m flag to add a note, which is a good idea so that others know what the commit contains:

matthew@seymour:~$ bzr commit -m "This fixes bug 204982."

In Bazaar, a commit does not change the remote files but only commits the change to your local copy. If you want others to see your changes, you must push the changes to them:

matthew@seymour:~$ bzr push sftp://path.to.main/repository

To update the source code in your local repository from the main repository to make sure you have all the latest changes to the code from other developers, use the following:

matthew@seymour:~$ bzr pull

Many open source projects that use Bazaar host their code using Launchpad, which is where Ubuntu development takes place. You can find more about it later in this chapter and also at https://launchpad.net.

Managing Software Projects with Subversion

Subversion was first created in 2000 as a replacement for an older version control system called the Concurrent Versions System (CVS). At that time, CVS was 10 years old, and although it served its purpose well, it lacked some features that developers wanted. Subversion is now actively developed and widely used.

In Subversion, you check out a file from a repository where code is stored in a client/server fashion. Then, changes are tracked over any and all files you check out, including multiple versions of files. You can use Subversion to backtrack or branch off versions of documents inside the scope of a project. It can also be used to prevent or resolve conflicting entries or changes made to source code files by multiple developers. Source code control with Subversion is done from the command line, as shown in the following examples. You first need to install Subversion from the Ubuntu software repositories, where it is called subversion.

You can create a new repository as follows:

matthew@seymour:~$ svnadmin create /path/to/your_svn_repo_name

To add a new project to the repository, go to the top directory of the code that is going to be placed into the repository. Then create three subdirectories: branches, tags, and trunk. Move all of your files into trunk and enter the following:

matthew@seymour:~$ svn import project file:///your_svn_repo_name/your_project -m "First Import"

To check out code from an existing central repository, use this:

matthew@seymour:~$ svn checkout file:///your_svn_repo_name/your_project/trunk your_project

To check in code after you have made changes, use the -m flag to add a note, which is a good idea so that others know what the commit contains:

matthew@seymour:~$ svn commit -m "This fixes bug 204982."

To update the source code in your local repository from the main repository to make sure you have all the latest changes to the code from other developers, use this:

matthew@seymour:~$ svn update

To add new files to the repository, use the following:

matthew@seymour:~$ svn add file_or_dir_name

To delete files from the repository, use this:

matthew@seymour:~$ svn delete file_or_dir_name

Many open source projects that use Subversion host their code using SourceForge, which also works with Git. You can find it at https://sourceforge.net.

Continuous Integration and Continuous Delivery and DevOps Tools

Continuous integration and continuous delivery (CI/CD) is the combined practice of first merging all the various developers’ work into a shared main branch several times a day and then building, testing, and releasing that software with greater velocity. The practice has become an integral part of DevOps. The implementation is called a CI/CD pipeline. Let’s start with the high-level view.

Continuous integration (CI) involves a set of practices that your team agrees to use. When a team member is working on a feature or a bug fix, she checks out the most current master branch of the code from the code repository. The team member writes her new code and, after checking that it works locally, immediately checks the changes in to that master branch. The team agrees to keep changes as small as possible (these are often called atomic changes). Small changes are easy to roll back if problems occur, and it is easy to track down the cause of a problem if only a small amount of code needs to be reviewed.

The goal of CI is to first establish a consistent way for teams to work, and then to automate the process of building, packaging, and testing applications. As this automation is created, consistency in the integration process is achieved, which makes it even easier for team members to check out and in small changes frequently. This stage typically implements a form of continuous testing into the CI.
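
The build-and-test automation can be sketched as a small shell script; the stage bodies here are stand-ins for whatever commands your project actually uses:

```shell
set -e                                # abort the pipeline at the first failing stage
build() { echo "building..."; }       # stand-in for a real build, e.g. 'make'
run_tests() { echo "testing..."; }    # stand-in for the test suite, e.g. 'make test'
package() { echo "packaging..."; }    # stand-in for creating the artifact
build
run_tests
package
echo "pipeline OK"
```

Because set -e is active, any stage that fails stops the pipeline immediately, so a broken build never reaches the packaging step.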

The largest benefit comes when you add continuous delivery (CD), which automates the delivery of applications that pass the integration process. This could be delivery to a testing or staging environment or even to a production environment (this last option changes “continuous delivery” to “continuous deployment”).

CI/CD improves team collaboration and code quality and pushes changes out to production much more quickly. Whether the changes are bug fixes or new or improved features, a greater velocity of change benefits and pleases customers.

CI/CD Tools

Many tool types are useful in a DevOps CI/CD context. Each has its place and some of the tools available fit into multiple categories. It is up to your team (or leadership) to select the toolchain that is right for your context.

The main type of tool is the automation server, which manages the CI/CD process. Popular options include Jenkins, Travis CI, and Spinnaker; build tools such as Maven are commonly invoked by these servers.

Another type of tool is for configuration management, which is important, especially in a cloud-computing context. The use of containers will inform the toolchain options your team selects. See Chapters 31, 32, and 33 for more on these technologies.

Chaos Engineering

Started by Netflix as a way to find out whether its servers could withstand production problems in its cloud host’s infrastructure, Chaos Engineering aims to test large-scale cloud deployments in ways no other testing can. The goal is to find systemic problems and their causes before there are customer-impacting failures. This is done by intentionally and carefully simulating failures in things like networking, DNS, or system resource use to see how those and other issues impact the system as a whole. When problems are found, mitigation schemes and automated failover methods can be implemented to enhance overall system reliability and resilience.

Most DevOps and Site Reliability Engineering teams are implementing Chaos Engineering into every part of their process, from early in the CI/CD pipeline through any testing or staging environments and even into production. Why production? Because today’s large-scale cloud applications are constantly changing and no testing or staging environment can accurately mimic what an application service or microservice will experience in production.

For transparency, the book’s author works for a company, Gremlin (https://gremlin.com), that provides a Software as a Service Chaos Engineering implementation. Many open source options are also available for teams to implement themselves.

Canonical-created Tools

The tools in this section are created and used by Canonical for Ubuntu development. They are not in widespread use elsewhere.

Launchpad

Launchpad is an infrastructure created to simplify communication, collaboration, and processes within software development. To get started, sign up for an account at https://launchpad.net.

Launchpad is where much of Ubuntu development takes place, although some has moved to Git. It integrates Bazaar, the version control system introduced earlier, to make keeping track of changes to software code much simpler and to permit those changes to be reverted when necessary while tracking who is performing the actions.

For developers using Launchpad, this means that the process has become a bit simpler. They can concentrate on writing and editing their code and let Launchpad deal with keeping track of the changes and creating their packages. This is useful for active developers who write and maintain big projects that need source code version control and so on. Launchpad also hosts bug reporting and tracking, mailing lists, software interface translation, and much more.

Launchpad users can create a personal package archive (PPA). This is a much simpler way to make programs available. Anyone with a PPA can upload source code to be built into packages. Those packages are then made available in an apt repository that can be added to any Ubuntu user's list of source repositories, and the software can be installed or removed using any of the standard package management tools in Ubuntu, such as apt, Ubuntu Software Center, and Synaptic. Instructions are included on the web page for each Launchpad PPA describing how to add that repository, which makes this an easy way to share software that can be added and removed by even nontechnical end users.

Ubuntu Make

Ubuntu Make is a command-line tool that sets up your system for development use quickly and easily. It allows you to download the latest version of popular developer tools and their dependencies, enable multi-arch, and more. Install ubuntu-make to get started. Then run commands like this to install tools:

matthew@seymour:~$ umake android

After you enter this, you are prompted to accept the install path and Google license, and then Ubuntu Make downloads and installs Android Studio and the latest SDK, configures everything, and, if you are running on a 64-bit platform, it even adds a Unity launcher icon.

See https://wiki.ubuntu.com/ubuntu-make to learn more about the packages available and get started.

Creating Snap Packages

Snap packaging allows a single packaged application to be used across multiple Linux distributions. Although snap packages are not expected to replace traditional packaging formats like .deb, which we cover in Chapter 40, “Helping with Ubuntu Development,” it is reasonable to expect snaps to find wide use for applications provided by third-party vendors—for example, Mozilla is already committed to using snap packages for its Firefox web browser—and for applications intended for use on devices such as phones, routers, switches, and the new category of IoT (Internet of Things) devices (see https://en.wikipedia.org/wiki/Internet_of_things). For desktop applications, snap packaging enables a developer to submit free or even for-payment apps for review and inclusion in the Ubuntu Software application (see Chapter 9, “Managing Software”).

The tool used to create snap packages is Snapcraft, available from https://snapcraft.io. Snapcraft is designed to bundle your already-created application with any and all dependencies for easy installation and updating.
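
The package is described by a snapcraft.yaml file. A minimal sketch might look like the following; the field values are illustrative, and the exact set of required fields varies by Snapcraft version, so consult the Snapcraft documentation for a real package:

```
name: hello-sketch        # illustrative snap name
version: '0.1'
summary: A minimal example snap
description: |
  An illustrative snapcraft.yaml; a real package would build and
  ship an actual application here.
confinement: strict       # run the application fully confined
grade: devel              # not yet ready for a stable release channel
parts:
  hello:
    plugin: nil           # placeholder part that builds nothing
```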

Another helpful community project related to snap packaging is Snappy Playpen, at https://github.com/ubuntu/snappy-playpen, which exists to share knowledge and best practices about snap packaging while helping test the packages that community members and others create.

Bikeshed and Other Tools

Bikeshed was started by Dustin Kirkland in September 2010 as a project to package a series of tools he wrote to scratch some personal itches that he had as an Ubuntu developer working on Canonical’s Ubuntu server team or that he thought would be useful to others. All good developers, system administrators, and DevOps gurus eventually write scripts to perform specific tasks they find useful. The Bikeshed project began when Dustin gathered his scripts together and made them accessible to the world. The wider Ubuntu community is invited to give suggestions or submit patches to make them better.

Bikeshed sometimes works as an incubator, housing specific tools until they are ready to stand alone as a separate package or until they are accepted into an existing package. All the tools run from the command line, and most have useful man pages. (Others are still being written.) The project describes itself as “a collection of random useful tools and utilities that either do not quite fit anywhere else, or have not yet been accepted by a more appropriate project. Think of this package as an ‘orphanage,’ where tools live until they are adopted by loving, accepting parents.” The slogan for Bikeshed on the Launchpad project page is “While others debate where some tool should go, we put it in the Bikeshed.”

Note

To give credit where credit is due, much of the content in this section comes, with permission, from Dustin’s blog, at https://blog.dustinkirkland.com, from direct communication with him, and from the tool man pages. Dustin also wrote Byobu, a tool that is covered at the end of Chapter 12 and that contains some of the tools that have graduated from Bikeshed.

You can get the following tools by installing the Bikeshed package from the Ubuntu repositories:

apply-patch—Wraps the patch utility and makes it a little easier to use by automatically detecting the patch strip level.

bch—Determines what files have been modified in the current Bazaar (bzr) tree, opens debian/changelog for editing, uses dch, and appends a changelog entry for the current list of modified files.

bzrp—Operates the same as bzr except that output is piped to a pager to make reading easier.

cloud-sandbox—Launches a cloud instance and connects directly to it using ssh, with the cloud system running in isolation, in what is generally called a sandbox.

dman—Remotely retrieves man pages from https://manpages.ubuntu.com but reads them on the local system. This is useful for reading the man page for a utility you do not have installed on the local system.

pbget—Retrieves content uploaded to a pastebin by pbput or pbputs.

pbput—Uploads text files, binary files, or entire directory structures to a pastebin. It is similar to pastebinit, described later, but adds support for binaries and only uses https://pastebin.com.

pbputs—Operates exactly like pbput, except the user is prompted for a passphrase for encrypting the content with gpg before uploading. pbget automatically prompts the user for the preshared passphrase when the file is requested.

release—Creates a release of a project for Ubuntu.

release-build—Takes project information for a bzr project in a Launchpad PPA that uses specific parameters and builds the project as an upstream project that can then be released to Ubuntu.

socks-prox—Establishes an encrypted connection for tunneling traffic through a socks proxy.

system-search—Performs a unified search through a set of system commands, packages, documentation, and files.

uquick—Performs a quick server installation.

what-provides—Determines which package provides a specific binary in your path.

The contents of Bikeshed are expected to change over time. Some of these tools may graduate to standalone tools, merge into other existing packages, or get added to more official upstream packages. You can always check the Launchpad page to find a current list of Bikeshed’s contents.

The rest of the tools in this section are not actually part of Bikeshed but have either graduated from Bikeshed and been spun off as freestanding tools or were developed individually by Dustin or others in the Ubuntu community. All the tools run from the command line and have useful man pages.

Other useful tools that you can find in the Ubuntu repositories include the following:

pastebinit—Uploads a file or the result of a command to the pastebin you want and gives you the URL in return. It was written by Ubuntu developer Stéphane Graber, and you can find it at https://launchpad.net/pastebinit or from the Ubuntu repositories. By default, it uses https://pastebin.com, but it can be configured to use others, such as https://paste.ubuntu.com.

run-one—Runs no more than one unique instance of a command with a unique set of arguments. This is often useful with cron jobs, when you want no more than one copy running at a time but where a cron job has the potential to run long and finish after the next scheduled run. Also see run-one-constantly, run-one-until-failure, and run-one-until-success in the man page.
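
In a crontab, that might look like the following entry; the rsync backup job is only an example:

```
# Hypothetical crontab entry: try a backup every 15 minutes, but never
# start a second copy while a previous run is still in progress.
*/15 * * * * run-one rsync -az /home /backup
```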

run-this-one—Operates exactly like run-one except that it uses pgrep and kill to find and kill any running processes owned by the user and matching the target commands and arguments. It blocks while trying to kill matching processes until all matching processes are dead.

keep-one-running—Operates exactly like run-one except that it respawns "COMMAND [ARGS]" any time COMMAND exits (zero or nonzero).

ssh-import-id—Uses a secure connection to contact a public key server (https://launchpad.net by default) to retrieve one or more users’ public keys and append them to the current user’s ~/.ssh/authorized_keys file.

bootmail—Called by cron to send an email any time a system is rebooted. It reads a list of one or more comma-separated email addresses from /etc/bootmail/recipients and then loops over a list of white space–separated files in /etc/bootmail/logs to construct the email. This is useful for knowing when remote systems are rebooted.

purge-old-kernels—Looks for old kernels on your system and removes them. This is a part of the Byobu package.

col1—Splits and prints a given column, where the column to print is the name of the script program you are running (col1 to col9). col2 to col9 are symlinks to col1; their behavior simply changes based on the name called. For example, instead of using awk '{print $5}', you can use col5. This used to be in Bikeshed but is now part of the Byobu package.
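
For example, the awk command below shows the equivalent behavior; col5 itself is available once Byobu is installed:

```shell
# Print only the fifth whitespace-separated field of each line
echo "one two three four five six" | awk '{print $5}'   # prints "five"
# With Byobu installed, the same result comes from:
# echo "one two three four five six" | col5
```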

wifi-status—Monitors a wireless interface for connection and associated information. This used to be in Bikeshed but is now part of the Byobu package.

References

www.cprogramming.com—A useful website for learning C and C++

https://gcc.gnu.org—The main website for gcc, the GNU Compiler Collection

www.gnu.org/software/autoconf/autoconf.html—More information about the GNU Project’s autoconf utility and how to build portable software projects

www.qt.io—The main Qt website

https://glade.gnome.org—Home page for the Glade GNOME developer’s tool

www.kdevelop.org—Site that hosts the KDevelop Project’s latest versions of the KDE graphical development environment, KDevelop

The C Programming Language by Brian W. Kernighan and Dennis M. Ritchie—The standard resource for learning C

The Annotated C++ Reference Manual by Margaret A. Ellis and Bjarne Stroustrup—An excellent resource for C++

https://subversion.apache.org—The main website for Subversion

https://bazaar.canonical.com/en/—The main website for Bazaar

https://git-scm.com—The main website for Git

https://jenkins.io—The main website for Jenkins

https://maven.apache.org—The main website for Maven

https://travis-ci.org—The main website for Travis CI

https://spinnaker.io—The main website for Spinnaker

https://launchpad.net—An open source website that hosts code, tracks bugs, and helps developers and users collaborate and share

https://launchpad.net/ubuntu/+ppas—Personal package archives that allow source code to be uploaded and built into .deb packages and made available to others as an apt repository

https://launchpad.net/bikeshed—The main page for Bikeshed

https://developer.android.com/—The main website for Android development. Most of this chapter could not exist if it were not for this site, which goes into much greater detail than this short introduction and is where we learned most of what we know on the subject.

https://developer.android.com/sdk/—The main web page for Android Studio.
