Chapter 6. Additional Build Tooling and Skills

Because of the increase in popularity of ideas like DevOps and Site Reliability Engineering (SRE), the modern Java developer can rarely rely on coding exclusively in Java in order to accomplish tasks, particularly in relation to building, testing, and deploying code. In this chapter, you will learn more about operating systems and associated tooling for building and running diagnostics on Java applications.

Linux, Bash, and Basic CLI Commands

Linux, Bash, and command-line skills are essential for installing development tools, configuring external build steps, and understanding and managing the underlying operating system environment. Even if you work on a Microsoft Windows development machine, which has excellent PowerShell support as an alternative to Bash, it is useful to understand the Linux OS because many platforms utilize this. Knowledge acquired from learning core Bash skills can often be easily transferred to the Windows environment.

Users, Permissions, and Groups

Linux operating systems (OSs) have the ability to multitask in a manner similar to other operating systems, and from its inception Linux was designed to allow more than one user to have access to the system at the same time. In order for this multiuser design to work properly, there needs to be a method to protect users from each other. Understanding this concept of users is vital for implementing CD, as you will often be creating build pipelines that run as multiple different users. Often, developers run Java applications in all environments as the root user—an all-powerful user that by default has access to all files and commands on a Linux OS—perhaps because they are used to doing this locally. But you really should be using a specific user with minimal permissions for running applications in production.

Users and permissions

To create a new standard user, use the useradd command: useradd <name>. The useradd command accepts a variety of options:

-d <home_dir>

home_dir will be used as the value for the user’s login directory.

-e <date>

The date when the account will expire.

-f <inactive>

The number of days after the password expires until the account is disabled.

-s <shell>

Sets the default shell type.

You will need to set a password for the new user by using the passwd command (note that you will need root privileges to change another user’s password): passwd <username>. The user will be able to change their password at any time by using the passwd command without specifying a username. A user account and associated password will allow authentication, but you will need to use permissions in order to authorize a user’s activities, such as manipulating files. Permissions are the “rights” to act on a file or directory. The basic rights are read, write, and execute:

Read

A readable permission allows the contents of the file to be viewed. A read permission on a directory allows you to list the contents of a directory.

Write

A write permission on a file allows you to modify the contents of that file. For a directory, the write permission allows you to edit the contents of a directory (e.g., add/delete files).

Execute

For a file, the executable permission allows you to run the file and execute a program or script. For a directory, the execute permission allows you to change into that directory and make it your current working directory.

Users usually have a default group, but they may belong to several additional groups.

To view the permissions on a file or directory, issue the command ls -l <directory/file>, as shown in Example 6-1.

Example 6-1. Examining file permissions
(master *+) conferencemono $ ls -l
total 32
-rw-r--r--  1 danielbryant  staff  9798 31 Oct 16:17 conferencemono.iml
-rw-r--r--  1 danielbryant  staff  2735 31 Oct 16:16 pom.xml
drwxr-xr-x  4 danielbryant  staff   136 31 Oct 09:16 src
drwxr-xr-x  6 danielbryant  staff   204 31 Oct 09:37 target

The first 10 characters show the access permissions. The first dash (-) indicates the type of file (d for directory, s for special file, and - for a regular file). The next three characters (rw-) define the owner’s permission to the file. In the preceding example, for the pom.xml file, the file owner danielbryant has read and write permissions only. The next three characters (r--) are the permissions for the members of the same staff group as the file owner, which in this example is read-only. The last three characters (r--) show the permissions for all other users, and in this example it is read-only.

You can change a file’s permissions and ownership with the chmod and chown commands, respectively. The command chmod is short for change mode, and can be used to change permissions on files and directories. By default, all files are “owned” by the user who creates them and by that user’s default group. To change the ownership of a file, use the chown command in the chown user:group /path/to/file format. To change the ownership of a directory and all the files contained inside, you can use the recursive option with the -R flag: chown -R danielbryant:staff /opt/application/config/. If a file is not owned by you, you will need root account access in order to change permissions or ownership; however, you don’t necessarily need to log in as root in order to achieve this.
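The chmod and chown usage described above can be sketched as follows. This is a minimal, safe example: it runs in a throwaway temporary directory, and the file name deploy.sh is hypothetical.

```shell
# Work in a throwaway directory so no real files are touched
cd "$(mktemp -d)"

touch deploy.sh          # create an empty (hypothetical) script file
chmod 640 deploy.sh      # octal form: owner rw-, group r--, others ---
stat -c '%a' deploy.sh   # prints 640

chmod u+x deploy.sh      # symbolic form: add execute permission for the owner
stat -c '%a' deploy.sh   # prints 740

# Changing ownership to your own user is a harmless no-op; changing it
# to another user or group would require root (or sudo) privileges
chown "$(id -un)" deploy.sh
ls -l deploy.sh
```

The octal form (640, 740) encodes the read/write/execute bits for owner, group, and others in a single number, while the symbolic form (u+x) adjusts one class of permissions at a time.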

Understanding sudo—superuser do

The root user account is the superuser and has the ability to do anything on a system. Therefore, in order to protect against potential damage, the sudo command is often used in place of root. sudo allows users and groups access to commands they normally would not be able to use, and allows a user to have administration privileges without logging in as root, from which it is all too easy to accidentally do all kinds of damage to the underlying OS and configuration.

A common example of the sudo command used within continuous delivery is when installing software into a virtual machine or container: sudo apt-get install <package> for Ubuntu or Debian, and sudo yum install <package> for Red Hat or CentOS distributions. To provide a user with sudo ability, their name will need to be added to the sudoers file. This file is important and should not be edited directly with a text editor; if the sudoers file is edited incorrectly, it could prevent all access to the system. Accordingly, the visudo command should be used to edit the sudoers file. If you are initializing a system, you will need to log in as root and enter the command visudo. Once your user has sudo privileges, you can instead run sudo visudo. Example 6-2 shows an example of a sudoers file.

Example 6-2. Portion of the sudoers file that shows the users with sudo access
# User privilege specification
root    ALL=(ALL:ALL) ALL
danielbryant  ALL=(ALL:ALL) ALL
ashleybryant  ALL=(ALL:ALL) ALL
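Rather than granting full root rights, sudoers entries can also restrict a user to specific commands. For example, a build user could be allowed to restart a single service without a password prompt with a line like the following (the buildbot user and myapp service names here are hypothetical):

```
# Allow the buildbot user to restart the application service only,
# without being prompted for a password
buildbot ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart myapp
```

This follows the principle of least privilege discussed earlier: the build pipeline gets exactly the one privileged operation it needs, and nothing more.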

Working with groups

Groups in Linux are simply a (potentially) empty collection of users, which can be used to manage several users at once or allow multiple independent user accounts to collaborate and share files. Every user has a default or primary group, and control of group membership is administered through the /etc/group file, which shows a list of groups and their members. When a user logs in, the group membership is set for their primary group. This means that when a user launches a program or creates a file, both the file and the running program will be associated with the user’s current group membership. This is an important concept within continuous delivery, as it means that if your user starts a process (a Java application, a build server, test execution, or indeed any process), this process inherits your permissions. If you are running with generous permissions, it can mean that the process you started can do a lot of damage!

A user may access other files in other groups, as long as they are also a member of that group and the access permissions are set. To run programs or create a file in a different group, you must run the newgrp command to switch your current group (e.g., newgrp <group_name>). If you are a member of the group_name in the /etc/group file, the current group membership will change. It is important to note that any files created will now be associated with the new group rather than your primary group. You can also change the group ownership of an existing file by using the chgrp command: chgrp <new_group> <file>.
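You can inspect this group membership in practice with a few read-only commands, which are safe to run on any machine (creating groups with groupadd or adding members with usermod would require root):

```shell
id -gn               # prints your primary (default) group
id -Gn               # prints all groups you belong to, separated by spaces
groups               # similar output to id -Gn
getent group | head -5   # sample of the groups defined in /etc/group
```

Checking the output of id -Gn on a build server is a quick way to confirm that the pipeline user has (only) the group memberships you expect.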

Working with the Filesystem

The most fundamental skills you need to master are moving around the filesystem and getting an idea of what is around you. When you log into your server (for example, a new build server), you are typically dropped into your user account’s home directory.

Navigating directories

A home directory is a directory set aside for your user to store files and create directories. To find out where your home directory is in relationship to the rest of the filesystem, you can use the pwd command. This command displays the directory that you are currently in, as shown in Example 6-3.

Example 6-3. Using pwd to see the location of your current directory in the filesystem
(master *+) conferencemono $ pwd
/Users/danielbryant/Documents/dev/daniel-bryant-uk/
oreilly-book-support/conferencemono

You can view the contents of the current directory with the ls command. The ls command has many useful flags, and it is common to use ls -lsa in order to view more details about the files (-l lists files in long format, -s shows file sizes, and -a shows all files, including hidden files), as shown in Example 6-4.

Example 6-4. Using ls in order to see the contents of the current directory
(master *+) conferencemono $ ls
conferencemono.iml pom.xml            src                target
(master *+) conferencemono $ ls -lsa
total 32
 0 drwxr-xr-x   7 danielbryant  staff   238 31 Oct 16:17 .
 0 drwxr-xr-x   8 danielbryant  staff   272 31 Oct 09:48 ..
 0 drwxr-xr-x  11 danielbryant  staff   374  3 Jan 09:30 .idea
24 -rw-r--r--   1 danielbryant  staff  9798 31 Oct 16:17 conferencemono.iml
 8 -rw-r--r--   1 danielbryant  staff  2735 31 Oct 16:16 pom.xml
 0 drwxr-xr-x   4 danielbryant  staff   136 31 Oct 09:16 src
 0 drwxr-xr-x   6 danielbryant  staff   204 31 Oct 09:37 target
(master *+) conferencemono $ 

You can navigate directories by using the cd <directory name> command, as shown in Example 6-5. cd .. moves you up one directory, and by combining cd with ls and pwd, you can easily view files and not get lost.

Creating and manipulating files

The most basic method of creating a file is with the touch command. This creates an empty file using the name and location specified: touch <file_name>. You will need to have write permissions for the directory in which you are currently located for this to succeed. You can also “touch” an existing file, and this will simply update the last accessed and last modified times of the file to the current time. Many operations within continuous delivery monitor the last modified date and use any change as a trigger for an arbitrary operation, and using touch can short-circuit the check and cause the operation to run. Similar to the touch command, you can use the mkdir command to create empty directories: mkdir example. You can also create a nested directory structure by using the -p flag (otherwise, you will get an error, as mkdir can create a directory only within a directory that already exists): mkdir -p deep/nested/directories.

You can move a file to a new location by using the mv command: mv file ./some/existing_dir. (This command succeeds only if the ./some/existing_dir directories already exist.) Perhaps somewhat confusingly, mv can also be used to rename files: mv original_name new_name. You are responsible for ensuring that these operations will not do anything destructive—for example, mv can be used to overwrite existing files, which cannot be recovered! In a similar fashion, you can copy files by using the cp command: cp original_file new_copy_file. To copy directories, you must include the -r option to the command. This stands for recursive, as it copies the directory, plus all of the directory’s contents. This option is necessary with directories, regardless of whether the directory is empty: cp -r existing_directory location_for_deep_copy. To delete a file, you can use the rm command. If you want to remove a non-empty directory, you will have to use the -r flag, which removes all of the directory’s contents recursively, plus the directory itself.
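The file-manipulation commands above can be exercised together in one short session. This sketch uses hypothetical file and directory names, and deliberately runs inside a temporary directory so that nothing real can be overwritten or deleted:

```shell
cd "$(mktemp -d)"              # scratch directory, so nothing real is at risk

touch notes.txt                # create an empty file
mkdir -p deep/nested/dirs      # create a nested directory structure in one go

mv notes.txt deep/nested/dirs/ # move the file into the existing directory tree
cp -r deep deep_copy           # recursively copy the whole directory

ls deep_copy/nested/dirs       # prints notes.txt: the copy has the file too

rm -r deep_copy                # recursively delete the copy (there is no undo!)
```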

There Is Often No Undo on the CLI

Please be extremely careful when using any destructive command like rm, and potentially mv and cp. There is no “undo” command for these actions, so it is possible for you to accidentally destroy important files permanently. If you’re anything like us, you’ll do this a few times before you fully learn your lesson! One tactic we use now is to list files we want to delete/move/replace before actually issuing the command. For example, if we want to delete all the files with a *.ini extension, but leave everything else intact, we will navigate to the appropriate directory and list (by using ls *.ini) and check the resulting files before issuing the remove command: rm *.ini.

Viewing and Editing Text

In contrast to some operating systems, Linux and other Unix-like operating systems rely on plain-text files for vast portions of the system, so it is important that you learn how to view text files via the command line. The basic mechanism to view a file’s contents on your terminal is by using the cat application (e.g., cat /etc/hosts). This doesn’t work well with large files, or files with text you want to search, and therefore it is common to also use a pager like less (e.g., less /etc/hosts). This opens the less program with the contents of the /etc/hosts file. You can navigate through the pages in the file with Ctrl-F and Ctrl-B (think forward and back). To search for text in the document, you can type a forward slash, /, followed by the search term (e.g., /localhost). If the file contains more than one instance of the string being searched for, you can press n to move to the next match, and Shift-N to move back to the previous match. When you wish to exit the less program, you can type q to quit.

When attempting to use continuous delivery or build tools, it is a common requirement to be either able to view the first few lines or the last few lines of a large file—this is especially common when looking at the first few lines of a configuration file, or the last few lines of a log file (and you might want to follow along as more lines are appended to the log file). The head and tail commands, respectively, can help a lot here: e.g., head /etc/hosts or tail server.log.
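A quick sketch of head and tail in action, using a fabricated 100-line "log file" (the server.log name is hypothetical):

```shell
cd "$(mktemp -d)"
seq 1 100 > server.log    # fabricate a log file containing the lines 1..100

head -n 3 server.log      # first three lines: 1, 2, 3
tail -n 2 server.log      # last two lines: 99, 100

# tail -f server.log      # would follow the file, printing new lines as
                          # they are appended -- ideal for watching live logs
```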

Programs like cat, less, head, and tail provide only read-only access to a file’s contents. If you need to edit a text file by using the command line, you can use the vi, vim, emacs, or nano programs. Each Linux distribution provides different tools by default, although you can usually install your favorite via a package manager (provided your user has appropriate permissions).

Joining Everything Together: Redirects, Pipes, and Filters

Linux also has a powerful concept of redirects, pipes, and filters that allow you to combine simple command-line programs to perform complicated processing and filter the output (and text contents within the files) at any point within the processing steps. More information on this can be found in the Linux Pocket Guide (O’Reilly) by Daniel Barrett, but several examples are included here to demonstrate the power:

ls > output.log

Redirects (saves) the contents of the ls command to the text file output.log, overwriting any content that exists in this file.

ls >> output.log

Redirects (saves) the contents of the ls command to the text file output.log, appending the new text to the existing contents of the file.

ls | less

Pipes the output of the ls command to the less pager, allowing you to page up and down through long directory listings, as well as search the content.

ls | head -3

Pipes the output of the ls command to the head command (showing the top three lines only).

ls | head -3 | tail -1 > output.log

Pipes the output of the ls command to head, which takes the top three lines and pipes this to tail, which takes the bottom one line and redirects (saves) this to the output.log file.

cat < input.log

Redirects (loads) the contents of input.log into cat. This example appears trivial, as you don’t need to redirect the contents of a file into cat for it to be displayed, but the receiving program can be more complicated than cat. For example, this technique can be used to redirect (load) a database dump file into the MySQL command-line program.
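These redirections also extend to the standard error stream, which is worth knowing when capturing build or test logs. A small sketch in a scratch directory (the file names are hypothetical):

```shell
cd "$(mktemp -d)"

ls > output.log               # stdout redirected, overwriting the file
ls >> output.log              # stdout redirected, appending to the file

# stderr is stream 2; redirect it separately from stdout
ls missing_file 2> errors.log || true

# 2>&1 sends stderr to wherever stdout is going: both end up in all.log
ls missing_file > all.log 2>&1 || true

wc -l output.log              # the file now holds both listings (2 lines)
cat errors.log                # the "No such file or directory" message
```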

Searching and Manipulating Text: grep, awk, and sed

Some of the most basic—but also the most powerful—tooling in Linux searches and manipulates text. grep is a command-line utility for searching plain-text data sets for lines that match a regular expression. awk is a programming language designed for text processing and typically used as a data extraction and reporting tool. sed is a stream editor utility that parses and transforms text, using a simple, compact programming language. These tools are extremely useful when building continuous delivery pipelines or diagnosing issues on Linux machines. The following are examples:

grep "literal_string" filename

Searches for the exact match of literal_string in the file specified by filename.

grep "REGEX" filename

Searches for the regular expression REGEX in the file specified by filename.

grep -iw "is" demo_file

A case-insensitive search (-i) for an exact match of the word (-w) is in the file specified by demo_file. The word must have whitespace or punctuation on either side of it to match.

awk '{print $3 " " $4}' data_in_rows.txt

Prints to the terminal the third and fourth columns of each row of data within the file specified, separated by a space.

sed 's/regexp/replacement/g' inputFileName > outputFileName

Globally (/g) replaces (s/) text matching the regular expression (regexp) with the (replacement) string in the inputFileName file and redirects—or saves—the results in outputFileName.
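These three tools are most powerful when combined in a pipeline, for example, when inspecting application logs in a CD pipeline. The sketch below fabricates a small log file (the contents and the app.log name are hypothetical) and then applies grep, awk, and sed to it:

```shell
cd "$(mktemp -d)"

# Fabricate a small log file for the demonstration
cat > app.log <<'EOF'
2017-01-26 19:48:46 INFO  Application started
2017-01-26 19:49:02 ERROR Connection refused
2017-01-26 19:49:05 ERROR Connection refused
2017-01-26 19:50:11 INFO  Retrying
EOF

grep "ERROR" app.log                    # lines matching the literal string
grep -c "ERROR" app.log                 # just the count of matches: 2

awk '{print $2 " " $3}' app.log         # second and third columns of each row

sed 's/ERROR/WARN/g' app.log > out.log  # replace globally, save the result
grep -c "WARN" out.log                  # 2
```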

Diagnostic Tooling: top, ps, netstat, and iostat

The following list should be enough to get you started diagnosing issues with Linux machines, and all of these tools should be available on a standard Linux distro (or easily installed via a package manager). You can find more details of each command in the Linux Pocket Guide or by using the man tool to view the corresponding manual pages:

top

Allows you to view all processes running on the (virtual) machine.

ps

Lists all of the processes running on the (virtual) machine.

netstat

Allows you to view all network connections on the (virtual) machine.

iostat

Lists all of the I/O statistics of block devices (disk drives) connected to the (virtual) machine.

dig or nslookup

Provide information on DNS records for a domain.

ping

Checks whether an IP address or domain name can be reached over the network.

traceroute (tracert on Windows)

Allows you to trace the route of an IP packet within the network (both an internal network and the internet).

tcpdump

Allows you to spy on TCP network traffic. This is typically an advanced tool, but can be useful in cloud environments where much of the communication occurs via TCP.

strace

Allows you to trace system calls. This is typically an advanced tool, but can be invaluable for debugging container or security issues.
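A few read-only invocations of these tools, safe to run on most distributions (netstat is guarded because some minimal installations omit it):

```shell
# List processes: PID, owning user, and command name
ps -eo pid,user,comm | head -5

# Search for Java processes; the [j] bracket trick prevents the grep
# process itself from appearing in the results
ps aux | grep '[j]ava' || true

# Show listening TCP sockets, if netstat is installed on this machine
{ command -v netstat >/dev/null && netstat -tln | head -5; } || true
```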

If you are working a lot with containers, you may require additional tooling, as some of the preceding programs (and additional diagnostic tooling) do not work correctly with container technology. The following list includes several tools that we have found useful for understanding container runtimes:

sysdig

A useful container-aware diagnostic tool

systemd-cgtop

A systemd-specific tool to view top-like data within cgroups (containers)

atomic top and docker top

Useful utilities by Red Hat and Docker, respectively, that allow you to examine processes running within containers

Several Older Diagnostic Tools Are Not Container Friendly!

Several of the popular diagnostic tools were created before container technology like Docker was created (or became popular), and therefore they don’t work as you might expect. For example, containers run as a process, and the programs within each container namespace also run as processes. Some tools may not be able to distinguish that the programs running within the OS are separated via namespaces.

HTTP Calls and JSON Manipulation

Many of the third-party services you interact with use HTTP/S as the transport protocol and JSON as the data format. Therefore, it makes sense to become comfortable working with these technologies, and also to develop skills that allow you to quickly experiment and test ideas without needing to build an entire application in Java.

curl

The Linux curl command is a useful tool for testing web-based REST-like APIs. Example 6-6 shows usage of curl against the GitHub API.

Example 6-6. curl making a request against the GitHub API repos endpoint
$ curl 'https://api.github.com/repos/danielbryantuk/
oreilly-docker-java-shopping/commits?per_page=1'
[
  {
    "sha": "3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
    "commit": {
      "author": {
        "name": "Daniel Bryant",
        "email": "[email protected]",
        "date": "2017-01-26T19:48:46Z"
      },
      "committer": {
        "name": "Daniel Bryant",
        "email": "[email protected]",
        "date": "2017-01-26T19:48:46Z"
      },
      "message": "Update Vagrant Box Ubuntu and Docker Compose. Remove sudo usage",
      "tree": {
        "sha": "24eb583bd834734ae9b6c8133c99e4791a7387e8",
        "url": "https://api.github.com/repos/danielbryantuk/↵
        oreilly-docker-java-shopping/git/trees/↵
        24eb583bd834734ae9b6c8133c99e4791a7387e8"
      },
      "url": "https://api.github.com/repos/danielbryantuk/↵
      oreilly-docker-java-shopping/git/commits/↵
      3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
      "comment_count": 0
    },
    "url": "https://api.github.com/repos/danielbryantuk/↵
    oreilly-docker-java-shopping/commits/↵
    3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
    "html_url": "https://github.com/danielbryantuk/↵
    oreilly-docker-java-shopping/commit/↵
    3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
    "comments_url": "https://api.github.com/repos/danielbryantuk/↵
    oreilly-docker-java-shopping/commits/↵
    3182a8a5fc73d2125022bf317ac68c3b1f4a3879/comments",
    "author": {
      "login": "danielbryantuk",
      ...
    },
    "committer": {
      "login": "danielbryantuk",
      ...
    },
    "parents": [
      {
        "sha": "05b73d1f0c9904e6904d3f1bb8f13384e65e7840",
        "url": "https://api.github.com/repos/danielbryantuk/↵
        oreilly-docker-java-shopping/commits/↵
        05b73d1f0c9904e6904d3f1bb8f13384e65e7840",
        "html_url": "https://github.com/danielbryantuk/↵
        oreilly-docker-java-shopping/commit/↵
        05b73d1f0c9904e6904d3f1bb8f13384e65e7840"
      }
    ]
  }
]

The preceding example made a GET request to the repos endpoint and displayed the JSON response. You can also use curl to get more detail from the endpoint response, such as the HTTP status code, the content length, and any additional header information like rate-limiting. The -I flag makes a HEAD request against the specified URI and displays the response, as shown in Example 6-7.

Example 6-7. Using curl to obtain additional information about an endpoint response (by making a HEAD request)
$ curl -I 'https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/commits?per_page=1'
HTTP/1.1 200 OK
Server: GitHub.com
Date: Thu, 21 Sep 2017 08:28:06 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 3861
Status: 200 OK
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 51
X-RateLimit-Reset: 1505983279
Cache-Control: public, max-age=60, s-maxage=60
Vary: Accept
ETag: "4ce9e0d9cf4e2339bbc1f0fd028904c4"
Last-Modified: Thu, 26 Jan 2017 19:48:46 GMT
X-GitHub-Media-Type: github.v3; format=json
Link: <https://api.github.com/repositories/67352921/↵
commits?per_page=1&page=2>;
rel="next", <https://api.github.com/repositories/67352921/↵
commits?per_page=1&page=43>; rel="last"
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP,↵
 X-RateLimit-Limit,
X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes,↵
 X-Accepted-OAuth-Scopes, X-Poll-Interval
Access-Control-Allow-Origin: *
Content-Security-Policy: default-src 'none'
Strict-Transport-Security: max-age=31536000;↵
 includeSubdomains; preload
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-XSS-Protection: 1; mode=block
X-Runtime-rack: 0.037989
X-GitHub-Request-Id: FC09:1342:19A449:37EA9C:59C37816

If you want additional information about an endpoint’s response, but don’t want to make a HEAD request, you can also use the verbose mode of curl via the -v flag. This uses the HTTP method specified (the default of which is GET), but provides much more detail in the response in addition to the JSON payload, as shown in Example 6-8.

Example 6-8. Using curl with the verbose flag set
$ curl -v 'https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/commits?per_page=1'
*   Trying 192.30.253.117...
* TCP_NODELAY set
* Connected to api.github.com (192.30.253.117) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.github.com
* Server certificate: DigiCert SHA2 High Assurance Server CA
* Server certificate: DigiCert High Assurance EV Root CA
> GET /repos/danielbryantuk/oreilly-docker-java-shopping/↵
commits?per_page=1 HTTP/1.1
> Host: api.github.com
> User-Agent: curl/7.54.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: GitHub.com

Finally, you can also use curl to download files over HTTP/S and FTP, as shown in Example 6-9.

Example 6-9. Using curl to download a file over HTTPS and FTP
curl -O https://domain.com/file.zip
curl -O ftp://ftp.uk.debian.org/debian/pool/main/alpha.zip

The curl command also supports ranges. Example 6-10 demonstrates how you would list files via FTP in the debian/pool/main directory whose filename starts with the letters a to c.

Example 6-10. Listing files via FTP
$ curl ftp://ftp.uk.debian.org/debian/pool/main/[a-c]/

The curl command is a powerful tool. It is available on all modern Linux and macOS distributions by default, and also installable on Windows. However, there is a newer tool that can be much more intuitive to use: HTTPie.

HTTPie

HTTPie is a command-line HTTP client with an intuitive UI, JSON support, syntax highlighting, wget-like downloads, plugins, and more. It can be installed on macOS, Linux, or Windows, and its http command offers expressive, intuitive syntax and sensible defaults, as shown in Example 6-11.

Example 6-11. Using HTTPie to curl the GitHub API
$ http 'https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/commits?per_page=1'
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP,↵
 X-RateLimit-Limit,
X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes,↵
 X-Accepted-OAuth-Scopes, X-Poll-Interval
Cache-Control: public, max-age=60, s-maxage=60
Content-Encoding: gzip
Content-Security-Policy: default-src 'none'
Content-Type: application/json; charset=utf-8
Date: Thu, 21 Sep 2017 08:03:10 GMT
ETag: W/"4ce9e0d9cf4e2339bbc1f0fd028904c4"
Last-Modified: Thu, 26 Jan 2017 19:48:46 GMT
Link: <https://api.github.com/repositories/67352921/↵
commits?per_page=1&page=2>;
rel="next", <https://api.github.com/repositories/67352921/↵
commits?per_page=1&page=43>; rel="last"
Server: GitHub.com
Status: 200 OK
Strict-Transport-Security: max-age=31536000;↵
 includeSubdomains; preload
Transfer-Encoding: chunked
Vary: Accept
X-Content-Type-Options: nosniff
X-Frame-Options: deny
X-GitHub-Media-Type: github.v3; format=json
X-GitHub-Request-Id: F159:1345:23A3CA:4C9FC3:59C3723E
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 54
X-RateLimit-Reset: 1505983279
X-Runtime-rack: 0.029789
X-XSS-Protection: 1; mode=block

[
    {
        "author": {
            "avatar_url": "https://avatars2.githubusercontent.com/u/2379163?v=4",
            "events_url": "https://api.github.com/users/danielbryantuk/↵
                           events{/privacy}",
            "followers_url": "https://api.github.com/users/danielbryantuk/↵
                              followers",
            "following_url": "https://api.github.com/users/danielbryantuk/↵
                              following{/other_user}",
            "gists_url": "https://api.github.com/users/danielbryantuk/↵
                          gists{/gist_id}",
            "gravatar_id": "",
            "html_url": "https://github.com/danielbryantuk",
            "id": 2379163,
            "login": "danielbryantuk",
            "organizations_url": "https://api.github.com/users/danielbryantuk/orgs",
            "received_events_url": "https://api.github.com/users/danielbryantuk/↵
                                    received_events",
            "repos_url": "https://api.github.com/users/danielbryantuk/repos",
            "site_admin": false,
            "starred_url": "https://api.github.com/users/danielbryantuk/↵
                            starred{/owner}{/repo}",
            "subscriptions_url": "https://api.github.com/users/danielbryantuk/↵
                                  subscriptions",
            "type": "User",
            "url": "https://api.github.com/users/danielbryantuk"
        },
        "comments_url": "https://api.github.com/repos/danielbryantuk/↵
                         oreilly-docker-java-shopping/commits/↵
                         3182a8a5fc73d2125022bf317ac68c3b1f4a3879/comments",
        "commit": {
            "author": {
                "date": "2017-01-26T19:48:46Z",
                "email": "[email protected]",
                "name": "Daniel Bryant"
            },
            "comment_count": 0,
            "committer": {
                "date": "2017-01-26T19:48:46Z",
                "email": "[email protected]",
                "name": "Daniel Bryant"
            },
            "message":↵
            "Update Vagrant Box Ubuntu and Docker Compose. Remove sudo usage",
            "tree": {
                "sha": "24eb583bd834734ae9b6c8133c99e4791a7387e8",
                "url": "https://api.github.com/repos/danielbryantuk/↵
                oreilly-docker-java-shopping/git/trees/↵
                24eb583bd834734ae9b6c8133c99e4791a7387e8"
            },
            "url": "https://api.github.com/repos/danielbryantuk/↵
            oreilly-docker-java-shopping/git/commits/↵
            3182a8a5fc73d2125022bf317ac68c3b1f4a3879"
        },
        "committer": {
            "avatar_url": "https://avatars2.githubusercontent.com/u/2379163?v=4",
            "events_url": "https://api.github.com/users/danielbryantuk/↵
                           events{/privacy}",
            "followers_url": "https://api.github.com/users/danielbryantuk/↵
                              followers",
            "following_url": "https://api.github.com/users/danielbryantuk/↵
                              following{/other_user}",
            "gists_url": "https://api.github.com/users/danielbryantuk/↵
                          gists{/gist_id}",
            "gravatar_id": "",
            "html_url": "https://github.com/danielbryantuk",
            "id": 2379163,
            "login": "danielbryantuk",
            "organizations_url": "https://api.github.com/users/danielbryantuk/orgs",
            "received_events_url": "https://api.github.com/users/danielbryantuk/↵
                                    received_events",
            "repos_url": "https://api.github.com/users/danielbryantuk/repos",
            "site_admin": false,
            "starred_url": "https://api.github.com/users/danielbryantuk/↵
                            starred{/owner}{/repo}",
            "subscriptions_url": "https://api.github.com/users/danielbryantuk/↵
                                  subscriptions",
            "type": "User",
            "url": "https://api.github.com/users/danielbryantuk"
        },
        "html_url": "https://github.com/danielbryantuk/↵
        oreilly-docker-java-shopping/commit/↵
        3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
        "parents": [
            {
                "html_url": "https://github.com/danielbryantuk/↵
                oreilly-docker-java-shopping/commit/↵
                05b73d1f0c9904e6904d3f1bb8f13384e65e7840",
                "sha": "05b73d1f0c9904e6904d3f1bb8f13384e65e7840",
                "url": "https://api.github.com/repos/danielbryantuk/↵
                oreilly-docker-java-shopping/commits/↵
                05b73d1f0c9904e6904d3f1bb8f13384e65e7840"
            }
        ],
        "sha": "3182a8a5fc73d2125022bf317ac68c3b1f4a3879",
        "url": "https://api.github.com/repos/danielbryantuk/↵
        oreilly-docker-java-shopping/↵
        commits/3182a8a5fc73d2125022bf317ac68c3b1f4a3879"
    }
]

HTTPie also supports making requests against an authenticated endpoint, as shown in Example 6-12 (additional authentication plugins can be found on the HTTPie website).

Example 6-12. Using HTTPie with basic authentication
$ http -a USERNAME:PASSWORD POST https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/issues/1/comments body='HTTPie is awesome! :heart:'

Sending headers in a request is also much easier to manage using HTTPie than with curl, as shown in Example 6-13.

Example 6-13. Sending headers in a request using HTTPie
$ http example.org User-Agent:Bacon/1.0 'Cookie:valued-visitor=yes;foo=bar'↵
X-Foo:Bar Referer:http://httpie.org/

Proxies can also be specified for both HTTP and HTTPS, as shown in Example 6-14.

Example 6-14. Using a proxy for HTTP and HTTPS
$ http --proxy=http:http://10.10.1.10:3128↵
--proxy=https:https://10.10.1.10:1080 example.org

Now that you are familiar with two tools for making requests against HTTP REST-like APIs, let’s look at a tool for manipulating JSON data: jq.

jq

jq is like sed for JSON data: you can use it to slice, filter, map, and transform structured data with the same ease that sed, awk, and grep let you manipulate text. Example 6-15 queries the GitHub API for details on commits, but displays only the first (0-indexed) result. Note that all of the response data is still sent over the wire, because jq filters the data on the client side; this can be important when dealing with responses that have a large payload.

Example 6-15. Piping the output of curl to jq and displaying only the first result
$ curl 'https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/commits?per_page=1'| jq '.[0]'
{
  "sha": "9f3e6514a55011c26ca18a1a69111c0a418e6dea",
  "commit": {
    "author": {
      "name": "Daniel Bryant",
      "email": "[email protected]",
      "date": "2017-09-30T10:18:58Z"
    },
    "committer": {
      "name": "Daniel Bryant",
      "email": "[email protected]",
      "date": "2017-09-30T10:18:58Z"
    },
    "message": "Add first version of Kubernetes deployment config",
    "tree": {
      "sha": "7568df5f6bfe6725ad9fb82ac8cf8a0c0c4661ec",
      "url": "https://api.github.com/repos/danielbryantuk/↵
      oreilly-docker-java-shopping/git/trees/↵
      7568df5f6bfe6725ad9fb82ac8cf8a0c0c4661ec"
    },
    "url": "https://api.github.com/repos/danielbryantuk/↵
    oreilly-docker-java-shopping/git/commits/↵
    9f3e6514a55011c26ca18a1a69111c0a418e6dea",
    "comment_count": 0,
    "verification": {
      "verified": false,
      "reason": "unsigned",
      "signature": null,
      "payload": null
    }
  },
  "url": "https://api.github.com/repos/danielbryantuk/↵
  oreilly-docker-java-shopping/commits/9f3e6514a55011c26ca18a1a69111c0a418e6dea",
  "html_url": "https://github.com/danielbryantuk/↵
  oreilly-docker-java-shopping/commit/9f3e6514a55011c26ca18a1a69111c0a418e6dea",
  "comments_url": "https://api.github.com/repos/danielbryantuk/↵
  oreilly-docker-java-shopping/commits/9f3e6514a55011c26ca18a1a69111c0a418e6dea/↵
  comments",
  "author": {
  ...

jq can also be used to filter the JSON objects displayed. Example 6-16 builds on the preceding jq query by displaying only a few select fields within the first commit resource.

Example 6-16. curl against the GitHub API with jq filtering
$ curl 'https://api.github.com/repos/danielbryantuk/↵
oreilly-docker-java-shopping/commits?per_page=1'| jq '.[0] |↵
 {message: .commit.message, name: .commit.committer.name}'
{
  "message": "Update Vagrant Box Ubuntu and Docker Compose. Remove sudo usage",
  "name": "Daniel Bryant"
}
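
A jq filter can also map over every element of an array rather than selecting a single one. The snippet below is a minimal sketch that runs the same {message, name} projection against an inline JSON document standing in for an API response (the data here is invented for illustration):

```shell
# Inline JSON standing in for a GitHub commits response (invented data).
# '.[]' iterates over every array element; '-c' prints one compact
# JSON object per line.
echo '[{"commit": {"message": "first commit", "committer": {"name": "Daniel"}}},
       {"commit": {"message": "second commit", "committer": {"name": "Daniel"}}}]' \
  | jq -c '.[] | {message: .commit.message, name: .commit.committer.name}'
```

Emitting one compact object per line is a deliberate choice: the output then pipes neatly into further filters such as grep or wc -l.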

Using curl, HTTPie, and jq allows for quick experimentation and prototyping against a REST-like API, which can be an invaluable skill for a Java developer working with this technology.

Basic Scripting

Learning the basics of Bash scripting can be a useful skill for a Java developer. This knowledge can often be combined with tools like curl and jq to expand on and automate basic experimentation, testing, and build processes. The Classic Shell Scripting book elaborates on this concept in much more detail, but let’s take a closer look at several useful examples.

xargs

The xargs command can be used to build and execute command lines from standard input. This can be used to download a list of URLs that is contained within a text file named urls.txt, as shown in Example 6-17.

Example 6-17. Using xargs to download multiple files as specified within the urls.txt file
$ xargs -n 1 curl -O < urls.txt
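
Because each URL becomes an independent curl invocation, xargs can also parallelize the work. The variant below is a sketch, assuming a GNU or BSD xargs that supports the -P flag:

```shell
# -n 1 passes one URL per curl invocation; -P 4 keeps up to four
# downloads running in parallel.
xargs -n 1 -P 4 curl -O < urls.txt
```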

Pipes and Filters

Using pipes and filters can be a great way to chain simple commands to perform complicated processes. Example 6-18 shows how to use the curl command to make a silent HEAD request against https://www.twitter.com with the -L (follow redirects) flag, which reveals all of the steps within the HTTP flow of getting a response from the Twitter home page. The output of this command is then piped to grep in order to search for the pattern HTTP/.

Example 6-18. Using curl with grep to find the steps in the HTTP flow when accessing Twitter
$ curl -Is https://www.twitter.com -L | grep HTTP/
HTTP/1.1 301 Moved Permanently
HTTP/1.1 200 OK

The command in Example 6-19 can be used to extract the final location of a shortened URL.

Example 6-19. Unfurling a URL from a shortened form
$ curl -sIL buff.ly/2xrgUwi | grep ^Location
Location: https://skillsmatter.com/↵
skillscasts/10668-looking-forward-to-daniel-bryant-talk?↵
utm_content=buffer887ce&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer
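
The same filter style composes further: piping the output through sort and uniq -c, for example, tallies how often each status line appears across the redirect chain. This is a sketch, and the counts will depend on the URL being followed:

```shell
# Extract the status line from every hop, then count duplicates.
curl -Is https://www.twitter.com -L | grep 'HTTP/' | sort | uniq -c
```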

Loops

Simple loops using for within Bash can be used to repeatedly test an API quickly, perhaps confirming that the response is identical across multiple requests, or determining which status code is returned when the API is broken. See Example 6-20.

Example 6-20. Using a loop in Bash to repeatedly curl a URI
#!/bin/bash

for i in `seq 1 10`; do
    curl -I http://www.example.com
done
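
A small variation makes the output easier to scan when only the status matters: curl's -w write-out format can print just the response code on each iteration (-s silences progress output, and -o /dev/null discards the body):

```shell
#!/bin/bash

# Print one status code per request; ten identical 200s suggest a
# stable endpoint, while a mix of codes points at intermittent failure.
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{http_code}\n" http://www.example.com
done
```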

Conditionals

You can also add in conditional logic; for example, to check the status code returned from an API. Example 6-21 uses HTTPie and a simple Bash case statement to display additional details in the terminal about the HTTP status code returned from the call to the example.org URI.

Example 6-21. Simple Bash script using HTTPie and case to display additional information based on the HTTP response code
#!/bin/bash

if http --check-status --ignore-stdin \
--timeout=2.5 HEAD example.org/health &> /dev/null; then
    echo 'OK!'
else
    case $? in
        2) echo 'Request timed out!' ;;
        3) echo 'Unexpected HTTP 3xx Redirection!' ;;
        4) echo 'HTTP 4xx Client Error!' ;;
        5) echo 'HTTP 5xx Server Error!' ;;
        6) echo 'Exceeded --max-redirects=<n> redirects!' ;;
        *) echo 'Other Error!' ;;
    esac
fi
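
If HTTPie is not available, a similar check can be written with curl alone. The sketch below assumes the same hypothetical example.org/health endpoint; curl's -w '%{http_code}' captures just the status code, which a case statement can then branch on:

```shell
#!/bin/bash

# Capture only the numeric status code (curl prints 000 when the
# connection itself fails), then branch on the value.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2.5 http://example.org/health)

case "$status" in
    2??) echo 'OK!' ;;
    3??) echo 'Unexpected HTTP 3xx Redirection!' ;;
    4??) echo 'HTTP 4xx Client Error!' ;;
    5??) echo 'HTTP 5xx Server Error!' ;;
    *)   echo "Request failed (status: $status)" ;;
esac
```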

Summary

In this chapter, you have learned the fundamentals of additional skills and build tooling that will benefit your work as a modern Java developer:

  • Linux, Bash, and command-line skills are essential for installing development tools, configuring external build steps, and understanding and managing the underlying operating system environment.

  • Learning the basics of OS diagnostics tooling like top, ps, and netstat allows you to debug applications more effectively in test and production.

  • The curl, jq, and HTTPie tools are essential for viewing, manipulating, and debugging REST-like APIs.

Now that you have a good understanding of build tooling and skills, you can learn more about how Java applications are packaged for deployment across a range of platforms: from traditional infrastructure, to cloud, to containers and serverless.
