Making Web Requests with cURL

cURL is a powerful command-line tool for interacting with web servers. Using cURL, you can make requests and view the responses, download files, grab information about remote servers, or interact with remote APIs. cURL does exactly what your web browser does, except that it doesn’t render the HTML. Let’s explore how it works.

You’ll use cURL to make a request to a URL. Execute this command in your terminal to make a request to Google’s home page:

 $ curl http://google.com

Responses from web servers come in two parts: the response header and the response body. cURL shows the response body by default:

 <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
 <TITLE>301 Moved</TITLE></HEAD><BODY>
 <H1>301 Moved</H1>
 The document has moved
 <A HREF="http://www.google.com/">here</A>.
 </BODY></HTML>

In this case, the response body contains some HTML, and it looks like the Google homepage exists at a different address.

You can use the -I switch to request only the headers from a web server. This lets you see what kind of HTTP response you got, as well as more information about the server's response.

 $ curl -I http://google.com

The response headers tell us a ton about the web server that hosts the page. You can see the character set of the response, when the response was generated and when it expires, and even the server software that was used:

 HTTP/1.1 301 Moved Permanently
 Location: http://www.google.com/
 Content-Type: text/html; charset=UTF-8
 Date: Sun, 03 Mar 2019 14:46:24 GMT
 Expires: Sun, 03 Mar 2019 14:46:24 GMT
 Cache-Control: public, max-age=2592000
 Server: gws
 Content-Length: 219
 X-XSS-Protection: 1; mode=block
 X-Frame-Options: SAMEORIGIN

The first line shows you the HTTP status code. The code 200 means it was a successful request. The status code 404 means the page wasn’t found. You’ve probably seen that message before.

If you see a 500, there was a problem with the server itself, perhaps caused by a misconfiguration or an error in the server-side code powering the application you've connected to. And if you see a 301 or 302, the page has moved to a new URL.
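If all you care about is the status code, one way to grab it is with cURL's -w (write-out) switch, which can print values such as %{http_code} after a request finishes. In this sketch, -s hides the progress output and -o /dev/null discards the response body; these flags are just one way to do it:

 $ curl -s -o /dev/null -w "%{http_code}\n" http://google.com
 301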

In this case, you see HTTP/1.1 301 Moved Permanently. When a server sends that response back, it also sends another header named Location, which specifies where you should find the page now. Sure enough, that's the second line of the response:

 Location: http://www.google.com/

From this, you can see that Google has set up a permanent redirection from http://google.com to http://www.google.com.

Many websites redirect requests from one page to another. Sometimes it's to redirect people to the new location of some content when the URL changed. Other times it's to redirect a request for an insecure resource to a secure resource. Web browsers take care of following those redirects for you automatically, so you barely notice. Visit http://google.com/ in your browser and you'll see that the URL does indeed change from http://google.com to http://www.google.com. Your browser inspected the headers and used the value of the Location header.

You can make cURL do this too if you use the -L switch. Try it out:

 $ curl -I -L http://google.com

You’ll see the first request like before, followed by a second request:

 HTTP/1.1 301 Moved Permanently
 Location: http://www.google.com/
 ...
 
 HTTP/1.1 200 OK
 Date: Sun, 03 Mar 2019 19:19:47 GMT

cURL is great for inspecting headers and making requests. It's also good for fetching files.

Downloading Files

Instead of displaying the response to the screen, you can use the redirection symbol (>) to push that content into a text file. You can grab a copy of the HTML response from Google this way:

 $ curl http://google.com > google.html

The output now exists in the file google.html. You can verify this by using the cat command to view the file’s contents.
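For example, assuming the download worked, viewing the file should show the same response body you saw earlier:

 $ cat google.html
 <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
 <TITLE>301 Moved</TITLE></HEAD><BODY>
 <H1>301 Moved</H1>
 The document has moved
 <A HREF="http://www.google.com/">here</A>.
 </BODY></HTML>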

You can also download the file directly. The -o switch lets you specify the filename you want to save the response to:

 $ curl -o google.html http://google.com

As you saw in Downloading Files, you can use cURL to download any file you want if you know the URL of the file. For example, if you needed Ubuntu 16.04 for a project, you could use cURL to download it like this:

 $ curl -O -L http://releases.ubuntu.com/16.04/ubuntu-16.04.6-desktop-amd64.iso

You’re using the -O switch this time. That’s a capital letter O, not a zero. This tells cURL to use the remote filename so you don’t have to specify your own. You also use the -L switch, just in case you encounter any redirects.

You probably wouldn’t type that out yourself, but you might save it to a script and run it later.
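As a minimal sketch, such a script could be nothing more than that command saved in a file; the filename get-ubuntu.sh here is just an illustration:

 #!/bin/sh
 # Download the Ubuntu 16.04 desktop image, keeping the remote
 # filename (-O) and following any redirects (-L).
 curl -O -L http://releases.ubuntu.com/16.04/ubuntu-16.04.6-desktop-amd64.iso

You could then run it later with sh get-ubuntu.sh.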

You can use cURL for a lot more than just downloading and reading files, though.
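For example, you can send data to an API with a POST request. This sketch assumes the public httpbin.org testing service as a stand-in endpoint, which echoes the request back as JSON; -X sets the request method, -H adds a request header, and -d supplies the request body:

 $ curl -X POST -H "Content-Type: application/json" -d '{"name": "curl"}' https://httpbin.org/post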
