Using ZAP's spider

Downloading a full site to a directory on our computer leaves us with a static copy of the information; this means that we have the output produced by different requests, but we have neither those requests nor the server's response states. To keep a record of that information, we use spiders, such as the one integrated into OWASP ZAP.

In this recipe, we will use ZAP's spider to crawl a directory in our vulnerable_vm and check the information it captures.

Getting ready

For this recipe, we need to have the vulnerable_vm and OWASP ZAP running, and the browser should be configured to use ZAP as a proxy. This can be done by following the instructions given in the Finding files and folders with ZAP recipe in the previous chapter.

How to do it...

  1. With ZAP running and the browser using it as a proxy, browse to http://192.168.56.102/bodgeit/.
  2. In the Sites tab, open the folder corresponding to the test site (http://192.168.56.102 in this book).
  3. Right click on GET:bodgeit.
  4. From the drop-down menu, select Attack | Spider…
  5. In the dialog box, leave all the default options and click on Start Scan.
  6. The results will appear in the bottom panel in the Spider tab:
  7. If we want to analyze the requests and responses of individual files, we go to the Sites tab and open the site folder and then the bodgeit folder inside it. Let's take a look at POST:contact.jsp(anticsrf,comments,null):

    On the right side, we can see the full request that was made, including the parameters used (in the bottom half).

  8. Now, select the Response tab in the right section:

    In the top half, we can see the response header, including the server banner and the session cookie; in the bottom half, we have the full HTML response. In later chapters, we will see how a cookie obtained from an authenticated user can be used to hijack that user's session and perform actions impersonating them.

How it works...

Like any other crawler, ZAP's spider follows every link it finds in every page within the requested scope, as well as the links inside those pages. The spider also follows form responses, redirects, and the URLs included in robots.txt and sitemap.xml files. It then stores all the requests and responses for later analysis and use.
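The crawling logic described above can be sketched in a few lines of Python. This is a simplified illustration, not ZAP's actual implementation: it uses the standard library's HTML parser, a hypothetical `crawl` helper, and simulated page contents in place of real HTTP requests so the sketch is self-contained.

```python
# A minimal sketch of what a spider like ZAP's does: parse each fetched
# page for links and queue the in-scope ones it has not seen yet.
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def in_scope(url, scope):
    # A real spider also honors robots.txt, sitemap.xml, and redirects.
    return url.startswith(scope)

# Simulated responses instead of real HTTP, so the sketch runs offline.
pages = {
    "http://192.168.56.102/bodgeit/": '<a href="contact.jsp">Contact</a>'
                                      '<a href="http://example.com/">Out</a>',
    "http://192.168.56.102/bodgeit/contact.jsp": '<a href="/bodgeit/">Home</a>',
}

def crawl(start, scope):
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        if url in seen or not in_scope(url, scope):
            continue
        seen.add(url)
        parser = LinkExtractor(url)
        parser.feed(pages.get(url, ""))
        queue.extend(parser.links)
    return seen

found = crawl("http://192.168.56.102/bodgeit/", "http://192.168.56.102/bodgeit")
print(sorted(found))
```

Note how the out-of-scope link to example.com is discarded; restricting the crawl to the requested scope is what keeps the spider from wandering across the whole Internet.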

There's more...

After crawling a website or directory, we may want to use the stored requests to perform some tests. Using ZAP's capabilities, we will be able to do the following, among other things:

  • Repeat the requests that modify some data
  • Perform active and passive vulnerability scans
  • Fuzz the input variables looking for possible attack vectors
  • Replay specific requests in the web browser
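To make the fuzzing idea concrete, here is a hedged sketch of how the parameters captured for POST:contact.jsp(anticsrf,comments,null) could be mutated into fuzzed request bodies. The parameter values and probe strings are illustrative only, and `fuzz_bodies` is a hypothetical helper, not part of ZAP; inside ZAP itself, this is done through the Fuzz option rather than by hand.

```python
from urllib.parse import urlencode

# Parameters recorded by the spider for POST contact.jsp
# (names from the recipe; values are made up for illustration).
recorded = {"anticsrf": "0.1234", "comments": "hello", "null": ""}

# A few classic probe strings; a real fuzz run would use much larger lists.
payloads = ["'", "<script>alert(1)</script>", "../../etc/passwd"]

def fuzz_bodies(params, target_field, payloads):
    """Yield one url-encoded request body per payload, replacing target_field."""
    for p in payloads:
        mutated = dict(params, **{target_field: p})
        yield urlencode(mutated)

bodies = list(fuzz_bodies(recorded, "comments", payloads))
for body in bodies:
    print(body)
```

Each generated body keeps the other recorded parameters (such as the anti-CSRF token) intact, which is exactly why having the full stored request matters: replaying only the mutated field without the rest would usually be rejected by the server.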