Downloading a full site to a directory on our computer leaves us with only a static copy of the information: we have the output produced by the different requests, but neither the requests themselves nor the server's response states. To keep a record of that information, we use spiders, such as the one integrated into OWASP ZAP.
In this recipe, we will use ZAP's spider to crawl a directory on our vulnerable_vm and then examine the information it captures.
For this recipe, we need to have the vulnerable_vm and OWASP ZAP running, and the browser should be configured to use ZAP as a proxy. This can be done by following the instructions given in the Finding files and folders with ZAP recipe in the previous chapter.
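Besides configuring the browser, any HTTP client can be pointed at ZAP so that its traffic gets recorded. The following is a minimal sketch, assuming ZAP is listening on its default address of 127.0.0.1:8080 (adjust the address to match your own ZAP configuration):

```python
import urllib.request

# ZAP's default local proxy address; change it if you configured another port.
ZAP_PROXY = "http://127.0.0.1:8080"

def zap_opener(proxy=ZAP_PROXY):
    """Return a URL opener that sends every request through ZAP,
    so the proxy records the full request/response pair."""
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler)

# Usage (requires ZAP running and the vulnerable_vm reachable):
# opener = zap_opener()
# html = opener.open("http://192.168.56.102/bodgeit/").read()
```

Every request made through this opener shows up in ZAP's history alongside the ones made from the browser.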
First, browse to http://192.168.56.102/bodgeit/. Then, in ZAP's Sites panel, we will see the http://192.168.56.102 site folder and the bodgeit folder inside it. Let's take a look at POST:contact.jsp(anticsrf,comments,null): on the right side, we can see the full request made, including the parameters used (bottom half).
In the top half, we can see the response header, including the server banner and the session cookie, and in the bottom half we have the full HTML response. In future chapters, we will see how a cookie obtained from an authenticated user can be used to hijack that user's session and perform actions while impersonating them.
Like any other crawler, ZAP's spider follows every link it finds in every page within the requested scope, as well as the links inside those pages. The spider also follows form responses, redirects, and the URLs listed in the robots.txt and sitemap.xml files. It then stores all the requests and responses for later analysis and use.
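The link-following behavior described above can be illustrated with a small, self-contained sketch. This is only an illustration of the general technique, not ZAP's actual implementation; the sample HTML and scope host are made up:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags, like a spider reading a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def in_scope_links(base_url, html, scope_host):
    """Resolve each link against the page URL and keep only those on the
    scope host -- out-of-scope links are seen but would not be crawled."""
    parser = LinkExtractor()
    parser.feed(html)
    resolved = (urljoin(base_url, link) for link in parser.links)
    return [url for url in resolved if urlparse(url).hostname == scope_host]

page = '<a href="contact.jsp">Contact</a> <a href="http://example.com/">Out</a>'
print(in_scope_links("http://192.168.56.102/bodgeit/", page, "192.168.56.102"))
# -> ['http://192.168.56.102/bodgeit/contact.jsp']
```

A real spider repeats this for every fetched page, queueing each new in-scope URL until nothing unvisited remains, while storing every request/response pair it produced along the way.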
After crawling a website or directory, we may want to use the stored requests to perform some tests. Using ZAP's capabilities, we will be able to do the following, among other things:
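These follow-up workflows can also be driven programmatically through ZAP's local REST API; for example, a spider scan is started via the /JSON/spider/action/scan/ endpoint. The sketch below only builds the URL for that API call; the host, port, and API key are placeholder assumptions that must match your ZAP instance's API settings:

```python
from urllib.parse import urlencode

# ZAP's local API base address -- a placeholder; match it to your instance.
ZAP_API = "http://127.0.0.1:8080"

def spider_scan_url(target, apikey="changeme", base=ZAP_API):
    """Build the API call that asks ZAP to spider the given target URL.
    The apikey default is a placeholder, not a real key."""
    query = urlencode({"apikey": apikey, "url": target})
    return f"{base}/JSON/spider/action/scan/?{query}"

print(spider_scan_url("http://192.168.56.102/bodgeit/"))
# With ZAP running, this URL could then be fetched, for example with
# urllib.request.urlopen, to kick off the scan.
```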