How it works...

In this recipe, we used multiple tools to gather different pieces of information about our target. We started by running whois, a Linux command that queries domain registration details; with it, we can obtain the addresses of the nameservers and owner details such as company name, email address, and phone number. whois can also query IP addresses, showing information about the company that owns the network segment an address belongs to. Next, we used dig to get information about the domain's nameservers and then to attempt a zone transfer (an AXFR request) and obtain the complete list of hosts the queried server resolves; this works only on misconfigured servers that allow zone transfers from arbitrary hosts.
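As a minimal sketch of these two steps (assuming a standard Kali installation, and noting that nsztm1.digi.ninja is only an example nameserver; use whichever server the second command actually reports for the domain), the commands look like this:

whois zonetransfer.me
dig ns zonetransfer.me
dig axfr zonetransfer.me @nsztm1.digi.ninja

If the zone transfer succeeds, the last command dumps every DNS record in the zone, including hostnames that would otherwise be hard to discover.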

By using theharvester, we obtained email addresses, hostnames, and IP addresses related to the target domain. The options used in this recipe were -b all, to query all the supported search engines, and -d zonetransfer.me, to specify the target domain.
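Putting those options together, the full command from this recipe is:

theharvester -b all -d zonetransfer.me

The results depend on what the search engines have indexed at the time of the query, so repeated runs may return different sets of hosts and addresses.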

We then used Netcraft to obtain information about the technologies used by the site and a brief history of updates and changes; this allowed us to plan further testing without sending any requests to the actual site.
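Netcraft is queried through the browser rather than the command line; assuming its site report service is still reachable at the same address, the lookup for our target would be:

https://sitereport.netcraft.com/?url=http://zonetransfer.me

Because the request goes to Netcraft's servers, not the target's, this kind of lookup leaves no trace in the target's logs.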

The Wayback Machine is a service that stores static copies of websites and keeps a record of their updates and versions; here, we can see information published in older versions of a site and perhaps recover data that was published previously and later removed. Sometimes an update to a web application leaks sensitive data and is then rolled back or replaced by a new version, hence the usefulness of being able to browse previous versions of the application.
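The Wayback Machine is normally browsed at https://web.archive.org, but it also exposes a CDX query API; as a sketch (assuming the API's current parameters), the following lists a few archived snapshots of the target:

curl "http://web.archive.org/cdx/search/cdx?url=zonetransfer.me&limit=10"

Each line of the output includes a capture timestamp, which can be plugged into https://web.archive.org/web/<timestamp>/<url> to retrieve that specific snapshot.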
