main()

Take a look at the main() method of this class, which is where the execution cycle actually starts:

The main() method is the same piece of code used by both the CLI and GUI versions of the tool, so many of its parameters are only relevant when it is invoked in GUI mode. In this section, we will discuss only those needed in CLI mode. We can see that the mode variable is initialized to c inside the definition of the main() method.

In the section highlighted as (1) in the following screenshot, we initialize an object of the texttable Python module, which is used to draw a table on the console window displaying the project IDs for which service scanning can be performed. Section (2) collects all the completed projects from the database, and section (3) adds the retrieved rows to the program variable that is displayed on screen. The subsequent code is straightforward. Section (4) removes any earlier details of a project for which service scanning has already been completed, so that the user can overwrite those results with a new service-scanning operation:

Section (5) creates a directory called <project_id> under the results folder. For example, if the current project ID is 744, the init_project_directory() command will create a subfolder at <parent_folder_code_base>/results/<744_data>. All the log files, the scan configuration, and the final report are placed in this folder. As we have already discussed, we have a preconfigured JSON file that contains a mapping between each service name and the test cases to be executed against that service.

The following section shows how the JSON file is configured. Let's take the http service as an example and see how its test cases are configured:

As can be seen from the preceding configuration, all the test cases for the http service are placed in a JSON list under the key Commands. Each entry in the Commands list is a JSON dictionary with the following structure: {"args": [], "id": "", "method": "", "include": "", "title": ""}. Each dictionary formulates one test case to be executed. Let's try to understand each of the entries:

  • args: The args parameter is a list that contains the actual commands and NSE scripts to be executed against a target. All the commands/scripts to be executed are classified into five different categories, which we will see under the method entry. For now, it is enough to understand that args contains the actual commands that Python executes on the Kali console.
  • id: Each command to be executed is given a unique ID, which makes the enumeration easy. For all HTTP-based commands, we can see the IDs are http_1, http_2, http_3, and so on.
    • method: This particular entry is very important, as it refers to the actual Python method that is invoked to execute the test case. The methods are placed inside a Python file/module, auto_commands.py, and this class has different methods mapped to the JSON file. Generally, all the scripts to be executed are broken into five classes/categories, and each category has a corresponding method associated with it. The categories of scripts and their corresponding methods are as follows:
    • Single_line_commands_timeout: All commands/scripts that require a one-time invocation and produce their output without requiring any interaction in between fall under this classification. For example, an NSE script can be executed as follows: nmap -p80 --script <scriptname.nse> 10.0.2.15; it requires no other input and simply executes and gives us the final output. Alternatively, a Perl script to perform directory enumeration can be invoked as follows: perl http-dir-enum.pl http://10.0.2.15:8000. Likewise, all Python scripts, Bash commands, and Kali tools, such as Nikto or Hoppy, fall under this category. All such scripts are handled by a Python method, singleLineCommands_timeout(), placed inside the auto_commands.py module. Note that all such scripts also need an additional timeout parameter. There are occasions when a script hangs for some reason (the host might be unresponsive, or it might encounter an unforeseen condition for which it was not tested), and the hanging script causes the other scripts in the queue to wait. To get around this, we specify a threshold as the first argument in the args[] list: the maximum time, in seconds, for which we want the script to run. This is why, in the previous configuration, 500 seconds is specified as the timeout for the NSE script whose ID is http_5. If the script has not finished within 500 seconds, the operation is aborted and the next script in the queue is executed.
    • General_interactive: Apart from scripts that only require a single-line command to be fired and executed, we also have other Bash commands, Kali tools, and open source scripts that require some interaction after being fired. A typical example would be SSHing to a remote server, where we usually pass two sets of commands. This can be done in a single shot but, just for the sake of understanding, let's take the following example:

Another example could be tools such as SQLmap or w3af_console, where some amount of user interaction is needed. Note that with this automation/scanning engine, we have a workaround by which such scripts are automatically invoked and executed with Python. All scripts or test cases that require interaction are handled by a method called general_interactive(), which is placed in the auto_commands.py Python module.

    • General_commands_timeout_sniff: There are many occasions in which we need to execute a script or a Bash command and, at the same time, want Wireshark to sniff the traffic at the interface so that we can find out whether credentials are being passed in cleartext. During the execution of scripts in this category, the traffic must be sniffed as well. They can be either single-line scripts such as NSE scripts, or interactive commands such as ssh root@<target_ip> as the first command and password:<my_password> as the second. All scripts that need this kind of invocation are handled by the Python method generalCommands_Tout_Sniff(), which is again present in the auto_commands.py module.
    • Metasploit_Modules: This category executes and handles all the Metasploit modules. Whenever we are required to execute any Metasploit module (be it auxiliary or exploit), that module is placed inside this classification. Execution is delegated to a method called custom_meta(), placed in auto_commands.py.
    • HTTP_BASED: The final category contains all test cases that require an HTTP GET/POST request to be made against the target, and such cases are handled by a method called http_based(), which is again placed in the auto_commands.py module.
  • include: The include parameter takes two values: True and False. If we don't want a test case/script to be included in the list of test cases to be executed, we set include=False. This feature is very useful when choosing scan profiles. There are occasions when we don't want to run time-consuming test cases, such as Nikto or Hoppy, on our target and prefer to run only certain mandatory checks or scripts. The include parameter provides that capability. We will discuss this further when we look at scan profiles in the GUI version of our scanner.
  • title: This is an informative field, which gives information about the underlying script to be executed.
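Putting the entries above together, one test-case dictionary can be sketched as follows. This is an illustrative entry only: the field names match the schema described above, but the command string and values are hypothetical, not copied from the book's actual configuration file.

```python
import json

# Illustrative http entry; field names follow the book's schema
# (args, id, method, include, title), values are made up for the sketch.
config = json.loads("""
{
  "http": {
    "Commands": [
      {
        "args": ["500", "nmap -p<port> --script http-methods.nse <host>"],
        "id": "http_1",
        "method": "singleLineCommands_timeout",
        "include": "True",
        "title": "Enumerate supported HTTP methods"
      }
    ]
  }
}
""")

# Each entry maps one test case to the Python method that runs it
for entry in config["http"]["Commands"]:
    print(entry["id"], "->", entry["method"])
```

Note how the first element of args carries the timeout threshold (500 seconds) described under Single_line_commands_timeout.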

Now that we have a good understanding of the JSON file that will be loaded into our self.commandsJSON class variable, let's move ahead with our code.

The section highlighted as (6) reads that JSON file into our all_config_file program variable, which eventually goes into the self.commandsJSON class variable. The sections of code highlighted as (7), (8), and (9) load the scan profile to be used with the scan:

By default, the scan profile used with the command-line version of our code is the mandatory profile. This profile, by and large, contains all the test cases that should be executed against the target; it just excludes a few time-consuming ones. However, if we wish to change the definition of mandatory_profile, to add or remove test cases, we can edit the mandatory.json file, which lies at the same path as our code file, driver_meta.py.
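The effect of the include flag on a profile can be sketched as follows. The filtering logic and the sample entries are illustrative assumptions, not the actual profile-loading code from driver_meta.py.

```python
# Hypothetical test-case entries; only the include flag matters here.
commands = [
    {"id": "http_1", "include": "True",  "title": "Check HTTP methods"},
    {"id": "http_2", "include": "False", "title": "Run Nikto (time-consuming)"},
    {"id": "http_3", "include": "True",  "title": "Check robots.txt"},
]

# A profile keeps only the test cases marked for execution.
to_run = [c for c in commands if c["include"] == "True"]
print([c["id"] for c in to_run])  # ['http_1', 'http_3']
```

Flipping a single include value in mandatory.json is thus enough to drop a slow check such as Nikto from the default run.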

The following are the entries present in the mandatory.json file for the http service:

The section highlighted as (9) loads all the results obtained from the port scanning of project ID 744 in our example. The results are saved in the database table IPtable_history, and the following screenshot gives us an idea of which records will be loaded:

We can see from the preceding screenshot that there are basically three records that correspond to our scan with the ID 744. The schema of the table columns is (record_id, IP, port_range, status, project_id, Services_detected [CSV format]).
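A minimal sketch of this lookup is shown below, using sqlite3 for illustration. The table name and column layout come from the text above, but the database engine, the exact SQL, and the sample row are assumptions.

```python
import sqlite3

# In-memory stand-in for the backend database, using the schema
# described above; the real engine and query may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IPtable_history
                (record_id INTEGER, IP TEXT, port_range TEXT,
                 status TEXT, project_id INTEGER, Services_detected TEXT)""")
conn.execute(
    "INSERT INTO IPtable_history VALUES "
    "(1, '10.0.2.15', '1-65535', 'complete', 744, "
    "'10.0.2.15;tcp;22;ssh;open;OpenSSH')")

# Fetch the detected-services CSV blob for one project ID
rows = conn.execute(
    "SELECT Services_detected FROM IPtable_history WHERE project_id = ?",
    (744,)).fetchall()
print(rows[0][0])
```

The fetched Services_detected string is what the code then splits apart into individual service records.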

The actual query executed at the backend is as follows:

The returned result is a list of lists that can be iterated over. The 0th index of the first inner list contains the detected services, loaded as CSV data. The format is (host;protocol;port;name;state;product;extrainfo;reason;version;config;cpe), as can be verified from the preceding screenshot. All this information is placed inside a results_ list.

In section (10), as shown in the following snippet, we iterate over the results_ list and split the string data on the newline character. We further split each returned record on ;, finally placing all the results in a list, lst1[]:
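The splitting described above can be sketched as follows. The sample string is illustrative (and shortened to fewer fields than the full eleven-field format), not taken from the actual scan output.

```python
# Illustrative Services_detected blob: records separated by newlines,
# fields separated by ';' (per the format described above).
services_csv = (
    "10.0.2.15;tcp;22;ssh;open;OpenSSH;protocol 2.0;syn-ack;OpenSSH-7.2p2\n"
    "10.0.2.15;tcp;80;http;open;nginx;;syn-ack;nginx-1.10.2"
)

# First split on newlines into records, then each record on ';'
lst1 = [line.split(";") for line in services_csv.split("\n")]
print(lst1[0][0], lst1[1][3])  # 10.0.2.15 http
```

Empty fields (such as the missing extrainfo for nginx above) survive as empty strings, keeping every record's indices aligned.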

For the current example, after section (11), lst1 will contain the following data:

lst1 = [
[10.0.2.15,tcp,22,ssh,open,OpenSSH,protocol 2.0,syn-ack,OpenSSH-7.2p2 Debian 5,10,cpe:/o:linux:linux_kernel],
[10.0.2.15,tcp,80,http,open,nginx,,syn-ack,nginx-1.10.2,10,cpe:/a:igor_sysoev:nginx:1.10.2],
[10.0.2.15,tcp,111,rpcbind,open,,RPC #100000,syn-ack,-2-4,10,],
[10.0.2.15,tcp,443,https,open,nginx,,syn-ack,nginx-1.10.2,10,cpe:/a:igor_sysoev:nginx:1.10.2],
[10.0.2.15,tcp,8000,http,open,nginx,,syn-ack,nginx-1.10.2,10,cpe:/a:igor_sysoev:nginx:1.10.2],
[10.0.2.15,tcp,8002,rtsp,open,,,syn-ack,-,10,]
]

Thus, lst1[0][0] gives us 10.0.2.15, lst1[2][2] gives 111, and so on.

In section (12) of the code, we sort the data in lst1 by service type. We declare a dictionary, lst = {}, and group all the hosts and ports according to their type of service, such that the output of sections (12) and (13) is as follows:

lst = {
"ssh":[[10.0.2.15,22,open,OpenSSH-7.2p2 Debian 5;10]],
"http":[[10.0.2.15,80,open,nginx-1.10.2],[10.0.2.15,8000,open,nginx-1.10.2]],
"rcpbind":[[10.0.2.15,111,open,-2-4,10]],
"https":[[10.0.2.15,443,open,nginx-1.10.2]],
"rtsp":[[10.0.2.15,8002,open,-]]
}
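The grouping in sections (12) and (13) can be sketched as below. The sample records are illustrative and shortened; the real code operates on the full eleven-field records.

```python
# Shortened illustrative records: (host, proto, port, service, state, version)
lst1 = [
    ["10.0.2.15", "tcp", "22",   "ssh",  "open", "OpenSSH-7.2p2"],
    ["10.0.2.15", "tcp", "80",   "http", "open", "nginx-1.10.2"],
    ["10.0.2.15", "tcp", "8000", "http", "open", "nginx-1.10.2"],
]

# Bucket each record under its service name, keeping host/port/state/version
lst = {}
for host, proto, port, name, state, version in lst1:
    lst.setdefault(name, []).append([host, port, state, version])

print(sorted(lst.keys()))  # ['http', 'ssh']
```

After this pass, every service key maps straight onto the list of host/port pairs it must be tested against, which is exactly the shape the JSON test-case lookup needs.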

In section (15), ss = set(lst_temp).intersection(set(lst_pre)), we perform a set intersection between two structures containing dictionary keys. One contains the keys of the lst dictionary, which holds all the services our port scanner discovered. The other contains the keys loaded from the preconfigured JSON file. The objective is to identify all the discovered services for which test cases are mapped. All the discovered and mapped service keys/names go into the ss list, where ss stands for services to be scanned.

In section (16), ms = list(set(lst_temp) - set(lst_pre)), we compare the discovered services against those configured in the JSON file. Our JSON file is quite exhaustive in terms of commonly found services, but there are still cases in which Nmap might find a service during port scanning that is not preconfigured in our JSON file. In this section, we identify the services that Nmap has discovered but that have no test cases mapped against them in the JSON file, by taking the set difference between the two structures. We tag those services as new, and the user can either configure test cases against them or analyze them offline and execute custom test cases. All these services are placed in a list called ms, where ms stands for missed services.
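Sections (15) and (16) together reduce to two set operations, sketched here with illustrative service names:

```python
# Keys of the lst dictionary: services the port scanner discovered
lst_temp = ["ssh", "http", "https", "rtsp"]
# Keys of the preconfigured JSON file: services with mapped test cases
lst_pre = ["ssh", "http", "https", "ftp"]

# Section (15): services both discovered and mapped -> to be scanned
ss = set(lst_temp).intersection(set(lst_pre))
# Section (16): discovered but with no mapped test cases -> missed
ms = list(set(lst_temp) - set(lst_pre))

print(sorted(ss), ms)  # ['http', 'https', 'ssh'] ['rtsp']
```

Note that the difference is one-sided: ftp is in the JSON file but was not discovered, so it appears in neither ss nor ms.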

In sections (17) and (18), as shown in the following code snippet, we restructure the mapped and missed services into two different dictionaries in the format mentioned earlier: {"ssh": [[10.0.2.15,22,open,OpenSSH-7.2p2 Debian 5;10]], ...}. The discovered services are placed in the dic dictionary and then into the self.processed_services class variable. The missed ones are placed in ms_dic and finally in self.missed_services:

Finally, under section (19), we invoke the parse_and_process() method, which contains the logic to display the discovered and missed services and gives the user the option to perform any reconfiguration, if needed.

After reconfiguration is done, parse_and_process() invokes another method, launchExploits(), which reads the method_name from the JSON configuration file, replaces <host> and <port> with the appropriate discovered host IP and port, and passes control to the relevant method (based on the method_name read) of the auto_commands.py module.
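The placeholder substitution that launchExploits() performs can be sketched as follows. The helper name render_command and the sample command string are hypothetical; only the <host>/<port> replacement step comes from the text above.

```python
# Hypothetical helper: substitute the placeholders from the JSON
# config with the discovered host IP and port before dispatch.
def render_command(template, host, port):
    return template.replace("<host>", host).replace("<port>", str(port))

cmd = render_command("nmap -p<port> --script http-methods.nse <host>",
                     "10.0.2.15", 80)
print(cmd)  # nmap -p80 --script http-methods.nse 10.0.2.15
```

One template from the JSON file can thus be reused against every host/port pair grouped under its service.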

Once all the test cases have been executed for all the discovered hosts and ports, it's time to generate a report with screenshots and relevant data. This is handled by sections (20) and (21), as shown in the following snippet:
