Feature preparation

In the Feature extraction section of Chapter 2, Data Preparation for Spark ML, we reviewed several methods for feature extraction and discussed their implementation on Apache Spark. All of those techniques can be applied to our data here, especially the ones that utilize time series and feature comparison to create new features.

For this project, feature extraction is one of the most important tasks because all the fraud happens online: the web log is the most important and most recent data for predicting fraud, and it must be processed to produce features ready for modeling.

Also, because we have features for transactions, users, bank accounts, and computer devices, a lot of work is needed to merge all of these features together into a complete data file ready for machine learning.

Feature extraction from LogFile

Log files are usually unstructured and can look like a collection of random symbols and numbers. One example is as follows:

May 23 12:19:11 elcap siu: 'siu root' succeeded for tjones on /dev/ttyp0 www.abccorp.com/pay w

Parsing them and making sense of them requires a lot of work as well as some subject knowledge. Most people work manually with some sample data and then use the patterns discovered to develop code in R or other languages to parse the logs and turn the extracted information into features.
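As a minimal sketch of this kind of parsing, the following Python snippet extracts named fields from the sample line shown above with a regular expression. The pattern and field names are assumptions based on that one line, not the book's actual parsing code:

```python
import re

# Illustrative pattern for the syslog-style sample line above.
# Field names (host, process, action, user, url, ...) are assumptions.
LOG_PATTERN = re.compile(
    r"(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) (?P<process>\S+): '(?P<action>[^']+)' "
    r"(?P<status>\w+) for (?P<user>\S+) on (?P<tty>\S+) (?P<url>\S+)"
)

def parse_log_line(line):
    """Return a dict of named fields, or None when the line does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = ("May 23 12:19:11 elcap siu: 'siu root' succeeded "
          "for tjones on /dev/ttyp0 www.abccorp.com/pay w")
parsed = parse_log_line(sample)
```

In practice, a real log contains many line formats, so one would maintain a small library of such patterns and try them in turn.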

For this project, our strategy is not to parse all the log files into as many features as possible, but only to extract a few features that are useful for our machine learning.

With some special programming in SparkSQL and R for this project, we were able to extract a few good features from the log file. These features include the number of clicks, the time between clicks, the type of clicks, and others.
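To make the click features concrete, here is a hedged Python sketch that computes a click count and a mean inter-click gap from parsed log events. The event data and feature names are made-up illustrations; the book's own extraction ran in SparkSQL and R:

```python
from datetime import datetime

# Hypothetical per-user click events: (user_id, timestamp) pairs.
events = [
    ("tjones", "2025-05-23 12:19:11"),
    ("tjones", "2025-05-23 12:19:25"),
    ("tjones", "2025-05-23 12:20:05"),
]

def click_features(events):
    """Derive simple click features from a user's event timestamps."""
    times = sorted(datetime.fromisoformat(t) for _, t in events)
    # Gaps between consecutive clicks, in seconds.
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    return {
        "n_clicks": len(times),
        "mean_gap_seconds": sum(gaps) / len(gaps) if gaps else 0.0,
    }

features = click_features(events)
```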

After feature extraction, we will perform some feature selection on the LogFile features as well as on the features from the other datasets, which results in the selection of features summarized in the following table:

Category        | Number of Features
----------------|-------------------
Web Log         | 3
Account         | 4
Computer device | 3
User            | 5
Business        | 3
Total           | 18

Data merging

As discussed in the Spark for fraud detection section, we have five datasets covering web logs, accounts, computer devices, users, and businesses. In other words, for each transaction, there is always a user using one computer device to complete a payment transaction for a business to an account.

In the previous section, we extracted features from web logs and then selected features for each dataset.

Now, we need to merge all the data together into a single table with the features organized alongside the target variable, so that we can build predictive models on it.

To merge them, let's follow what was discussed in the Joining data sets section of Chapter 2, Data Preparation for Spark ML, for which we can use SparkSQL or the data.table R package.
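The merge itself is a series of key joins. The following pandas sketch illustrates the shape of the operation; the chapter performs it in SparkSQL or with R's data.table, and every column name and value here is an invented placeholder:

```python
import pandas as pd

# Hypothetical miniature versions of three of the five datasets.
transactions = pd.DataFrame({
    "transaction_id": [1, 2],
    "user_id": ["u1", "u2"],
    "account_id": ["a1", "a2"],
    "fraud": [0, 1],  # target variable
})
users = pd.DataFrame({"user_id": ["u1", "u2"], "user_age": [34, 52]})
accounts = pd.DataFrame({"account_id": ["a1", "a2"],
                         "account_age_days": [900, 12]})

# Left-join user and account features onto each transaction,
# keeping the target variable in the same table.
merged = (transactions
          .merge(users, on="user_id", how="left")
          .merge(accounts, on="account_id", how="left"))
```

Left joins keep every transaction even when a lookup table is missing a row, which is usually the safer default for assembling a modeling table.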

After the data is merged, we can create some new features by comparing features across different datasets. For example, we can compare addresses with computer device languages to form new features. In this case, we added three more features, bringing the total to 21. We can then perform some feature reduction and selection to explore our feature space.
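One such comparison feature can be sketched as follows. This is only an illustration of the idea: the column names, the language-tag convention, and the mismatch flag are all assumptions, not the book's actual features:

```python
# Hypothetical merged records with an address country and a device language.
records = [
    {"address_country": "US", "device_language": "en-US"},
    {"address_country": "FR", "device_language": "en-US"},
]

def add_mismatch_flag(records):
    """Flag records where the device language region disagrees
    with the address country (a simple cross-dataset comparison)."""
    for r in records:
        lang_region = r["device_language"].split("-")[-1]
        r["country_lang_mismatch"] = int(lang_region != r["address_country"])
    return records

flagged = add_mismatch_flag(records)
```

A mismatch like this does not prove fraud on its own, but it is exactly the kind of signal that comparing features across datasets can surface.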

After that, we will split the data into training and test sets.
