Now, you have a dataset and a computer. For convenience, I have provided a small anonymized and obfuscated sample of clickstream data with the book repository, which you can get at https://github.com/alexvk/ml-in-scala.git. The file in the chapter01/data/clickstream directory contains lines with a timestamp, a session ID, and some additional event information, such as the URL and category information, recorded at the time of the call. The first thing one would do is apply transformations to find out the distribution of values in the different columns of the dataset.
Figure 01-1 shows the output of the following command in a terminal window:

gzcat chapter01/data/clickstream/clickstream_sample.tsv.gz | less -U

The columns are tab (^I) separated. One can notice that, as in many real-world big data datasets, many values are missing. The first column of the dataset is recognizable as the timestamp. The file also contains complex data such as arrays, structs, and maps, another common feature of big data datasets.
Unix provides a few tools to dissect datasets; less, cut, sort, and uniq are probably the most frequently used tools for text file manipulation, while awk, sed, perl, and tr can do more complex transformations and substitutions. Fortunately, Scala allows you to transparently use command-line tools from within the Scala REPL, as shown in the following listing:
[akozlov@Alexanders-MacBook-Pro]$ scala
…
scala> import scala.sys.process._
import scala.sys.process._

scala> val histogram = ( "gzcat chapter01/data/clickstream/clickstream_sample.tsv.gz" #| "cut -f 10" #| "sort" #| "uniq -c" #| "sort -k1nr" ).lineStream
histogram: Stream[String] = Stream(7731 http://www.mycompany.com/us/en_us/, ?)

scala> histogram take(10) foreach println
7731 http://www.mycompany.com/us/en_us/
3843 http://mycompanyplus.mycompany.com/plus/
2734 http://store.mycompany.com/us/en_us/?l=shop,men_shoes
2400 http://m.mycompany.com/us/en_us/
1750 http://store.mycompany.com/us/en_us/?l=shop,men_mycompanyid
1556 http://www.mycompany.com/us/en_us/c/mycompanyid?sitesrc=id_redir
1530 http://store.mycompany.com/us/en_us/
1393 http://www.mycompany.com/us/en_us/?cp=USNS_KW_0611081618
1379 http://m.mycompany.com/us/en_us/?ref=http%3A%2F%2Fwww.mycompany.com%2F
1230 http://www.mycompany.com/us/en_us/c/running
I used the scala.sys.process package to call familiar Unix commands from the Scala REPL. From the output, we can immediately see that the customers of our Webshop are mostly interested in men's shoes and running, and that the most common referral code is KW_0611081618.
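The same scala.sys.process pattern extends to other quick checks. Here is a minimal sketch, assuming that the session ID sits in the second column (an assumption made for illustration, not something the sample file guarantees), that counts the total number of records and the number of distinct sessions:

import scala.sys.process._

// total number of records in the sample file
val total = ("gzcat chapter01/data/clickstream/clickstream_sample.tsv.gz" #| "wc -l").!!.trim

// distinct values in column 2 (assumed here to hold the session ID)
val sessions = ("gzcat chapter01/data/clickstream/clickstream_sample.tsv.gz" #|
  "cut -f 2" #| "sort -u" #| "wc -l").!!.trim

println(s"$total records, $sessions distinct sessions")

The !! method runs the pipeline and returns its standard output as a single string, which is convenient for small summary values like these.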
One may wonder when we will start using complex Scala types and algorithms. Just wait; a lot of highly optimized tools were created before Scala, and they are much more efficient for exploratory data analysis. In the initial stage, the biggest bottleneck is usually just disk I/O and slow interactivity. Later, we will discuss more iterative algorithms, which are usually much more memory intensive. Also note that UNIX pipeline operations can be implicitly parallelized on modern multi-core computer architectures, just as they are in Spark (we will show this in later chapters).
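As a hedged preview of what this looks like (the Spark API is covered in later chapters), the following sketch expresses the same column-10 histogram against a SparkContext; sc is the context provided by the Spark shell, the zero-based field index 9 corresponds to cut -f 10 above, and every row is assumed to have at least ten tab-separated fields:

// the same histogram as an RDD computation; Spark schedules the map and
// reduceByKey stages across the available cores (and, later, cluster nodes)
val sparkHistogram = sc.textFile("chapter01/data/clickstream/clickstream_sample.tsv.gz")
  .map(_.split("\t", -1)(9))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(-_._2)

sparkHistogram.take(10).foreach { case (url, count) => println(f"$count%7d $url") }

Note that a single gzipped file lands in one partition, so the real gains appear once the input is splittable or spread over several files, which is one more argument for the directory layout discussed next.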
It has been shown that using compression, implicit or explicit, on input data files can actually save I/O time. This is particularly true for (most) modern semi-structured datasets with repetitive values and sparse content. Decompression can also be implicitly parallelized on modern fast multi-core computer architectures, removing the computational bottleneck, except perhaps where compression is implemented in hardware (as on some SSDs, where the files do not need to be compressed explicitly). We also recommend using directories rather than files as a paradigm for a dataset, so that the insert operation reduces to dropping a data file into a directory. This is how datasets are presented in big data Hadoop tools such as Hive and Impala.
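A minimal Spark sketch of this convention, again assuming the Spark shell's sc and the sample directory from the book repository: textFile accepts a directory path, reads every file inside it, and decompresses .gz files transparently, so inserting new data really is just dropping another file into the directory.

// point Spark at the directory rather than at an individual file; gzipped files
// are decompressed on the fly by the Hadoop input layer
val clicks = sc.textFile("chapter01/data/clickstream")
println(clicks.count)   // total number of records across all the files in the directory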