Tools! Tools! We need tools!

D. Spinellis    Athens University of Economics and Business, Athens, Greece

Abstract

Science has always progressed mightily through the use of tools. A key element for the application of data science in software engineering is the availability of suitable tools. Such tools allow us to obtain data from novel sources, measure processes and products, and analyze all that data to derive insights that can advance science and everyday practice. Given the importance of tools in conducting software engineering research, hone your tool-building skills, test your tools thoroughly, and share the results of your efforts as open source software.

Keywords

Mining tools; Metrics; Tool-building; ckjm; CScout; qmcalc; GHTorrent; Git

Tools in Science

In 1908, Ernest Rutherford won the Nobel Prize in Chemistry “for his investigations into the disintegration of the elements, and the chemistry of radioactive substances.” In support of his candidacy, the Nobel Committee for Chemistry wrote about the elegant experiments he performed to show that alpha particles were in fact doubly-charged helium atoms. Rutherford was able to show this through a simple but ingenious device. He had a glassblower create a tube with an extremely thin wall that allowed the alpha particles emanating from the radon gas it contained to escape. Surrounding that tube was another from which he had emptied the air. After some days he found that the material accumulated in the outer tube produced the spectrum of helium [1].

Science has always progressed mightily through the use of tools. These are increasingly designed by scientists, but built by engineers and technicians. Telescopes allow us to see stars at the edge of our universe, imaging satellites uncover the workings of our Earth, genome sequencers and microscopes let us examine cells and molecules, and particle accelerators peer into the nature of atoms. Currently the world’s largest single machine is a tool explicitly built to advance our scientific understanding of matter: CERN’s Large Hadron Collider, a ring 27 km in circumference, which more than 10,000 scientists and engineers from over 100 countries built over a period of 10 years.

The Tools We Need

The availability and use of large data sets associated with software development has transformed software engineering in ways described in other chapters of this book. A key element for the application of data science in software engineering is the availability of suitable tools. Such tools allow us to obtain data from novel sources, measure processes and products, and analyze all that data to derive insights that can advance science and everyday practice. By definition, scientific advancement happens through work beyond the state of the art, so it should come as no surprise that a lot of effort in data science involves building and refining tools. In the following paragraphs I outline important types of tools and best practices for building them. To keep the insights concrete, the description is mostly based on personal experience.

First we need tools for obtaining metrics. Although software metrics have been with us for decades, tools for obtaining them reliably are often hard to come by. I’ve seen research work where the collection of metrics was treated as an afterthought, apparently delegated to inexperienced undergraduate students. This is often evident from the quality of the corresponding tools, which may not scale, may produce erroneous results, or may be difficult to build upon.

Partly as a result of such problems, in 2005 I built ckjm,1 a tool that derives Chidamber and Kemerer metrics from Java programs [2]. These are the weighted methods per class, the depth of the inheritance tree, the number of children per class, the coupling between object classes, the response for a class, and the lack of cohesion in methods [3]. Designing ckjm to work as a Unix-style filter allowed it to analyze arbitrarily large projects, an advantage appreciated by many of its users.
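The filter design is worth illustrating. The following Python sketch (with hypothetical names; ckjm itself is written in Java and computes far richer metrics) shows the essence: read one file name per line from standard input and emit one metrics record per line on standard output. Because only one file is held in memory at a time, a pipeline such as find . -name '*.java' | python3 metrics_filter.py can feed the filter a project of any size.

import sys

def measure(path):
    """Hypothetical per-file metric: total and non-blank line counts."""
    total = nonblank = 0
    with open(path, errors="replace") as f:
        for line in f:
            total += 1
            if line.strip():
                nonblank += 1
    return total, nonblank

# Read one file name per line from stdin; write one tab-separated
# record per line to stdout. Memory use stays constant regardless
# of how large the analyzed project is.
for path in sys.stdin:
    path = path.rstrip("\n")
    try:
        total, nonblank = measure(path)
    except OSError as err:
        print(f"{path}: {err}", file=sys.stderr)
        continue
    print(f"{path}\t{total}\t{nonblank}")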

During 2000–10 I also built CScout,2 a source code analyzer and refactoring browser for collections of C programs. It can process workspaces of multiple projects (a project being defined as a collection of C source files that are linked together), mapping the complexity introduced by the C preprocessor back into the original C source code files. CScout takes advantage of modern hardware advances (fast processors and large memory capacities) to analyze C source code beyond the level of detail and accuracy provided by current compilers, linkers, and other source code analyzers [4]. The analysis CScout performs takes into account the identifier scopes introduced by the C preprocessor as well as the scopes and namespaces of the C language proper. After the source code analysis, CScout can:

• perform accurate cross-project identifier renames,
• process sophisticated queries on identifiers, files, and functions,
• locate unused or wrongly-scoped identifiers,
• identify header files that don’t need to be included, and
• create call graphs spanning both C functions and function-like macros.

The implementation of CScout required developing a theory behind the analysis of C code in the presence of the preprocessor [5], and the detailed handling of many compiler extensions and edge cases. I used CScout to compare four operating system kernels [6] and later to study the optimization of header-file include directives [7]. Both tasks required months of work to adjust CScout to the requirements of the analysis task. Despite its sophistication, CScout has seen considerably less use than ckjm, probably because of the substantial effort required to set it up.

More recently, in order to analyze the use and evolution of C language constructs and style, I adopted a simpler approach and built qmcalc:3 a tool that performs lexical analysis of C source code and calculates and prints numerous metrics associated with it. The program reads a single C file from its standard input and outputs raw figures and diverse quality metrics associated with the code. These include the number of functions, lines, and statements; the number of occurrences of various keywords; the use of comments and preprocessing; the number and length of identifiers; the Halstead and cyclomatic complexity per function; the use of spacing for indentation; a measure of style inconsistency; and numbers associated with probable style infractions. What qmcalc lacks in sophistication, it offers in versatility, as it can process any code thrown at it, including code with errors or obscure undocumented constructs. This made it easy to analyze millions of lines of diverse code [8].
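A much-simplified sketch of this lexical approach appears below (an illustration only, not qmcalc’s actual implementation). Because it never parses or compiles the input, it degrades gracefully on files a full parser would reject.

import re
import sys

KEYWORDS = {"if", "else", "for", "while", "switch", "goto", "return"}

raw = sys.stdin.read()              # a single C file on standard input
print("lines:", raw.count("\n"))

# Blank out string literals and comments so their contents are not
# mistaken for keywords or identifiers.
code = re.sub(r'"(?:\\.|[^"\\])*"|/\*.*?\*/|//[^\n]*', " ", raw,
              flags=re.DOTALL)
tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code)
identifiers = [t for t in tokens if t not in KEYWORDS]

for kw in sorted(KEYWORDS):
    print(f"keyword {kw}:", tokens.count(kw))
if identifiers:
    print("mean identifier length:",
          round(sum(map(len, identifiers)) / len(identifiers), 2))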

A second category of tools comprises those we use to obtain or synthesize data from processes and running products, which can then be distilled into metrics. Such tools bridge the gap between the utilitarian data formats used to support software developers and the needs of data science for software engineering. Given that computers are reflective machines, the possibilities for data collection are endless. One example in this category is GHTorrent,4 a system that obtains data through GitHub’s event API (whose raison d’être is the automation of software development processes) and makes them available as a database [9,10]. Another is a set of tools5 used to synthesize a Git repository6 containing 44 years of Unix evolution from software distribution snapshots and diverse configuration management repositories [11]. The development of both tools demonstrated the difficulties associated with processing big, incomplete, and fickle data sets. The associated tools must be able to handle perverse cases, such as dates lying several years in the future or file names several kilobytes long. Other interesting data generation tools are those that instrument integrated development environments to obtain usage details [12]. These can give us valuable insights into how developers actually work, minimizing the risk of self-report bias. Instrumenting programs, libraries, and middleware can also provide valuable data. For example, by modifying memory allocation functions and a call graph profiler’s function call processing code, I obtained data to illustrate memory fragmentation and stack size variability [13].
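To give a flavor of what such event harvesting involves, here is a minimal Python sketch that polls GitHub’s public events API once and files each event into a SQLite database. This only illustrates the idea; GHTorrent’s actual implementation is a far more elaborate distributed system, and a real harvester would also authenticate, paginate, retry, and cope with the rate limits and perverse records discussed above.

import json
import sqlite3
import urllib.request

db = sqlite3.connect("events.db")
db.execute("""CREATE TABLE IF NOT EXISTS event (
    id TEXT PRIMARY KEY,   -- GitHub's event identifier
    type TEXT,             -- e.g., PushEvent
    created_at TEXT,
    payload TEXT           -- full JSON kept for later mining
)""")

req = urllib.request.Request(
    "https://api.github.com/events",
    headers={"Accept": "application/vnd.github+json"})
with urllib.request.urlopen(req) as resp:
    events = json.load(resp)

# INSERT OR IGNORE deduplicates events seen in overlapping polls.
for e in events:
    db.execute("INSERT OR IGNORE INTO event VALUES (?, ?, ?, ?)",
               (e["id"], e["type"], e["created_at"], json.dumps(e)))
db.commit()
print("stored",
      db.execute("SELECT count(*) FROM event").fetchone()[0], "events")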

Finally, a third category of tools comprises those we use to analyze data. Thankfully, in this segment there are many mature general-purpose tools and libraries that we can readily use. These include R, Python’s data tools,7 and relational database management systems, which are often (mis)used to perform online analytical processing. Skimping on the effort required to master these tools in favor of ad hoc approaches is a mistake. Then there are also specialized platforms, such as Alitheia Core [14], Evolizer [15], and Tesseract [16], that can analyze software engineering data. These can be very helpful if the research question closely matches the tool’s capabilities. Otherwise, their complexity often makes tailoring them more expensive than developing bespoke tooling.
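As a taste of how far the mature general-purpose tools go, the following few lines of Python (file and column names are hypothetical) summarize a table of per-file metrics, such as the one the filter sketched earlier might produce.

import pandas as pd

# Tab-separated records of the form: path, total lines, non-blank lines.
df = pd.read_csv("metrics.tsv", sep="\t",
                 names=["path", "lines", "nonblank"])
print(df[["lines", "nonblank"]].describe())  # distribution summaries
print(df.nlargest(10, "lines"))              # the ten largest files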

Recommendations for Tool Building

Given the importance of tools in conducting software engineering research, the most important piece of advice is to hone your tool-building skills. I have written tools in Perl, the Unix shell, C++, and Java. C++ can be beneficial when extreme performance is required (in some cases I have run processing jobs that took many months to complete). Java can be useful when interacting with other elements in its ecosystem, for example the Eclipse platform. Perl has the advantage of a huge library of mature components that can cover even the most specialized needs, such as processing legacy SCCS (source code control system) files, but the underlying language shows its age. Using the Unix shell benefits from the power of the hundreds of tools available under it, and can be a particularly good choice when the heavy lifting will be performed by such tools. Otherwise, a modern scripting language, such as Python or Ruby, can offer the best balance between versatility, programmer efficiency, and performance. Choose the language that appeals to your taste and requirements, and sharpen your skills in its use.

Given that many of the tools used are bespoke contraptions rather than mature software, testing them thoroughly is a must. Thankfully, the practice of unit testing provides methods for performing this task in an organized and systematic fashion. According to the software’s change logs, when developing qmcalc, 130 unit tests uncovered more than 15 faults. Without these tests some of these faults might have resulted in erroneous results when the tool was used. Given the large data sizes processed, testing can often be optimized through appropriate sampling. This allows the data input and output to be carefully inspected by hand in order to validate the tool’s operation.
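A sketch of what such tests can look like follows, using Python’s unittest module against a hypothetical keyword-counting function of the kind sketched earlier. Comments and string literals are exactly the edge cases where a naive lexical tool silently miscounts, so they deserve tests of their own.

import re
import unittest

def count_keyword(source, kw):
    """Count keyword occurrences, ignoring comments and strings."""
    source = re.sub(r'"(?:\\.|[^"\\])*"|/\*.*?\*/|//[^\n]*', " ",
                    source, flags=re.DOTALL)
    return len(re.findall(rf"\b{re.escape(kw)}\b", source))

class KeywordCountTest(unittest.TestCase):
    def test_counts_plain_keyword(self):
        self.assertEqual(count_keyword("if (x) return 0;", "if"), 1)

    def test_ignores_longer_identifiers(self):
        # "iffy" must not be counted as "if".
        self.assertEqual(count_keyword("int iffy = 0;", "if"), 0)

    def test_ignores_comments(self):
        self.assertEqual(count_keyword("/* if only */ y = 1;", "if"), 0)

    def test_ignores_string_literals(self):
        self.assertEqual(count_keyword('puts("if");', "if"), 0)

if __name__ == "__main__":
    unittest.main()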

Finally, when developing tools, consider sharing the results of your efforts as open source software and contributing to other similar endeavors. This allows our field to progress by standing on each other’s shoulders rather than toes. It is also a practice that aids the reproducibility of research, as others can easily obtain and reuse the tools used for conducting it. In addition, the knowledge that your tool will be shared as open source software, where the whole world will be able to see and judge it, puts pressure on you to develop it from the beginning not as a quick and dirty throwaway hack, but as the high-quality piece of software it deserves to be.

Historians have commented that when Rutherford’s glassblower, Otto Baumbach, was interned during the First World War, experimental physics at the University of Manchester, where he had set up shop, was brought to a halt [17]. Such is the power of tools to advance great science.

References

[1] Rutherford E., Royds T. The nature of the alpha particle from radioactive substances. Philos Mag. 1909;17(6):281–286.

[2] Spinellis D. Tool writing: a forgotten art? IEEE Softw. 2005;22(4):9–11 http://dx.doi.org/10.1109/MS.2005.111.

[3] Chidamber S.R., Kemerer C.F. A metrics suite for object oriented design. IEEE Trans Softw Eng. 1994;20:476–493.

[4] Spinellis D. CScout: a refactoring browser for C. Sci Comput Program. 2010;75(4):216–231 http://dx.doi.org/10.1016/j.scico.2009.09.003.

[5] Spinellis D. Global analysis and transformations in preprocessed languages. IEEE Trans Softw Eng. 2003;29(11):1019–1030.

[6] Spinellis D. A tale of four kernels. In: Schafer W., Dwyer M.B., Gruhn V., eds. ICSE’08: proceedings of the 30th international conference on software engineering. New York: Association for Computing Machinery; 2008:381–390 http://dx.doi.org/10.1145/1368088.1368140.

[7] Spinellis D. Optimizing header file include directives. J Softw Maint Evol Res Pract. 2009;21(4):233–251 http://dx.doi.org/10.1002/smr.369.

[8] Spinellis D., Louridas P., Kechagia M. An exploratory study on the evolution of C programming in the Unix operating system. In: ESEM’15: proceedings of the 9th international symposium on empirical software engineering and measurement. New York, USA: IEEE; 2015:54–57.

[9] Gousios G., Spinellis D. GHTorrent: Github’s data from a firehose. In: Lanza M., Di Penta M., Xie T., eds. Proceedings of the 9th IEEE working conference on mining software repositories (MSR). New York, USA: IEEE; 2012:12–21 http://dx.doi.org/10.1109/MSR.2012.6224294.

[10] Gousios G. The GHTorrent dataset and tool suite. In: MSR’13: proceedings of the 10th working conference on mining software repositories; 2013:233–236.

[11] Spinellis D. A repository with 44 years of Unix evolution. In: MSR’15: proceedings of the 12th working conference on mining software repositories. New York, USA: IEEE; 2015:13–16 http://dx.doi.org/10.1109/MSR.2015.6.

[12] Murphy G.C., Kersten M., Findlater L. How are Java software developers using the Eclipse IDE? IEEE Softw. 2006;23(4):76–83.

[13] Spinellis D. Code quality: the open source perspective. Boston, MA: Addison-Wesley; 2006.

[14] Gousios G., Spinellis D. Conducting quantitative software engineering studies with Alitheia Core. Empir Softw Eng. 2014;19(4):885–925 http://dx.doi.org/10.1007/s10664-013-9242-3.

[15] Gall H.C., Fluri B., Pinzger M. Change analysis with evolizer and changedistiller. IEEE Softw. 2009;26(1):26–33.

[16] Sarma A., et al. Tesseract: interactive visual exploration of socio-technical relationships in software development. In: ICSE 2009: IEEE 31st international conference on software engineering. New York, USA: IEEE; 2009.

[17] Gall A. Otto Baumbach—Rutherford’s glassblower. Newsletter of the History of Physics Group of the Institute of Physics (UK & Ireland); 2008;(23):44–55.

