3. Programming Automation

Matt Wiley and Joshua F. Wiley

Elkhart Group Ltd. & Victoria College, Columbia City, Indiana, USA

It has been said that performing the same action and expecting a different result is a definition of insanity. While a brief search of Oxford’s dictionary does not turn up that definition, we have a similar premise: to prevent insanity, leave it to your computer to perform the same action repeatedly, and expect the same results!

The goal of this chapter is for you to begin to build automation into your code. Part of the power of code is that it cheerfully performs the same action as often as required without stumbling or tiring. The other useful feature of code is the capability to stitch logic into the flow of the programming. Humans, at our best, naturally use such logic. For example, if father’s keys are on the hook, he must be home. Otherwise, father must still be out and about. Here’s another example: if p is less than alpha, reject the null hypothesis; otherwise, do not reject the null hypothesis.

Because such automation allows an enormous number of repeats, care must be given to efficiency. How long does it take the code to run? Could the code be written differently to make the operations occur faster? In programming, as in life itself perhaps, we often have fewer perfect answers and more trade-offs between choices and consequences. We live in a brave new world in which new and more powerful hardware often is cheaper than the human cost required to squeeze out a bit more efficiency. On the other hand, our brave new world also has data sets with billions of entries; shaving a millisecond off each calculation could save hundreds of hours of compute time. We do our best to take a balanced approach, demonstrating some of the easier-to-understand constructs first and then presenting faster methods.

As we look at the first of these automation methods, we remind you that coding is as much an art as it is a science. The cleaner, more readable code may not be the fastest. The only essential definition of fast is fast enough. If a particular type of code makes more sense to you and thus makes you more likely to remember and use it, that may be good enough. Of course, if it is not, a bit of research is in order to uncover quicker alternatives.

Loops

Loops repeat the code inside them. One concern is to avoid getting into the case of an infinite loop—one that lasts forever. Most loops have a built-in means to attempt a stop (not that those always work as planned), and in the next section we discuss ways to exit a loop manually. Another consideration is where your results live. We are postponing a frank discussion of functions, and of the environments they create, until the next chapter. For now, keep in mind that if you want to hold onto results produced inside a loop, it is good practice to build the container for those results before the loop starts—both for clarity and, as you will see shortly, for speed. Again, we go deeper into the specifics in the next chapter.

The for loop is the first (and perhaps most controversial in R) automator we discuss. In many computer languages, for() is considered rather fast, but not so in R. Nevertheless, this function is easy to understand and use, and human-readable code is not something to eschew needlessly. However, there is usually more than one way to do things, and it would be silly to ignore the alternatives. We use the function proc.time() in our code to measure different times; we calculate the runtime by taking the difference between the stop and start times. Take a look at the following code and output:

x <- seq(1, 100000, 1)
head(x)
[1] 1 2 3 4 5 6
forTime <- proc.time()
xCube <- NULL
for (i in 1:length(x)) {
+   xCube[i] <- x[i]^3
+ }
forTime <- proc.time() - forTime
head(xCube)
[1]   1   8  27  64 125 216
forTime
   user  system elapsed
  11.04    0.17   11.25

You can see that the numbers 1 through 100,000 are in the variable x (the first six display by calling the head() function). Next, we create a variable to hold the cubes of all those values, and we store the current time in a variable so that we can time the operation. Now comes the magic of the for() function! This function takes an index, often called i, and a range over which the index operates. We start i at 1, and the loop body runs. At the end of each pass, i is automatically incremented by 1, and the body repeats until i reaches the end of the range. Notice that we do not hard-code a limit; rather, we make our loop adapt to the length of x. You can see that the loop does, in fact, cube each term of x. It takes 11.25 seconds on our system.

This time, though, can be improved. Notice that although we created a variable xCube, it is a null variable. Each iteration of our for loop not only cubes an element of x, but also has to grow the xCube variable by one entry, and growing a vector turns out not to be a trivial task. If we instead create xCube at the size we need before looping, performance improves significantly. We first remove xCube, so that we are not deceived by any prior data:

rm(xCube)
forTime2 <- proc.time()
xCube <- vector(mode = "numeric", length = length(x))
for (i in 1:length(x)) {
+   xCube[i] <- x[i]^3
+ }
forTime2 <- proc.time() - forTime2
head(xCube)
[1]   1   8  27  64 125 216
forTime2
   user  system elapsed
   0.15    0.00    0.15

Notice that now our time is only 0.15 seconds—much improved! Of note is that even if we had created a dummy xCube that wasn't quite long enough, or one that was too long and needed some null entries deleted off the end, this would still be faster. Thus, even when it is not possible or convenient to know the exact size needed before running the loop, it may be beneficial to create a container that is likely large enough to store the data first.
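To make that point concrete, here is a minimal sketch of the over-allocation idea; the 100 extra slots are an arbitrary padding of our own choosing:

## deliberately reserve more room than needed, then trim the unused tail;
## this is still far faster than growing the vector one element at a time
xCube <- vector(mode = "numeric", length = length(x) + 100)
for (i in 1:length(x)) {
  xCube[i] <- x[i]^3
}
xCube <- xCube[1:length(x)]   ## delete the leftover null entries off the end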

Now, for this particular example, a for loop is the wrong way to go. R has powerful and fast underlying code for some of its simplest operations. Notice that there is almost no elapsed time at all (of course, there is some, but depending on the operating system, small differences are not noticeable). Also notice that the head() function can take negative numbers, to count backward from the end, as the following shows:

xCube <- NULL
vTime <- proc.time()
xCube <- x^3
vTime <- proc.time() - vTime
head(xCube, -99994)
[1]   1   8  27  64 125 216
vTime
   user  system elapsed
      0       0       0

Other types of loops exist. The for loop is best suited to running a process a certain number of times and stopping when it gets to the end. The while loop, on the other hand, is best suited to running a process an uncertain number of times, until a stop condition occurs. Often, it is possible to get the same results with different types of loops (or, as you have just seen, without a loop at all). All the same, each type of loop tends to have a more natural use under certain conditions. As an exercise on your own, after studying the while loop example we give, see if you can duplicate the previous cubing results; one possible sketch follows.
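Here is one way that exercise might look—a sketch of our own and certainly not the only solution. Note that a while loop makes us manage the index ourselves:

xCube <- vector(mode = "numeric", length = length(x))
i <- 1
while (i <= length(x)) {
  xCube[i] <- x[i]^3   ## the same work the for loop did
  i <- i + 1           ## increment manually, or the loop never ends
}
head(xCube)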

Perhaps an example near and dear to the heart of researchers everywhere is simulation. After all, if we can simulate data, that may be the first step to understanding what our real-world results may be. Statistics are baked into R, and the function rnorm() takes up to three inputs. The first controls the number of elements we want in our sample. The second and third control the mean and standard deviation of the population from which the sample is randomly drawn. Let us take a look at what rnorm() gives us:

for (i in 1:5) {
+   x <- rnorm(5, 4, 2)
+   print(x)
+   print(paste0("Xbar: ", mean(x)))
+   print(paste0("StdDev: ", sd(x)))
+ }
[1] 2.0287126 4.5664489 0.1310789 3.6347667 4.2465319
[1] "Xbar: 2.92150781627209"
[1] "StdDev: 1.84077667180739"
[1] 5.806652 2.707348 4.673238 4.646211 6.496605
[1] "Xbar: 4.86601066066082"
[1] "StdDev: 1.43952629897616"
[1] 1.6476375 0.8618630 0.2519337 0.6902068 1.0062915
[1] "Xbar: 0.8915864810408"
[1] "StdDev: 0.508764053558126"
[1] 5.162450 3.513966 6.503733 3.736576 5.503810
[1] "Xbar: 4.88410690747199"
[1] "StdDev: 1.25287757407054"
[1] 4.719578 1.197946 2.751889 4.451728 2.487819
[1] "Xbar: 3.12179207747434"
[1] "StdDev: 1.46300892878659"

What we find are 5 samples randomly pulled from the same population with a mean of 4 and a standard deviation of 2. However, notice that the sample means range from 0.89 to 4.88. Suppose we need a pseudorandom sample such as this, but want to demonstrate to our students why point estimates from random samples require caution. Say we want a random sample pulled from our population whose sample mean is in fact 10 or higher. Well, a while loop might be just the answer:

y <- 3              
while(mean(y)< 10){y <- rnorm(5, mean = 4.0, sd = 2)}

We allowed this code to run for about a minute before we put a stop to it. The reason, of course, is that this type of search could take a while, and runs dangerously close to being an infinite loop. Normal populations whose means are 4 with standard deviations of 2 tend to have few elements that are 10 or higher. Randomly sampling enough of those elements that the sample mean is 10 or higher starts to be unlikely. We set y to 3 just so that the while code would run at least once. Here we repeat our loop with something more reasonable—a mean of 6:

y <- 3              
while(mean(y)< 6){y <- rnorm(5, mean = 4.0, sd = 2)}
mean(y)
[1] 6.188268
y
[1] 11.126657  4.807775  3.703673  3.966180  7.337056

Notice that y is now a rather strange sample, compared to what we know the population to be. This is, of course, an argument for larger sample sizes, but it also gives us data that both fits our facts and tests our assumptions. Counterexamples and odd data can be quite useful to stress test code or models.

The last loop we will look at for the moment is repeat, which differs from while and for loops. In particular, repeat, unlike while or for, takes no arguments that would ever have a chance to stop it; it simply keeps repeating, over and over. Of course, there has to be a way to stop this, or we are in trouble. We discuss such techniques in the next section, “Flow Control.” First, however, some observations about loops.

Loops are meant to perform a task as many times as required. In R, we control these tasks in three ways. If the task should be done to a certain count, a for loop is likely best; an integer iterator controls the start and stop of the for loop. If the task is one we want done until a certain condition occurs, a while loop may be best. In particular, a while loop tests its condition before the loop body runs. This is critical! If the condition is already met, the while loop never runs at all (a tiny illustration follows this paragraph). In the short examples you have seen so far, this may not seem possible or apparent, and that is okay; our goal here is to introduce these constructs in a way that easily brings them to mind when you face a new problem. All the same, it bears repeating: a while loop first checks whether it needs to run, then runs once, checks again, and so on until the stop condition occurs. The last loop we discuss, repeat, is different. It simply runs. We have to stop it somehow, and we do that manually in the next section. The point is, repeat never checks whether it needs to run; it just runs. So if you want to run a process at least once, repeat may be the construct you seek.
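Here is that tiny illustration, a toy example of our own. The condition is FALSE on the very first check, so the body never executes:

i <- 10
while (i < 5) {
  print(i)     ## never reached: 10 < 5 is FALSE before the first pass
  i <- i + 1
}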

We hold off on giving an example of a repeat for now, mostly to avoid infinite loops. We’ll show an example soon. First, we need a way to control the flow of our code.

Flow Control

Code flow control happens in several ways, but our interests lie in just four of them: if/else statements, next, break, and the repeat loop. This control works by performing tests to decide whether one action or another should be taken, or by modifying the behavior of the loops you have just seen. We present each command in turn and, at the end of this section, return to the repeat loop mentioned in the prior section.

If/else statements are a standard part of logic, and they make up a standard part of programming as well. In fact, if/else is used so often that it comes in two flavors. When the else part is simply to carry on as usual, it is standard practice not to include that part in the code at all.
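As a quick toy example of our own (the variable w is just a throwaway name), here is the else-free flavor; when the test fails, R simply carries on to the next line of the script:

w <- 5
if (w > 0) {
  print("w is positive")   ## runs only when the condition is TRUE
}
## no else branch: if w were negative, nothing would happen here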

Let us take that sample of values for which we forced a sample mean greater than 6. Say we want to see a box-and-whisker plot if we were successful, but otherwise want to see a histogram (drawn here with histogram() from the lattice package, which we assume is loaded). We might start off with the following broken code, along with its Figure 3-1 output:

Figure 3-1. The else output of both the preceding and the following code
if(y>6){                
+   boxplot(y)
+ }
Warning message:
In if (y > 6) { :
  the condition has length > 1 and only the first element will be used


else{
Error: unexpected 'else' in "else"
   histogram(y)
}
Error: unexpected '}' in "}"

We broke this code for two reasons. The first is to demonstrate that although flow control can be a useful feature, it is only as good as your logic. We meant to test mean(y) > 6 and instead tested something else; R is helpful enough to warn us that all may not be well, but all the same, we should be cautious. The second reason is to note that this if/else statement is structured incorrectly. The else line itself fails, yet the histogram(y) line inside it still runs as a stand-alone expression, so the histogram is drawn regardless; R does its best to warn us that the else was not expected, and we should take heed. This modified set of code creates two images, shown in Figure 3-1 and Figure 3-2. We show the code as follows, along with Figure 3-2, which is the output of the successfully executed if portion:

Figure 3-2. The if output of the preceding code, which has been fixed on the control level but not the else portion
if(mean(y)>6){                
+   boxplot(y)
+ }


else{
Error: unexpected 'else' in "else"
   histogram(y)
}
Error: unexpected '}' in "}"

This is interesting for three reasons. First, as you saw in our first broken example, if is fully capable of standing alone as a piece of flow control: the condition evaluates, and if true, the code runs; if false, it does not. Second, although less usefully, the body of an orphaned else still runs as stand-alone code (the else itself controls nothing and merely throws an error). Third, it is a good object lesson that both the logic of the flow control and the structure of the code must be set up correctly to work properly. Next, we show the code as we originally envisioned it. It outputs only the box-and-whisker plot of Figure 3-2. Notice that the else needs to be on the same line as the closing } of the if:

if(mean(y)>6){              
+   boxplot(y)
+ } else{
+   histogram(y)
+ }

The other two constructs we use are somewhat similar to each other. We use next to skip one pass through a loop, and break to end the loop entirely. Both keep the script running; they simply control what happens inside the loop in which they are called. Take a look at the following code to see what happens:

i <- 0
while(i < 5){
+   i <- i + 1
+   if (i == 2) {next}
+   print(i)
+ }
[1] 1
[1] 3
[1] 4
[1] 5

It is important to recognize that the location of next is significant. We could order those three lines of code inside the while loop in six ways. At least one ordering creates an infinite loop, and at least one has no noticeable influence whatsoever. Of course, as you just saw, there is also an ordering that skips the number 2. This also demonstrates that while can behave a lot like a for loop.
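For instance, here is the ordering that loops forever, shown commented out so it is not run by accident. With the next check above the increment, and starting again from i <- 0, i stalls at 2:

## while(i < 5){
##   if (i == 2) {next}   ## once i reaches 2, this fires on every pass...
##   i <- i + 1           ## ...so the increment is never reached again
##   print(i)
## }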

We close this section with a real look at the repeat construct. It is ideal when it is unclear how many times an action may need to be repeated. Again, repeat is similar to while, except that the code always runs at least once. It is also perhaps easier to have more than one exit criterion. Granted, break can be used in while as well to create multiple exit rules; however, for human-readable code, it may be easier to have multiple break statements near one another (we sketch a two-exit version after the following example). In the code below, we also recycle our iterator i so that it is easy to see that it takes 377 attempts to find a randomly drawn value that is more than three standard deviations above the mean:

z <- NULL
i <- 1
repeat{
+   z <- rnorm(1, 4, 2)
+   i <- i + 1
+   if (z > 10) {break}
+ }
z
[1] 10.69076
i
[1] 378
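To illustrate keeping multiple break statements near one another, here is a sketch of our own with a second exit rule acting as a safety valve; the 10,000 cap is an arbitrary number we chose:

z <- NULL
i <- 1
repeat{
  z <- rnorm(1, 4, 2)
  if (z > 10) {break}      ## success: found our extreme value
  if (i >= 10000) {break}  ## safety valve: stop after too many attempts
  i <- i + 1
}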

*apply Family of Functions

Our last set of functions for this chapter is the *apply family. Generally speaking, these functions take two primary inputs: an object with several elements, and a function applied to those elements in turn. The various flavors handle different use cases, and we also introduce some error checking where possible.

We start with lapply(), which takes a list (or a vector, which it coerces to one) and applies the given function to each element. As its prefix suggests, lapply() returns a list holding the results:

xL <- 1:5                
xL
[1] 1 2 3 4 5


## returns a list
lapply(xL, is.integer)
[[1]]
[1] TRUE


[[2]]
[1] TRUE


[[3]]
[1] TRUE


[[4]]
[1] TRUE


[[5]]
[1] TRUE

As you can see, this is fairly messy. To simplify, we use almost the same function call, but with the s prefix (for simplify) as follows:

sapply(xL, is.integer)              
[1] TRUE TRUE TRUE TRUE TRUE

Although both of these are helpful enough in their own right, they depend on correct input. Of course, our code depends on such correct input (the phrase garbage in, garbage out comes to mind). Nevertheless, we can signal to R the type of result we expect with the vapply() function. It takes a third input, FUN.VALUE, which is set to a template of the output we expect. What happens when the function does not return the correct type of value? Here we run two lines of code, telling our function to expect logical results (because, by default, NA is a logical). In our second attempt, R still expects the same type, but gets a double instead:

vapply(xL, is.integer, FUN.VALUE = NA)                
[1] TRUE TRUE TRUE TRUE TRUE


vapply(xL, mean, FUN.VALUE = NA)
Error in vapply(xL, mean, FUN.VALUE = NA) :
  values must be type 'logical',
 but FUN(X[[1]]) result is type 'double'

We run a similar set of code, this time telling R to expect each result to be a vector of length 2. Instead, it gets a vector of length 1: the correct type, yet the wrong length:

vapply(xL, is.integer, FUN.VALUE = c(NA, NA))              
Error in vapply(xL, is.integer, FUN.VALUE = c(NA, NA)) :
  values must be length 2,
 but FUN(X[[1]]) result is length 1

Let us take a look at a more interesting list. Notice that we have both integers and characters in this list yL:

yL <- list(a = 1:5, b = c("6", "7", "8"), c = 9:10)
yL
$a
[1] 1 2 3 4 5


$b
[1] "6" "7" "8"


$c
[1]  9 10
yL$b[2]
[1] "7"
is.character(yL$b[2])
[1] TRUE
is.integer(yL$b[2])
[1] FALSE

Suppose we are interested in summary statistics for each element in our list. We could certainly apply the summary() function to each element, and R would cheerfully do so, and not even throw an error at us!

lapply(yL, summary)                
$a
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      1       2       3       3       4       5


$b
   Length     Class      Mode
        3 character character


$c
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   9.00    9.25    9.50    9.50    9.75   10.00

Notice, however, that there are some issues. The character vector b is an easy sort of data-collection typo to have, depending on how information was collected in the field and coded. Luckily, if we use a better *apply function and give R a heads-up on what to expect, we can potentially save ourselves some grief later. Building some form of data validation into code can be helpful. Of course, this does not prevent all issues, by any means; it simply tends to make life a bit safer.

vapply(yL, summary, c(Min. = 0, `1st Qu.` = 0, Median = 0, Mean = 0, `3rd Qu.` = 0, Max. = 0))              
Error in vapply(yL, summary, c(Min. = 0, `1st Qu.` = 0, Median = 0, Mean = 0,  :
  values must be length 6,
 but FUN(X[[2]]) result is length 3

The mtcars data frame comes from a 1974 issue of Motor Trend magazine. We are interested in the miles per gallon and the number of cylinders. This data set is a useful way to explore various functions, as it ships with R. Here we take a look at the first six entries to better understand our raw data:

head(mtcars[, c("mpg", "cyl")])              
                   mpg cyl
Mazda RX4         21.0   6
Mazda RX4 Wag     21.0   6
Datsun 710        22.8   4
Hornet 4 Drive    21.4   6
Hornet Sportabout 18.7   8
Valiant           18.1   6

We run tapply() on the mpg column of mtcars, grouping by the number of cylinders. In Figure 3-3, we show what this looks like as a bar plot.

Figure 3-3. mtcars data frame broken out by number of cylinders

Notice that in our function call this time, we explicitly state each of the inputs by name. That is to say, X, INDEX, and FUN are the names for the data, the grouping factor, and the function to apply, respectively. We also show almost the same code with the simplify = FALSE option, which returns a list:

tapply(                
+   X = mtcars$mpg,
+   INDEX = mtcars$cyl,
+   FUN = mean)
       4        6        8
26.66364 19.74286 15.10000


tapply(
+   X = mtcars$mpg,
+   INDEX = mtcars$cyl,
+   FUN = mean,
+   simplify = FALSE)
$`4`
[1] 26.66364


$`6`
[1] 19.74286


$`8`
[1] 15.1

When our data structure is a data frame, matrix, or array, the apply() function is often a good choice. Because these objects tend to have more dimensions, this function takes three inputs: the data, the margin to apply over (MARGIN = 1 for rows, MARGIN = 2 for columns), and, of course, the function to be implemented. We perform this on the entire mtcars data set, first by columns and then by rows, to get standard deviations. We also first show what the mtcars data looks like in its entirety. Notice that it makes more sense to perform this by columns; a standard deviation taken across a row mixes wildly different units. Because we are neither car enthusiasts nor particularly mechanically inclined, this last set of code and output may not make much sense:

head(mtcars)                
                   mpg cyl disp  hp drat    wt  qsec vs am gear carb
Mazda RX4         21.0   6  160 110 3.90 2.620 16.46  0  1    4    4
Mazda RX4 Wag     21.0   6  160 110 3.90 2.875 17.02  0  1    4    4
Datsun 710        22.8   4  108  93 3.85 2.320 18.61  1  1    4    1
Hornet 4 Drive    21.4   6  258 110 3.08 3.215 19.44  1  0    3    1
Hornet Sportabout 18.7   8  360 175 3.15 3.440 17.02  0  0    3    2
Valiant           18.1   6  225 105 2.76 3.460 20.22  1  0    3    1
apply(mtcars, MARGIN = 2, FUN = sd)
        mpg         cyl        disp          hp        drat          wt        qsec          vs          am        gear        carb
  6.0269481   1.7859216 123.9386938  68.5628685   0.5346787   0.9784574   1.7869432   0.5040161   0.4989909   0.7378041   1.6152000


apply(mtcars, MARGIN = 1, FUN = sd)
          Mazda RX4       Mazda RX4 Wag          Datsun 710      Hornet 4 Drive   Hornet Sportabout             Valiant
           53.53888            53.51210            38.86999            79.40933           113.70330            69.95729
         Duster 360           Merc 240D            Merc 230            Merc 280           Merc 280C          Merc 450SE
          122.86626            44.43599            46.68811            57.31739            57.33609            92.42901
         Merc 450SL         Merc 450SLC  Cadillac Fleetwood Lincoln Continental   Chrysler Imperial            Fiat 128
           92.40957            92.46342           147.34689           145.04382           141.16366            28.06702
        Honda Civic      Toyota Corolla       Toyota Corona    Dodge Challenger         AMC Javelin          Camaro Z28
           25.10753            26.43470            42.34211            99.84844            96.05271           120.45178
   Pontiac Firebird           Fiat X1-9       Porsche 914-2        Lotus Europa      Ford Pantera L        Ferrari Dino
          124.56688            27.95293            41.26033            40.34473           123.53038            62.68824
      Maserati Bora          Volvo 142E
          126.32051            44.49888

When we want to apply the same function to more than one set of data, we have a multivariate scenario. In this case, mapply() is our function of choice. Unlike the prior examples, it takes the function we want to apply as its first argument, and from there takes inputs of strictly positive length. We show the plotted results of the following code in Figure 3-4. As you may recall from Beginning R, the par() function provides a way to fit multiple graphs into a single image:

Figure 3-4. The function plot applied to two subsets of mtcars
par(mfrow = c(2, 1))                
mapply(plot, mtcars[, c("mpg", "hp")], mtcars[, c("disp", "qsec")])
$mpg
NULL


$hp
NULL

Recursion is a powerful programming idea, and we explore it further in later chapters. For now, consider this example: suppose a line of people wraps around the corner of a building, and you are not sure how long the line stretches. You could step out of your place and walk slowly to the front, counting people as you go. This is regular iteration—which, just like loops, is what the apply functions have mostly been doing for us. On the other hand, you could keep your place in line and ask the person in front of you, “How many people are in front of you?” If each person repeats your question, eventually it reaches the front of the line, and that person turns around and says, “Zero!” As the answers pass back through the line, each person adds one, and eventually you have your count.
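Although we postpone writing our own functions until the next chapter, here is a small preview of the line-counting idea as recursive R code; the function count_line is our own invention for this sketch:

count_line <- function(line) {
  if (length(line) == 0) return(0)   ## the front of the line answers "Zero!"
  1 + count_line(line[-1])           ## this person, plus everyone ahead
}
count_line(c("you", "neighbor", "front"))
## [1] 3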

Let’s take a look at the nested list zL. What if we try lapply() on it? This would not be good, because lapply() passes each element (here, each sublist) to mean(), and mean() cannot handle lists! The following code shows both our list and our first effort to find the arithmetic average:

zL <- list(a = list(1, 2, 3:5), b = list(8:12))
zL
$a
$a[[1]]
[1] 1


$a[[2]]
[1] 2


$a[[3]]
[1] 3 4 5


$b
$b[[1]]
[1]  8  9 10 11 12


lapply(zL, FUN = mean)
$a
[1] NA


$b
[1] NA


Warning messages:
1: In mean.default(X[[i]], ...) :
  argument is not numeric or logical: returning NA
2: In mean.default(X[[i]], ...) :
  argument is not numeric or logical: returning NA

The recursive apply function, rapply(), is the solution. Depending on the particular results wanted, this function has some interesting options; a check of the help files is highly recommended, although we do not delve deeper into this function at this time.

rapply(zL, f = mean)              
a1 a2 a3  b
 1  2  4 10

It is possible to use “fancier” functions than just mean() or sd(). In the next chapter, we delve more deeply into what these functions might look like and what they might do; a small taste follows. After that, let’s take a moment to discuss environment.
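As that small taste (a sketch of our own), an anonymous function can be handed straight to sapply(); here we compute a crude coefficient of variation, the standard deviation over the mean, for each column of mtcars:

sapply(mtcars, function(column) sd(column) / mean(column))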

Environment is the “world” in which our functions and variables live. We have mostly been residing in the global environment. Functions create their own mini-environments, which is one reason we take care to build container variables ahead of time when we want results to remain available after the work is done. We can take a look at our environment and list the variables currently in play by using the environment() and ls() functions. Notice that we are living in the global environment, and most of the variables we have been using are present:

environment()                
<environment: R_GlobalEnv>


ls()
 [1] "forTime"  "forTime2" "i"        "vTime"    "x"        "xCube"    "xL"       "y"        "yL"       "z"        "zL"      

But of course, an environment is in many ways simply one more collection of objects, and at times it is convenient to manipulate it as such. Thus, our last function is eapply(). Here we take a look at the class of every object in our environment; these types should be familiar from prior chapters:

eapply(env = environment(), FUN = class)                
$xCube
[1] "numeric"


$x
[1] "numeric"


$forTime2
[1] "proc_time"


$y
[1] "numeric"


$z
[1] "numeric"


$forTime
[1] "proc_time"


$yL
[1] "list"


$vTime
[1] "proc_time"


$i
[1] "numeric"


$xL
[1] "integer"


$zL
[1] "list"

Final Thoughts

You have seen several powerful tools for automation—from loops of various sorts that control what happens repeatedly (and indeed, how often), to more specialized tools that are streamlined to cope with the usual suspects in data sets that we are likely to encounter. These techniques are helpful and are often used. However, they are not enough. What if new techniques are developed? How does mean() work? What if we need to create our own functions? In the next chapter, we explore functions both in terms of their theoretical framework and their practical applications. The combination of custom functions and loops is the bread and butter of coercing computers to do the busy work for us.
