Sampling from distributions

Observing the outcome of trials that involve a random variable, a variable whose value changes due to chance, can be thought of as sampling from a probability distribution—one that describes the likelihood of each member of the sample space occurring.

That sentence probably sounds much scarier than it needs to be. Take a die roll, for example.

Figure 4.1: Probability distribution of outcomes of a die roll

Each roll of a die is like sampling from a discrete probability distribution for which each outcome in the sample space has a probability of 0.167, or 1/6. This is an example of a uniform distribution, because all of the outcomes are equally likely to occur. Further, there are a finite number of outcomes, so this is a discrete uniform distribution (there also exist continuous uniform distributions).
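If you have R handy, you can mimic this kind of sampling with the built-in sample function. Here's a minimal sketch; since the rolls are random, the particular numbers you see will differ from run to run:

  > # simulate ten rolls of a fair six-sided die (results will vary)
  > sample(1:6, size=10, replace=TRUE)
  [1] 5 2 6 1 3 3 4 6 1 2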

Flipping a coin is like sampling from a uniform distribution with only two outcomes. More specifically, the probability distribution that describes coin-flip events is called a Bernoulli distribution: a distribution that describes an event with only two possible outcomes.
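A coin flip can be simulated the same way; this is just a sketch, and the labels heads and tails are my own choice:

  > # simulate five flips of a fair coin (results will vary)
  > sample(c("heads", "tails"), size=5, replace=TRUE)
  [1] "tails" "heads" "heads" "tails" "heads"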

Parameters

We use probability distributions to describe the behavior of random variables because they are easy to compute with and give us a lot of information about how a variable behaves. But before we can perform computations with a probability distribution, we have to specify its parameters. These parameters determine exactly what the distribution looks like and how it behaves.

For example, the behavior of both a 6-sided die and a 12-sided die is modeled with a uniform distribution. Even though both dice are modeled with uniform distributions, the behavior of each is a little different. To pin down the behavior of each, we specify the distribution's parameter; in the case of the (discrete) uniform distribution, the parameter is called n. A uniform distribution with parameter n has n equally likely outcomes, each with probability 1/n. The n for a 6-sided die and a 12-sided die is 6 and 12, respectively.
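One way to convince yourself of that 1/n figure is to simulate a large number of rolls and inspect the proportions. The sketch below uses the 12-sided die; the proportions won't be exact, but they should all hover near 1/12:

  > # proportions of each face over many simulated 12-sided die rolls
  > rolls <- sample(1:12, size=100000, replace=TRUE)
  > prop.table(table(rolls))   # each proportion should be close to 1/12 (about 0.083)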

For a Bernoulli distribution, which describes the probability distribution of an event with only two outcomes, the parameter is p. One outcome occurs with probability p, and the other occurs with probability 1 - p, because the two outcomes are mutually exclusive and collectively exhaustive. The flip of a fair coin is modeled as a Bernoulli distribution with p = 0.5.
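R doesn't have a dedicated Bernoulli function, but a Bernoulli draw is just a binomial draw with a single trial, so rbinom with size=1 does the job; a quick sketch (your 0s and 1s will differ):

  > # ten Bernoulli(p = 0.5) trials: 1 for heads, 0 for tails (results will vary)
  > rbinom(10, size=1, prob=0.5)
  [1] 1 0 0 1 1 0 1 1 0 0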

Imagine a six-sided die with one side labeled 1 and the other five sides labeled 2. The outcome of the die roll trials can be described with a Bernoulli distribution, too! This time, p = 1/6 (about 0.167), the probability of rolling a 1. Therefore, the probability of the other outcome, not rolling a 1, is 1 - p = 5/6.
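To simulate this lopsided die in R, you can hand sample a vector of unequal probabilities; this is just a sketch, and your counts will vary:

  > # 1,000 rolls of a die with one face labeled 1 and five faces labeled 2
  > lopsided <- sample(c(1, 2), size=1000, replace=TRUE, prob=c(1/6, 5/6))
  > table(lopsided)   # expect roughly 167 ones and 833 twos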

The binomial distribution

The binomial distribution is a fun one. Like our uniform distribution described in the previous section, it is discrete.

When an event has two possible outcomes, success or failure, this distribution describes the number of successes in a certain number of independent trials. Its parameters are n, the number of trials, and p, the probability of success.
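A binomial count is just the sum of that many Bernoulli trials, which is easy to see in R. The n=10 and p=0.3 values below are arbitrary choices for illustration, and the outputs will vary:

  > # number of successes in 10 trials with success probability 0.3,
  > # built up from 10 individual Bernoulli trials...
  > sum(rbinom(10, size=1, prob=0.3))
  [1] 4
  > # ...or drawn directly from the binomial distribution
  > rbinom(1, size=10, prob=0.3)
  [1] 2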

Concretely, a binomial distribution with n=1 and p=0.5 describes the behavior of a single coin flip, if we choose to view heads as a success (we could just as well view tails as the success). A binomial distribution with n=30 and p=0.5 describes the number of heads we should expect if we flip a fair coin 30 times.

Figure 4.2: A binomial distribution (n=30, p=0.5)

On average, of course, we would expect to have 15 heads. However, randomness is the name of the game, and seeing more or fewer heads is totally expected.
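You can get a feel for that spread by simulating many batches of 30 flips; the sketch below just averages the head counts, which should land very close to 15:

  > # simulate 10,000 experiments of 30 coin flips each
  > head.counts <- rbinom(10000, size=30, prob=0.5)
  > mean(head.counts)    # should be very close to 15
  > range(head.counts)   # the most extreme head counts seen across the 10,000 experiments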

"How can we use the binomial distribution in practice?" you ask. Well, let's look at an application.

Larry the Untrustworthy Knave—who can only be trusted some of the time—gives us a coin that he alleges is fair. We flip it 30 times and observe 10 heads.

It turns out that the probability of getting exactly 10 heads on 30 flips is about 2.8%*. We can use R to tell us the probability of getting 10 or fewer heads using the pbinom function:

  > pbinom(10, size=30, prob=.5)
  [1] 0.04936857

It appears that the probability of this occurring with a correctly balanced coin is roughly 5%. Do you think we should take Larry at his word?

Note

*If you're interested

The way we determined the probability of getting exactly 10 heads is by using the probability formula for Bernoulli trials. The probability of getting k successes in n trials is equal to:

$$\binom{n}{k}\, p^{k}\, (1 - p)^{n - k}$$

where p is the probability of getting one success and:

$$\binom{n}{k} = \frac{n!}{k!\,(n - k)!}$$
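If you'd like to check this formula against R, the pieces line up with the built-in choose and dbinom functions; here's a quick sketch using Larry's coin (k = 10, n = 30, p = 0.5):

  > # the binomial coefficient times p^k times (1 - p)^(n - k)...
  > choose(30, 10) * 0.5^10 * (1 - 0.5)^(30 - 10)
  [1] 0.0279816
  > # ...agrees with R's built-in binomial probability function
  > dbinom(10, size=30, prob=0.5)
  [1] 0.0279816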

If your palms are getting sweaty, don't worry. You don't have to memorize this in order to understand any later concepts in this book.
