
Law of Large Numbers – A Deep Dive into the World of Statistics

The Law of Large Numbers (LLN) is a fundamental theorem in probability and statistics, serving as the basis for many concepts and practices in the field. If you’ve ever heard the saying “the more the better,” you can think of the LLN as the mathematical rendition of that proverb.

In this blog post, we’ll dive into the depths of the Law of Large Numbers, its implications, and its importance in statistical analysis. Here’s what we’ll cover:

  1. Simple Explanation of the Law of Large Numbers
  2. Coin-flip Python simulation of the LLN
  3. What is the Law of Large Numbers (LLN)?
  4. Two primary forms of the LLN
    4.1. Weak Law of Large Numbers (WLLN)
    4.2. Strong Law of Large Numbers (SLLN)
  5. Implications and Importance
  6. Common Misconceptions
  7. Practical Applications
  8. Conclusion

1. Simple Explanation of the Law of Large Numbers

Imagine you flip a coin. It can land either heads or tails. Now, if you flip it just once or twice, you might get heads both times, even though we expect heads and tails to be equally likely. So, from just a couple of flips, you might think that getting heads is more common.

But, if you keep flipping the coin, say 1000 times, you’ll likely find that the number of heads and tails becomes closer to even – roughly 500 heads and 500 tails.

The Law of Large Numbers basically says that the more times you repeat a random experiment (like flipping a coin), the closer the average outcome (like the percentage of heads) will get to the expected value (50% heads and 50% tails, in this case).

In simpler words: In the short run, randomness can seem unpredictable and chaotic, but given enough repetitions, things tend to average out to what we expect.

2. Coin-flip Python simulation of the LLN

import random
random.seed(7)

def coin_flip():
    # This function simulates a coin flip: "H" for heads and "T" for tails.
    return "H" if random.random() < 0.5 else "T"

def simulate_flips(n):
    # This function simulates 'n' coin flips and returns the proportion of heads.
    heads = sum(1 for _ in range(n) if coin_flip() == "H")
    return heads / n

# Simulate coin flips for different numbers of flips:
for num_flips in [10, 100, 1000, 10000, 100000]:
    proportion_heads = simulate_flips(num_flips)
    print(f"After {num_flips} flips, the proportion of heads is: {proportion_heads:.4f}")

Output:

After 10 flips, the proportion of heads is: 0.7000
After 100 flips, the proportion of heads is: 0.5500
After 1000 flips, the proportion of heads is: 0.5290
After 10000 flips, the proportion of heads is: 0.4993
After 100000 flips, the proportion of heads is: 0.5014

Because the code calls random.seed(7), you’ll get the exact output shown above every time. If you change or remove the seed, the results for smaller numbers of flips (like 10 or 100) will vary noticeably from run to run. But as you increase the number of flips to larger values (like 10,000 or 100,000), the proportion of heads gets closer and closer to 0.5, the expected proportion for a fair coin.

3. What is the Law of Large Numbers (LLN)?

The LLN states that as the size of a sample drawn from a population increases, the sample mean approaches the population mean. In simpler terms, the more observations you have, the closer your average will get to the ‘true’ average of the entire population.
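
Written out, assuming independent, identically distributed observations $X_1, X_2, \ldots$ with finite mean $\mu$, the sample mean after $n$ observations is

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i,$$

and the LLN says that $\bar{X}_n$ approaches $\mu$ as $n$ grows. The two classical forms below make “approaches” precise in different ways.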

4. Two primary forms of the LLN

4.1. Weak Law of Large Numbers (WLLN)

As the number of trials or observations increases, the sample average converges in probability to the expected value. This means that the probability that the sample average deviates from the expected value by more than a small ε approaches zero as the number of trials goes to infinity.
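
Using the notation from above, for every $\varepsilon > 0$:

$$\lim_{n \to \infty} P\left(\left|\bar{X}_n - \mu\right| > \varepsilon\right) = 0.$$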

4.2. Strong Law of Large Numbers (SLLN)

This states that the sample average almost surely converges to the expected value. In technical terms, the probability that the sample average converges to the expected value is 1.
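
In symbols:

$$P\left(\lim_{n \to \infty} \bar{X}_n = \mu\right) = 1.$$

Note the difference: the WLLN bounds the probability of a deviation at each large $n$, while the SLLN says the entire sequence of sample averages settles on $\mu$ with probability 1.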

5. Implications and Importance

  • Consistency in Averages: In real-world scenarios, if you’re flipping a fair coin, the LLN suggests that the longer you flip the coin, the closer your proportion of heads (or tails) will get to 0.5. This doesn’t mean you’ll get an equal number of heads and tails, but the proportion will tend towards 0.5.

  • Risk and Uncertainty: For businesses and investors, the LLN helps explain why diversification reduces risk. As you spread money across more independent assets, the random fluctuations of individual investments tend to average out, and the portfolio’s overall return moves closer to its expected value.

  • Sampling: In statistics, it’s usually not feasible to measure an entire population. Instead, we take samples. The LLN assures us that if our sample is large enough, sample statistics (like the mean) will be good estimators of the corresponding population statistics, as the quick sketch below illustrates.
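
Here’s a minimal sketch of that sampling point (the population and its mean are invented for illustration, and it assumes NumPy is installed): we draw increasingly large samples from a skewed exponential population with mean 2.0 and watch the sample mean settle toward it.

import numpy as np

rng = np.random.default_rng(7)
population_mean = 2.0  # hypothetical population: exponential with mean 2.0

# Draw increasingly large samples and compare each sample mean
# to the known population mean.
for n in [10, 100, 1_000, 10_000, 100_000]:
    sample = rng.exponential(scale=population_mean, size=n)
    print(f"n = {n:>6}: sample mean = {sample.mean():.4f} "
          f"(population mean = {population_mean})")

The exponential distribution is deliberately skewed: individual observations are quite variable, yet the sample mean still homes in on 2.0 as n grows.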

6. Common Misconceptions

  • It does not guarantee a particular outcome: The LLN does not say that if you’ve flipped heads 10 times in a row, the next flip is more likely to be tails. Each coin flip is independent.

  • It does not eliminate the possibility of streaks: Even with large numbers, you can still see streaks or patterns that look non-random. Over the long run, however, these average out, as the simulation below shows.
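
Both points are easy to check empirically. The sketch below (the setup is mine for illustration: a fair coin, one million simulated flips) measures how often a flip that immediately follows five heads in a row comes up heads, and also reports the longest streak it finds. Independence predicts the conditional proportion stays near 0.5, yet long streaks still occur.

import random

random.seed(7)

flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Gambler's fallacy check: among flips that follow five heads in a row,
# how often is the next flip heads? Independence predicts about 0.5.
after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]
print(f"P(heads | previous 5 were heads) ~ "
      f"{sum(after_streak) / len(after_streak):.4f}")

# Streaks still happen: find the longest run of identical outcomes.
longest = current = 1
for prev, cur in zip(flips, flips[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)
print(f"Longest streak in {len(flips):,} flips: {longest}")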

7. Practical Applications

1. Insurance: Insurance companies rely heavily on the LLN. They can’t predict when a particular individual will make a claim, but with a large group, they can predict with reasonable accuracy the number of claims in a given period (the short simulation after this list shows why).

2. Quality Control: Manufacturing companies produce thousands of items. By examining a sample, and leveraging the LLN, they can infer the quality of the entire batch.

3. Polling and Surveys: Political and market research polls don’t survey everyone. They rely on the LLN to ensure that their sample estimates reflect the entire population.
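
To make the insurance example concrete, here’s a small sketch; the portfolio size and claim probability are invented for illustration. No single policyholder’s claim is predictable, but across 100,000 of them the yearly total stays close to its expected value:

import random

random.seed(7)

num_policyholders = 100_000  # hypothetical portfolio size
claim_probability = 0.02     # hypothetical chance a customer claims this year
expected_claims = num_policyholders * claim_probability  # 2,000

# Simulate five years: each policyholder independently files a claim
# with the given probability, and we count the yearly totals.
for year in range(1, 6):
    claims = sum(1 for _ in range(num_policyholders)
                 if random.random() < claim_probability)
    print(f"Year {year}: {claims} claims (expected {expected_claims:.0f})")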

8. Conclusion

The Law of Large Numbers tells us that while randomness can lead to unpredictable results in the short term, over many repetitions, outcomes tend to stabilize and become more predictable.
