
What does the Python Global Interpreter Lock (GIL) do?

Python's Global Interpreter Lock (GIL) allows only one thread to execute Python bytecode at a time. It is often a hurdle, because it prevents multi-threaded Python programs from running in parallel to save time. This post will tell you what exactly the GIL is and why it is needed, and will also walk you through the alternatives available for dealing with it.

What is GIL?

The Global Interpreter Lock (GIL) is a process-level lock in Python. As you can guess, it "locks" something from happening. That something is parallel execution of threads. Strictly speaking, the GIL does not forbid you from creating threads; it ensures that only one thread executes Python bytecode at any given moment, which is often considered a disadvantage. To understand why the GIL is so infamous, let's learn about multithreading first.

So, What is Multithreading?

A thread refers to a separate flow of execution.

Multithreading means that two or more flows of execution make progress at the same time. Threads within a process share the same memory and resources, which makes them lightweight and can save a large amount of memory and computation time.

Multithreading seems so amazing, right? Unfortunately, Python threads cannot run Python code in parallel. There is a good reason for it.

In Python, only one thread can execute bytecode at a time because of the GIL. While many programs we execute are single-threaded, some have a multi-threaded architecture. In those cases, the GIL has a negative impact on performance. I will demonstrate this impact with examples in later sections.
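Before going further, here is a minimal sketch of Python's threading API: two threads run the same function and the main thread waits for both with `join()`. The function name and messages are just illustrative.

```python
# A minimal sketch of the threading API: create, start, and join threads.
from threading import Thread

results = []

def greet(name):
    # Each thread appends its own message to the shared list.
    results.append('Hello from ' + name)

t1 = Thread(target=greet, args=('thread-1',))
t2 = Thread(target=greet, args=('thread-2',))
t1.start()
t2.start()
t1.join()     # wait for both threads to finish
t2.join()

print(sorted(results))
#> ['Hello from thread-1', 'Hello from thread-2']
```

Both threads share the `results` list, which illustrates how threads in one process see the same memory.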

Why does python need GIL?

So far, we know that the GIL restricts parallel execution and reduces efficiency. Despite this, Python uses the GIL. Why?

Unlike many other programming languages, Python (more precisely, the CPython interpreter) uses a "reference counter" for memory management. When an object is created in Python, it has a reference-count field dedicated to it, which keeps track of the number of references pointing to that object. Consider the example below. You can read the reference count through the sys.getrefcount() function.

import sys

my_variable = 'apple'
x = my_variable
print(sys.getrefcount(my_variable))

#> 3

Observe the above code and output. The object is referenced 3 times: first, when it was created and bound to my_variable; then, when it was assigned to x; and lastly, when it was passed as an argument to getrefcount().

When this count becomes 0, the object is released from memory. I hope you are clear about the reference counter now. This reference counter needs to be protected from concurrent modification, so that objects are not accidentally released from memory while still in use, and that is exactly what the GIL does.
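You can watch the count go down as references disappear. The sketch below uses a list (rather than a string, which CPython may intern) so the counts are predictable; the variable names are just illustrative.

```python
import sys

a = ['apple']                     # one reference: the name a
b = a                             # a second reference: the name b
count_with_b = sys.getrefcount(a)
print(count_with_b)               # 3: a, b, and getrefcount's own argument
#> 3

del b                             # drop one reference
count_after_del = sys.getrefcount(a)
print(count_after_del)            # 2: a and the argument
#> 2
```

When the last reference is gone, the count reaches 0 and the object is freed.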

What will happen to the reference counter in case of MultiThreading ?

In the case of multithreading, there is a possibility that two threads increase or decrease the counter's value at the same time. Because of this, an object might be incorrectly released from memory while a reference to it still exists.

This can cause leaked memory, numerous bugs, or even a system crash. Hence, the GIL protects the reference counter by preventing threads from executing Python bytecode in parallel.
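Conceptually, the protection the GIL gives the reference counter is like guarding a shared counter with a lock, so that an increment can never be half-done when another thread reads it. This is a hypothetical illustration, not how the GIL is implemented:

```python
# Guarding a shared counter with an explicit lock, so no update is lost.
from threading import Thread, Lock

counter = 0
lock = Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread may update the counter at a time
            counter += 1

threads = [Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 200000 -- no updates were lost
#> 200000
```

The GIL plays a similar role for every object's reference count, but with a single interpreter-wide lock instead of one lock per counter.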

Why was GIL chosen as the solution?

The previous section explained why multi-threading has to be restricted. But it didn't explain why the GIL was chosen as the solution.

Let's look into that here. Some of the reasons were:

  1. Python is used extensively because of the variety of packages it provides. Many of these packages are written in C or C++, and these C extensions were prone to race conditions and inconsistent state. The GIL provided the thread-safe memory management that was much needed.

  2. It’s a simple design as only one lock has to be managed.

  3. GIL also provides a performance boost to the single-threaded programs.

  4. It makes it easy to integrate many C libraries with Python, which is a main reason Python became popular.

You can see how many problems GIL solved for Python!

But then, every coin has two sides. In the next section, I shall demonstrate its negative impact too.

Impact of GIL on Multi-threaded problems

We already know that the GIL prevents threads from running in parallel and reduces efficiency. Let's look at this in more detail. First thing to know: there are two types of programs, CPU-bound and I/O-bound.

What are CPU-bound and I/O bound programs?

CPU-bound means that the majority of the time taken to complete the program (its bottleneck) depends on the CPU (central processing unit).

Tasks such as mathematical computations, matrix multiplications, searching, and image processing fall under CPU-bound.

Whereas I/O-bound means the program is bottlenecked by input/output (I/O). This includes tasks such as reading from or writing to disk, processing user input, and waiting on the network. I/O-bound programs depend on external sources and the user. Python's GIL mainly impacts CPU-bound programs.
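Threads do help I/O-bound work, because a thread releases the GIL while it waits on I/O. The sketch below uses time.sleep() as a stand-in for an I/O wait: two 0.5-second waits run concurrently and finish in roughly 0.5 seconds, not 1 second.

```python
# Threads overlap their waits because the GIL is released during I/O.
import time
from threading import Thread

def fake_io():
    time.sleep(0.5)   # stand-in for a network or disk wait

start = time.time()
threads = [Thread(target=fake_io) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

print('Elapsed: %.2fs' % elapsed)   # close to 0.5s, not 1.0s
```

This is why multithreading remains a perfectly good tool in Python for I/O-bound workloads, despite the GIL.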

For CPU-bound programs, multi-threading could save a lot of time and resources: if you have multiple CPU cores, each thread could run on a separate core. But the GIL stops all this. Python threads cannot run in parallel on multiple CPU cores because of the global interpreter lock (GIL).

Let’s see an example that demonstrates it.

Consider the below code, which is a single-threaded CPU-bound program. Its main bottleneck is the upgrade() function, which depends on CPU power.

What upgrade() does is simply increment a number in a while loop until it reaches 400 million.

Let’s record the time taken for this execution.

# A single-threaded CPU-bound program
import time

COUNT = 400000000

# The bottleneck of the code, which is CPU-bound
def upgrade(n):
    i = 0
    while i < n:
        i = i + 1

# Recording the time taken to execute
start = time.time()
upgrade(COUNT)
end = time.time()

print('Time taken in seconds -', end - start)


  #>  Time taken in seconds - 2.6532039642333984

You can see the time taken here.

Now, let's see the multi-threaded architecture for the same program. The code above is modified to split the same task across two threads running concurrently. I am recording the execution time here too, for comparison.

# A multi-threaded version of the same program
import time
from threading import Thread

COUNT = 400000000

# The bottleneck of the code, which is CPU-bound
def upgrade(n):
    i = 0
    while i < n:
        i = i + 1

# Creation of two threads, each doing half the work
t1 = Thread(target=upgrade, args=(COUNT // 2,))
t2 = Thread(target=upgrade, args=(COUNT // 2,))

# Multithreaded architecture, recording time
start = time.time()
t1.start()
t2.start()
t1.join()
t2.join()
end = time.time()

print('Time taken in seconds -', end - start)

The time taken is about the same as before! This shows that the GIL prevented the threads from running in parallel. Without the GIL, you would expect a large reduction in time here. You can try variations with more threads or CPU cores to confirm.

How to deal with GIL?

The last sections showed the problems the GIL creates, especially for CPU-bound programs. There have been attempts to remove the GIL from Python, but they broke some C extensions, which caused more problems, and other solutions reduced the performance of single-threaded programs. Hence, the GIL has not been removed. So let's discuss some ways you can deal with it.

The most common way is to use multiprocessing instead of multithreading: we use multiple processes instead of multiple threads. In this case, each process runs its own Python interpreter. In short, there are multiple processes, but each process has a single thread.

Each process gets its own Python interpreter and memory space which means GIL won’t stop it.

The below code is a demonstration of how multi-processing works.

from multiprocessing import Pool
import time

COUNT = 400000000

# The bottleneck of the code, which is CPU-bound
def upgrade(n):
    i = 0
    while i < n:
        i = i + 1

if __name__ == '__main__':
    pool = Pool(processes=2)
    start = time.time()
    r1 = pool.apply_async(upgrade, [COUNT // 2])
    r2 = pool.apply_async(upgrade, [COUNT // 2])
    pool.close()
    pool.join()
    end = time.time()
    print('Time taken in seconds -', end - start)


   #> Time taken in seconds - (roughly half of the single-threaded time)

It’s definitely an improvement!

I hope you found this article useful. You might also be interested in our article on parallel processing in python.

Stay tuned to ML+ for more updates!
