
Make Python faster using Numba


As you may know, Python is an interpreted language. This means that Python code is not compiled directly to machine code, but is interpreted at run time by another program called an interpreter (CPython in most cases).

This is one of the reasons why Python offers so much flexibility (dynamic typing, runs everywhere, …) compared to compiled languages. However, it is also why Python is terribly slow.

Solutions to slow Python

There are actually multiple solutions to Python's slowness:

  • use Cython: a programming language that is a superset of Python
  • use C/C++ in combination with ctypes, pybind11, or CFFI to write Python bindings
  • extend Python with C/C++
  • use another compiled language like Rust

As you can see, all these methods require using a language other than Python and compiling that code so it can work with Python. Though these are valid options, they are not the most beginner-friendly ways to make Python faster, and they are not necessarily easy to set up.
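To get a feel for what the binding route involves, here is a minimal ctypes sketch that calls sqrt from the C math library. It assumes a Unix-like system where ctypes.util.find_library can locate libm; the library name is platform-dependent:

import ctypes
import ctypes.util

# locate and load the C math library (libm)
libm = ctypes.CDLL(ctypes.util.find_library('m'))

# declare the C signature of sqrt: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951

Even this tiny example requires knowing the C signature and handling type conversions by hand; numba removes all of that ceremony.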

Python decorators 

With numba, the compilation of a Python function is triggered by a decorator. If you already know what a decorator is, you can skip to the next section. Otherwise, please read on.

A Python decorator is a function that takes another function as input, modifies it, and returns the modified function to the user. That sentence may sound tricky, but it's not. We'll go over a few examples of decorators and everything will become clear! Before we get started, it's important to realize that in Python, everything is an object: functions are objects, and classes are objects too. For instance, take this simple function:

def hello():
  print('Hello world')

hello is a function object, so we can pass it to another function like this one:

def make_sure(func):
  def wrapper():
    while True:
      res = input('are you sure you want to greet the world? [y/n]')
      if res == 'n':
        return
      elif res == 'y':
        func()
        return
  return wrapper


This is a decorator! make_sure takes an input function and returns a new function that wraps the input function.

Below, we decorate the function hello, and whello is the decorated function:

whello = make_sure(hello)
whello()

# are you sure you want to greet the world? [y/n]y
# Hello world

Of course, we can use the make_sure decorator on any function with the same signature: one that can be called without arguments and whose return value is not needed.
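If you want a wrapper that works for any signature, the standard trick is to forward *args and **kwargs. A minimal sketch (functools.wraps just preserves the wrapped function's name and docstring):

import functools

def make_sure_any(func):
  @functools.wraps(func)
  def wrapper(*args, **kwargs):
    res = input('are you sure? [y/n]')
    if res == 'y':
      # forward the arguments and the return value
      return func(*args, **kwargs)
  return wrapper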

We now know enough about decorators to use numba. Still, one last word about the syntax: we can also decorate a function in this way:

@make_sure
def hello():
  print('Hello world')
  
hello()

# are you sure you want to greet the world? [y/n]y
# Hello world

There is really nothing mysterious about this; it's just a nice, easy syntax for decorating a function as soon as you write it.

Just-In-Time (JIT) compilation with Numba

Meet numba, a Python package that will make your code much faster without having to renounce the convenience of Python:

Numba is an open source JIT compiler that translates a subset of Python and NumPy code into fast machine code.

Numba is able to compile Python code into machine code optimized for your machine, with the help of LLVM. You don't really need to know what LLVM is to follow this tutorial: in short, it is a compiler toolkit that takes an intermediate representation of the code and turns it into native machine code.

Here is how the code is compiled: numba reads the Python bytecode of the decorated function, infers the types of its arguments, translates the function to LLVM intermediate representation, and LLVM then generates machine code tailored to your CPU.

numba uses just-in-time (JIT) compilation, meaning the function is compiled at run time, during the execution of the Python program, not before. And before you ask: no, you don't even need a C/C++ compiler installed. All you need to do is install numba with pip or conda:

pip install numba

Here is a function that can take a bit of time: it takes a list of numbers and returns the standard deviation of these numbers.

import math

def std(xs):
  # compute the mean
  mean = 0
  for x in xs: 
    mean += x
  mean /= len(xs)
  # compute the variance
  ms = 0
  for x in xs:
    ms += (x-mean)**2
  variance = ms / len(xs)
  std = math.sqrt(variance)
  return std

As we can see in the code, we need to loop twice on the sample of numbers: first to compute the mean, and then to compute the variance, which is the square of the standard deviation.

Obviously, the more numbers in the sample, the more time the function will take to complete. Let’s start with 10 million numbers, drawn from a Gaussian distribution of unit standard deviation:

import numpy as np
a = np.random.normal(0, 1, 10000000)

std(a)
# 0.9998272967173739

The function takes a couple of seconds to compute the standard deviation of the sample.

Now, let’s import the njit decorator from numba, and decorate our std function to create a new function:

from numba import njit
c_std = njit(std)

c_std(a)

# 0.9998272967173739

The performance improvement might not seem striking from this single call. Keep in mind that the first time the function is called, numba needs to compile it for the given argument types, which takes a moment; subsequent calls run the generated machine code directly.
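One way to see this compilation overhead is to time the first and second calls of a freshly decorated copy of the function. A quick sketch using the standard library timer (the exact numbers will vary):

import time

c_std2 = njit(std)   # a fresh, not-yet-compiled dispatcher

t0 = time.perf_counter()
c_std2(a)            # first call: compilation + execution
t1 = time.perf_counter()
c_std2(a)            # second call: execution only
t2 = time.perf_counter()

print('first call :', t1 - t0)
print('second call:', t2 - t1)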

But we can quantify the improvement using the timeit magic function, first for the interpreted version of the std function, and then for the compiled version:

%timeit std(a)
# 1 loop, best of 3: 4.62 s per loop

%timeit c_std(a)
# 10 loops, best of 3: 31.6 ms per loop

The compiled function is about 150 times faster!
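Under the hood, numba compiles one specialization of the function per argument-type signature, lazily at the first call with those types. If you're curious, the dispatcher object exposes them (output shown is indicative; this assumes a reasonably recent numba version):

print(c_std.signatures)
# e.g. [(array(float64, 1d, C),)]

print(c_std.py_func is std)   # the original interpreted function is kept around
# True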

But obviously, we did not have to go to such trouble to compute the standard deviation of our array. For that, we can simply use numpy:

a.std()
# 0.9998272967174134

%timeit a.std()
# 10 loops, best of 3: 49.5 ms per loop

We see that numba is even faster than numpy in this particular case, and we will see below that it is much more flexible.

Calculation of pi 

The number pi can be estimated with a very elegant Monte Carlo method.

Just consider a square of side L=2, centered on (0,0), with an inscribed circle of radius R=1.

The ratio of the circle area to the square area is

r = \frac{A_c}{A_s} = \frac{\pi R^2}{L^2} = \frac{\pi}{4}

so

\pi = 4r

So if we can estimate this ratio, we can estimate pi!

And to estimate this ratio, we will simply shoot a large number of points in the square, following a uniform probability distribution. The fraction of the points falling in the circle is an estimator of r.

Obviously, the more points, the more precise this estimator will be, and the more decimals of pi can be computed.

Let’s implement this method, and use it with an increasing number of points to see how the precision improves.

import random 

def pi(npoints):
  n_in_circle = 0
  for i in range(npoints):
    # x and y are uniform in [0, 1): we sample only the upper-right
    # quarter of the square, which contains a quarter of the circle,
    # so the ratio r is unchanged
    x = random.random()
    y = random.random()
    if x**2 + y**2 < 1:
      n_in_circle += 1
  return 4 * n_in_circle / npoints

npoints = [10, 100, 10000, int(10e6)]
for number in npoints:
  print(pi(number))

# 3.2
# 3.04
# 3.1472
# 3.1414724

As you can see, even with N = 10 million points, the precision is not great. More specifically, the relative uncertainty on pi scales as \delta = 1/\sqrt{N}, so gaining one decimal of precision requires 100 times more points.

Here is how the uncertainty evolves with the number of points:

import math
# defining the uncertainty function 
# with a lambda construct
uncertainty = lambda x: 1/math.sqrt(x)
for number in npoints:
  print('npoints', number, 'delta:', uncertainty(number))

# npoints 10 delta: 0.31622776601683794
# npoints 100 delta: 0.1
# npoints 10000 delta: 0.01
# npoints 10000000 delta: 0.00031622776601683794

Clearly, we’ll need a lot of points. How fast is our code?

%timeit pi(10000000)
# 1 loop, best of 3: 3.49 s per loop

A few seconds for 10 million points. This algorithm is O(N), so if we want to use 1 billion points, it will take us between 5 and 10 minutes. We don’t have that much time, so let’s use numba!

@njit
def fast_pi(npoints): 
  n_in_circle = 0 
  for i in range(npoints):
    x = random.random()
    y = random.random()
    if (x**2+y**2 < 1):
      n_in_circle += 1
  return 4*n_in_circle / npoints

fast_pi( int(1e9) )
# 3.141527256

This took about 10 s, instead of 7 minutes!
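numba can go further and spread such a loop over all CPU cores with parallel=True and numba.prange. Here is a sketch of a parallel variant; to keep the random number generation simple, it draws the points up front with numpy, trading memory for simplicity:

from numba import njit, prange
import numpy as np

@njit(parallel=True)
def fast_pi_parallel(xs, ys):
  n_in_circle = 0
  # prange splits the iterations across CPU threads;
  # numba recognizes n_in_circle as a reduction variable
  for i in prange(len(xs)):
    if xs[i]**2 + ys[i]**2 < 1.0:
      n_in_circle += 1
  return 4.0 * n_in_circle / len(xs)

xs = np.random.random(10_000_000)
ys = np.random.random(10_000_000)
fast_pi_parallel(xs, ys)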

A more involved example: Finding the closest two points 

Numpy features an efficient implementation for most array operations. Indeed, you can use numpy to map any function to all elements of an array or to perform element-wise operations on several arrays.

I would say: If numpy can do it, just go for it.

But sometimes, you’ll come up with an expensive algorithm that cannot easily be implemented with numpy. For instance, let’s consider the following function, which takes an array of 2D points, and looks for the closest two points.

import math
def closest(points):
  '''Find the two closest points in an array of points in 2D. 
  Returns the two points, and the distance between them'''
  
  # we will search for the two points with a minimal
  # square distance. 
  # we use the square distance instead of the distance
  # to avoid a square root calculation for each pair of 
  # points
  
  mindist2 = 999999.
  mdp1, mdp2 = None, None
  for i in range(len(points)):
    p1 = points[i]
    x1, y1 = p1
    for j in range(i+1, len(points)):
      p2 = points[j]
      x2, y2 = p2
      dist2 = (x1-x2)**2 + (y1-y2)**2
      if dist2 < mindist2:
        # squared distance is improved, 
        # keep it, as well as the two 
        # corresponding points
        mindist2 = dist2
        mdp1,mdp2 = p1,p2
  return mdp1, mdp2, math.sqrt(mindist2)

You might be thinking that this algorithm is quite naive, and it’s true! I wrote it like this on purpose.

You can see that there is a double loop in this algorithm, so for N points we have to test on the order of N×N pairs. That's why we say that this algorithm has a complexity of order N², denoted O(N²).

To improve the situation a bit, note that the distance between point i and point j is the same as the distance between point j and point i, so there is no need to check that combination twice. Also, the distance between point i and itself is zero and should not be tested. That's why I started the inner loop at i+1. So the combinations that are tested are:

  • (0,1), (0,2), … (0, N−1)
  • (1,2), (1,3), … (1, N−1)
  • … down to (N−2, N−1)

Another thing to note is that I’m doing all I can to limit the amount of computing power needed for each pair. That’s why I decided to minimize the square distance instead of the distance itself, which saves us a call to math.sqrt for every pair of points.

Still, the algorithm remains O(N²).

Let’s first run this algorithm on a small sample of 10 points, just to check that it works correctly.

points = np.random.uniform((-1,-1), (1,1), (10,2))
print(points)
closest(points)

[[-0.21489559  0.2758845 ]
 [ 0.46014884  0.22870144]
 [-0.0188096   0.77782864]
 [-0.69480477  0.0058198 ]
 [-0.41380634 -0.45217708]
 [-0.66116743 -0.37505363]
 [-0.60690115  0.97901302]
 [-0.22480242 -0.64687904]
 [-0.80867721 -0.04126105]
 [-0.03447232 -0.51784939]]

(array([-0.69480477,  0.0058198 ]),
 array([-0.80867721, -0.04126105]),
 0.12322150643489199)

Ok, this looks right: the two points indeed appear to be quite close. Let's see how fast the calculation is:

%timeit closest(points)

# 10000 loops, best of 3: 76.8 µs per loop

Now, let's increase the number of points in the sample a bit. You will see that the calculation becomes much slower.

points = np.random.uniform((-1,-1), (1,1), (2000,2))
closest(points)

(array([-0.9197511 ,  0.43622966]),
 array([-0.91919405,  0.43550372]),
 0.0009150308173721398)

%timeit closest(points)
# 1 loop, best of 3: 2.86 s per loop


Since our algorithm is O(N²), going from 10 to 2,000 points makes it 200×200 = 40,000 times slower.

And this is roughly what we measured: 2.86 / 76.8e-6 ≈ 37,000 (the exact numbers may vary).

Now let’s try and speed this up with numba’s just-in-time compilation:

c_closest = njit(closest)
c_closest(points)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
TypeError: cannot convert native ?array(float64, 1d, C) to Python object

The above exception was the direct cause of the following exception:

SystemError                               Traceback (most recent call last)
<ipython-input-29-a3b4fe54a342> in <module>()
      1 c_closest = njit(closest)
----> 2 c_closest(points)

SystemError: CPUDispatcher(<function closest at 0x7f2389a4e400>) returned a result with an error set

It does not work! But there is no reason to panic. The important message is:

TypeError: cannot convert native ?array(float64, 1d, C) to Python object

That's a bit cryptic, but one can always google error messages. What this means is that numba has trouble converting types between the compiled function and Python.

To solve this issue, we will give numba an explicit signature specifying the input and output types. In the example below, we declare that the input is a 2D array of float64 numbers, and that the output is a tuple with two 1D float64 arrays (the two closest points) and one float64 (the distance between them).

The nopython argument instructs numba to compile the whole function without falling back to the Python interpreter. This is the recommended practice for maximum performance, and the njit decorator we have used so far is simply a shorthand for @jit(nopython=True).

from numba import jit

@jit('Tuple((float64[:], float64[:], float64))(float64[:,:])',
     nopython=True)
def c_closest(points):
  mindist2 = 999999.
  mdp1, mdp2 = None, None
  for i in range(len(points)):
    p1 = points[i]
    x1, y1 = p1
    for j in range(i+1, len(points)): 
      p2 = points[j]
      x2, y2 = p2
      dist2 = (x1-x2)**2 + (y1-y2)**2
      if dist2 < mindist2: 
        mindist2 = dist2
        mdp1 = p1
        mdp2 = p2
  return mdp1, mdp2, math.sqrt(mindist2)

c_closest(points)

(array([-0.9197511 ,  0.43622966]),
 array([-0.91919405,  0.43550372]),
 0.0009150308173721398)

%timeit closest(points)
# 1 loop, best of 3: 2.92 s per loop

%timeit c_closest(points)
# 10 loops, best of 3: 37.1 ms per loop

Again, the compiled code is nearly 80 times faster!

To close this section, here is the Monte Carlo estimation of pi once more, this time adapted from numba's documentation, with the plain and compiled versions side by side so that we can time them directly:

import random

# import njit from numba
from numba import njit


def monte_carlo_pi_without_numba(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

# Add numba's decorator to make the function faster
@njit
def monte_carlo_pi_with_numba(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples

Notice that using numba just requires importing a decorator (njit) and applying it; numba does all the rest.

Running this code to time the two versions shows that numba is about 30 times faster than regular Python:

%timeit monte_carlo_pi_with_numba(100_000)
# 1.24 ms ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit monte_carlo_pi_without_numba(100_000)
# 40.6 ms ± 814 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Some caveats

Presented like this, numba almost sounds too good to be true. But it does have its drawbacks:

  • There is an overhead the first time a numba-decorated function is run: numba has to figure out the argument types and compile the function on its first execution, so that first call is noticeably slower.
  • Not all Python code can be compiled with numba. For example, if you use mixed types for the same variable or for list elements, you will get an error, as in the sketch below.
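A minimal example of code that nopython mode refuses to compile (the function is hypothetical, written only to trigger the error):

from numba import njit

@njit
def mixed(flag):
  x = 1          # x starts as an integer...
  if flag:
    x = 'one'    # ...and becomes a string: incompatible types
  return x

mixed(True)
# raises numba.core.errors.TypingError (numba cannot unify int and str)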

Pandas on steroids

numba is made specifically with numpy in mind and is very friendly to numpy arrays. You know what else is built on numpy? You guessed it: pandas. This makes for impressive optimizations when using user-defined functions, or even when performing basic DataFrame operations.

Let’s see some examples, starting from this DataFrame:

import numpy as np
import pandas as pd

n = 1_000_000

df = pd.DataFrame({
    'height': 1 + 1.3 * np.random.random(n),
    'weight': 40 + 260 * np.random.random(n),
    'hip_circumference': 94 + 14 * np.random.random(n)
})

User-defined functions

Another useful numba decorator is vectorize, which makes creating numpy universal functions (ufuncs) a breeze.

A simple example is computing the square of the height column in our dataset:

from numba import vectorize

def get_squared_height_without_numba(height):
  return height ** 2

@vectorize
def get_squared_height_with_numba(height):
  return height ** 2


%timeit df['height'].apply(get_squared_height_without_numba)
# 279 ms ± 7.31 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


%timeit df['height'] ** 2
# 2.04 ms ± 229 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

# We convert the column to a numpy array first,
# since numba works with numpy arrays (not pandas objects)
%timeit get_squared_height_with_numba(df['height'].to_numpy())
# 1.6 ms ± 51.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
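vectorize also accepts explicit signatures and a target. Here is a sketch using eager compilation and the parallel target; whether it pays off depends on the array size and on your machine:

from numba import vectorize

# eager compilation: this signature is compiled at decoration time;
# target='parallel' spreads the work over CPU threads
@vectorize(['float64(float64)'], target='parallel')
def squared(x):
  return x ** 2

squared(df['height'].to_numpy())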

Basic operations

Another example (using njit ), is to compute the BMI (Body Mass Index) using the following code:

from numba import njit

@njit
def get_bmi(weight_col, height_col):
  n = len(weight_col)
  result = np.empty(n, dtype="float64")

  # numba's loops are very fast compared to python loops
  for i, (weight, height) in enumerate(zip(weight_col, height_col)):
    result[i] = weight / (height ** 2)

  return result


# don't forget to convert columns to numpy 
%timeit df['bmi'] = get_bmi(df['weight'].to_numpy(), df['height'].to_numpy())
# 6.77 ms ± 230 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit df['bmi'] = df['weight']  / (df['height'] ** 2)
# 8.63 ms ± 316 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

You can see that even for basic operations, numba still takes less time than raw pandas (6.77 ms vs 8.63 ms).

Final thoughts

Using numba is a very straightforward way to make your code much faster without much effort. Sometimes it may take a few tries before your code compiles successfully, but generally it works out of the box. As we have seen, compiling for the CPU can already boost performance by a factor of 100.
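One last practical tip: the result of the compilation can be cached on disk with the cache=True option of njit, so that later Python sessions skip recompiling the function. A minimal sketch (the function itself is just an example):

from numba import njit

# cache=True stores the compiled machine code on disk, so the next
# interpreter session reuses it instead of recompiling
@njit(cache=True)
def total(xs):
  s = 0.0
  for x in xs:
    s += x
  return s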

References:

https://towardsdatascience.com/this-decorator-will-make-python-30-times-faster-715ca5a66d5f

https://towardsdatascience.com/speed-up-your-algorithms-part-2-numba-293e554c5cc1

https://thedatafrog.com/en/articles/make-python-fast-numba/

Amir Masoud Sefidian
Machine Learning Engineer