Python Internals Series: Int (Long) Object

In this article, we will read and discuss the implementation details of ints in CPython.

Although using integers in Python is fairly easy:

a = 1

b = a + 5

The implementation file contains more than 5,000 lines of code.

Since the file contains roughly 116 functions/macros, I’ll probably skip some (most?) of them.

Without further ado, let’s dive into the code !


First we need to find a starting point. We know that all Python objects are stored in cpython/Objects.

Unfortunately, we don’t seem to have an intobject.c file. That’s because the underlying C object for Python integers is PyLongObject.

The file we’re interested in is longobject.c.

From now on I will refer to Python’s integer implementation as either integers or longs, which is not technically accurate since in C those are different types.

For the rest of this article, we’ll revisit some of the most used operations/functions related to integers.


/* convert a PyLong of size 1, 0 or -1 to an sdigit */
#define MEDIUM_VALUE(x) (assert(-1 <= Py_SIZE(x) && Py_SIZE(x) <= 1),   \
         Py_SIZE(x) < 0 ? -(sdigit)(x)->ob_digit[0] :   \
             (Py_SIZE(x) == 0 ? (sdigit)0 :                             \
              (sdigit)(x)->ob_digit[0]))

The comment is pretty self-explanatory: the macro converts a given long to an sdigit. But what’s an sdigit? Well, it depends:

#if PYLONG_BITS_IN_DIGIT == 30
typedef int32_t sdigit; /* signed variant of digit */
#elif PYLONG_BITS_IN_DIGIT == 15
typedef short sdigit; /* signed variant of digit */
#else
#error "PYLONG_BITS_IN_DIGIT should be 15 or 30"
#endif

The PYLONG_BITS_IN_DIGIT macro is defined either at configure time or in pyport.h.

The assert protects us from accidentally casting a big integer (one that doesn’t fit in an sdigit) to an sdigit, which would result in a loss of information and therefore potentially bugs that are hard to detect.

Curiously, the assert checks that the size of x is at least -1. How can a size be negative? The answer is that ob_size is signed and encodes both the number of digits and the sign of the integer: a negative integer that needs two digits has Py_SIZE equal to -2. This is confirmed by the branch Py_SIZE(x) < 0 ? -(sdigit)(x)->ob_digit[0].

ob_digit looks like an array containing our integer’s digits, which can be confirmed in the file longintrepr.h:

struct _longobject {
    PyObject_VAR_HEAD
    digit ob_digit[1];
};

One curious fact is the declared size of the array (one element?). This is the classic C “struct hack” for variable-length data: the array is declared with a single element, but _PyLong_New allocates extra memory right after the struct so that ob_digit can hold as many digits as needed (we’ll see this allocation below).
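We can actually observe this variable-length layout from Python: since the digits live inline in the object, bigger integers occupy more memory. A quick, CPython-specific check using sys.getsizeof (exact byte counts depend on the build, so treat the numbers as illustrative):

```python
import sys

# Every extra 30-bit digit grows the object by sizeof(digit) bytes,
# so the reported size increases in steps as the digit count grows.
for n in (0, 1, 2**30, 2**60, 2**90):
    print(n.bit_length(), "bits ->", sys.getsizeof(n), "bytes")
```

On a typical 64-bit build with 30-bit digits, each step adds 4 bytes (one uint32_t digit).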


#define IS_SMALL_INT(ival) (-NSMALLNEGINTS <= (ival) && (ival) < NSMALLPOSINTS)

Pretty straightforward macro.

NSMALLPOSINTS and NSMALLNEGINTS are defined in the pycore_interp.h file:

/* interpreter state */

#define _PY_NSMALLPOSINTS           257
#define _PY_NSMALLNEGINTS           5

To understand these two magic numbers and where this macro is used, let’s dive into the next function.

static PyObject *
get_small_int(sdigit ival)
{
    assert(IS_SMALL_INT(ival));
    PyInterpreterState *interp = _PyInterpreterState_GET();
    PyObject *v = (PyObject*)interp->small_ints[ival + NSMALLNEGINTS];
    Py_INCREF(v);
    return v;
}

This function implements what is commonly known as the Flyweight pattern: at interpreter initialisation, CPython creates all the integers in the range [-5, 256]. At runtime, whenever you ask for a number in this range, you always get the same object back.

Which can be easily checked :

20 + 30 is 50 # True
200 + 300 is 500 # False

Those integer objects are stored in the interpreter state.

An interpreter state is described as follows:

An interpreter state is a group of threads along with the data specific to this group. Threads share such things as loaded modules (sys.modules), builtins (builtins.dict) and the import system (importlib).

This is where the flyweight integers live.
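We can also probe the exact bounds of the cache. Calling int() on a string forces the value to be created at runtime (so the compiler can’t merge two literals into one constant), which makes the test reliable:

```python
# int('...') creates the value at runtime; small values hit the cache
print(int('256') is int('256'))   # inside the cached range
print(int('257') is int('257'))   # just outside it
print(int('-5') is int('-5'))     # lower bound
print(int('-6') is int('-6'))     # below the lower bound
```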

We also use Py_INCREF to increment the reference count of the returned integer. Recall that reference counting is what the CPython garbage collector uses to decide which objects to free (well, not just that, since reference counting alone doesn’t resolve circular references, but we’ll discuss the GC in more detail in future articles).


Straightforward function: we check whether a given integer is small enough to fit into an sdigit; if so, we downcast it using MEDIUM_VALUE. After downcasting, we check if the integer is in the flyweight range [-5, 256]; if it is, we decrement the reference count of the original object (since we don’t need two PyLong objects) and return a reference to the already allocated number.
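The effect of this function is visible from Python: when an arithmetic result lands in the cached range, the object you get back is the pre-allocated one (a CPython-specific behaviour, not a language guarantee):

```python
x = 10**10 // (2 * 10**9)   # computes the value 5
y = 5
print(x is y)               # the computed result is the cached small int
```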

How are longs created ?

Longs are created using the function PyLongObject * _PyLong_New(Py_ssize_t size); size here refers to the number of digits of the target long.

if (size > (Py_ssize_t)MAX_LONG_DIGITS) {
	PyErr_SetString(PyExc_OverflowError,
					"too many digits in integer");
	return NULL;
}
Well, looks like we can’t have an indefinitely big integer, but how big can our integers be?

#define MAX_LONG_DIGITS \
    ((PY_SSIZE_T_MAX - offsetof(PyLongObject, ob_digit))/sizeof(digit))

If you don’t already know it, offsetof is a C macro which returns the offset in bytes of a member (ob_digit) from the start of its structure (PyLongObject). Since our structure only contains PyObject_VAR_HEAD and digit ob_digit, the offset is the memory taken by the VAR_HEAD.

So an integer has a maximal size of roughly PY_SSIZE_T_MAX digits.

According to this answer, PY_SSIZE_T_MAX = sys.maxsize, which according to the official documentation is equal to 2**31 - 1 on a 32-bit machine and 2**63 - 1 on a 64-bit machine.

2**63 - 1 is a huge number of digits.

You can check this limit yourself:

import sys
print(sys.maxsize)

If the size is fine, we allocate enough memory to store our integer :

result = PyObject_MALLOC(offsetof(PyLongObject, ob_digit) +
                         size*sizeof(digit));
if (!result) {
	PyErr_NoMemory();
	return NULL;
}

As a good practice, you should ALWAYS check that an allocation succeeded, which may not be the case if you don’t have enough memory, for example. Your future self will thank you.

Adding two integers

static PyObject *
long_add(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        return PyLong_FromLong(MEDIUM_VALUE(a) + MEDIUM_VALUE(b));
    }
    if (Py_SIZE(a) < 0) {
        if (Py_SIZE(b) < 0) {
            z = x_add(a, b);
            if (z != NULL) {
                /* x_add received at least one multiple-digit int,
                   and thus z must be a multiple-digit int.
                   That also means z is not an element of
                   small_ints, so negating it in-place is safe. */
                assert(Py_REFCNT(z) == 1);
                Py_SET_SIZE(z, -(Py_SIZE(z)));
            }
        }
        else
            z = x_sub(b, a);
    }
    else {
        if (Py_SIZE(b) < 0)
            z = x_sub(a, b);
        else
            z = x_add(a, b);
    }
    return (PyObject *)z;
}

First we check that both operands are indeed integers.

#define CHECK_BINOP(v,w)                                \
    do {                                                \
        if (!PyLong_Check(v) || !PyLong_Check(w))       \
            Py_RETURN_NOTIMPLEMENTED;                   \
    } while(0)
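Py_RETURN_NOTIMPLEMENTED is observable from Python: calling int.__add__ directly with a non-int operand returns the NotImplemented singleton (the interpreter then tries the reflected operation before raising TypeError):

```python
# int.__add__ returns NotImplemented when the other operand isn't an int,
# mirroring CHECK_BINOP's Py_RETURN_NOTIMPLEMENTED
print((1).__add__('a'))
print((1).__add__(2))
```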

One curious thing you may have noticed is the:

do {
} while(0)

Which seems to do exactly the same thing as writing the statements directly, or wrapping them in plain braces { ... }.
The purpose of doing this is macros in C are not really smart, the preprocessor will simply do a find/replace and hope for the best. The problem with this can be illustrated with the following macro :

#define fct() \
	INSTR1; \
	INSTR2;

But what happens if you do:

if (cond)
    fct();

the preprocessor will replace this with:

if (cond)
    INSTR1;
    INSTR2;;

which is not what you expect: in C, if you omit the braces, the if body only consists of the first statement, so INSTR2 will run unconditionally.

The do while solves this by enclosing all our instructions in one scope.

But why not just use plain braces?

#define fct() { INSTR1; INSTR2; }

This almost works, but breaks with if/else: if (cond) fct(); else g(); expands to if (cond) { INSTR1; INSTR2; }; else g(); and the extra semicolon after the closing brace terminates the if statement, making the else a syntax error. The do { ... } while(0) form consumes exactly one trailing semicolon, so MACRO(); behaves like an ordinary statement everywhere.

The rest is :

  • if both a and b are negative we return -(|a| + |b|)
  • if a is negative and b is positive we compute |b| - |a|
  • if b is negative and a is positive we compute |a| - |b|
  • if both are positive we compute |a| + |b|
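The case analysis can be mirrored in Python to see why it’s correct (a toy sketch; signed_add is a name I made up, and the real code works on digit arrays, not Python ints):

```python
def signed_add(a, b):
    # Reduce signed addition to operations on magnitudes only,
    # mirroring long_add's four cases.
    if a < 0:
        if b < 0:
            return -(abs(a) + abs(b))   # both negative: -(|a| + |b|)
        return abs(b) - abs(a)          # a < 0 <= b: |b| - |a|
    if b < 0:
        return abs(a) - abs(b)          # b < 0 <= a: |a| - |b|
    return abs(a) + abs(b)              # both positive: |a| + |b|

print(signed_add(-3, -4), signed_add(-3, 4), signed_add(3, -4), signed_add(3, 4))
```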

This looks fairly complicated for a simple a + b, so why all the burden ?

Well, recall that Python integers can be very large (up to roughly 2**63 digits on 64-bit machines). This is achieved by storing integers as arrays, each element representing a digit in base 2**30 (or 2**15, depending on PYLONG_BITS_IN_DIGIT).

Diving into the x_add code, we see:

    /* here a is the operand with the most digits (the full function
       swaps a and b first to guarantee size_a >= size_b) */
    for (i = 0; i < size_b; ++i) {
        carry += a->ob_digit[i] + b->ob_digit[i];
        z->ob_digit[i] = carry & PyLong_MASK;
        carry >>= PyLong_SHIFT;
    }
    for (; i < size_a; ++i) {
        carry += a->ob_digit[i];
        z->ob_digit[i] = carry & PyLong_MASK;
        carry >>= PyLong_SHIFT;
    }
    z->ob_digit[i] = carry;
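The whole digit-array scheme is easy to model in pure Python (a toy sketch with helper names I made up; the real code works on C arrays and trims leading zero digits in long_normalize):

```python
PyLong_SHIFT = 30
PyLong_MASK = (1 << PyLong_SHIFT) - 1

def to_digits(n):
    """Split a non-negative int into 30-bit digits, least significant first."""
    digits = []
    while True:
        digits.append(n & PyLong_MASK)
        n >>= PyLong_SHIFT
        if n == 0:
            return digits

def from_digits(digits):
    """Inverse of to_digits."""
    n = 0
    for d in reversed(digits):
        n = (n << PyLong_SHIFT) | d
    return n

def x_add(a, b):
    """Schoolbook addition of two digit lists, mirroring CPython's x_add."""
    if len(a) < len(b):
        a, b = b, a                   # make a the longer operand
    z = []
    carry = 0
    for i in range(len(b)):           # add the overlapping digits
        carry += a[i] + b[i]
        z.append(carry & PyLong_MASK)
        carry >>= PyLong_SHIFT
    for i in range(len(b), len(a)):   # propagate the carry through the rest of a
        carry += a[i]
        z.append(carry & PyLong_MASK)
        carry >>= PyLong_SHIFT
    z.append(carry)                   # room for a possible final carry digit
    while len(z) > 1 and z[-1] == 0:  # long_normalize: drop leading zero digits
        z.pop()
    return z

print(from_digits(x_add(to_digits(2**100), to_digits(2**99 + 1))) == 2**100 + 2**99 + 1)
```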


And that’s it! long_sub is fairly similar, and you can always (I encourage you to do so) check the multiplication and division code.

We rarely stop to think about basic operations like integer arithmetic; the longobject.c file shows us how complex and optimised the long/integer implementation in Python is.