Linked lists are often overused in introductory algorithmics courses, due to a heavy focus on theoretical complexities. Unfortunately, in practice, computers are complex beasts: they don’t execute instructions sequentially [1], and not every instruction has the same cost [2]. This means that a data structure with a better theoretical complexity does not necessarily translate into a more efficient data structure in practice.
In this article I’ll illustrate this by comparing the real-world performance of a linked list against a contiguous vector, and I’ll show that even in some use cases where the list seems favored, the vector is still the better choice. Of course, this doesn’t mean you should never use a linked list, but you should be aware of its practical limitations and better informed when deciding whether to use one.
Although linked lists are not specific to a particular language, this article will focus on the C++ STL’s list vs. the C++ STL’s vector.
All the examples and measurements were run on the following platform:
Ubuntu 20.04 (Linux 5.13.0-44-generic).
Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
C++14
Clang 12
We’ll also use Google’s benchmark library [3] to take the measurements.
Finding (the last) element
Let’s start with a simple scenario: finding an element in our data structure. We’ll focus on the worst case, where the element is at the end of the container.
The theoretical complexities are:
For list: O(n)
For vector: O(n)
You might therefore expect the measurements to be similar.
We’ll measure using the following code:
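A minimal sketch of what such a benchmark can look like with Google Benchmark (the function names are illustrative; the interleaved tmp_list is explained below):

```cpp
#include <benchmark/benchmark.h>

#include <algorithm>
#include <list>
#include <vector>

static void BM_FindList(benchmark::State& state) {
  std::list<int> list;
  std::list<int> tmp_list;  // interleaving allocations scatters list's nodes
  for (int i = 0; i < state.range(0); ++i) {
    list.push_back(i);
    tmp_list.push_back(i);
  }
  const int last = static_cast<int>(state.range(0)) - 1;
  for (auto _ : state)
    benchmark::DoNotOptimize(std::find(list.begin(), list.end(), last));
}
BENCHMARK(BM_FindList)->Range(8, 8 << 10);

static void BM_FindVector(benchmark::State& state) {
  std::vector<int> vec;
  for (int i = 0; i < state.range(0); ++i)
    vec.push_back(i);
  const int last = static_cast<int>(state.range(0)) - 1;
  for (auto _ : state)
    benchmark::DoNotOptimize(std::find(vec.begin(), vec.end(), last));
}
BENCHMARK(BM_FindVector)->Range(8, 8 << 10);

BENCHMARK_MAIN();
```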
This gives us the following results:
Surprisingly enough, the vector version is almost 90 times faster than the list version at 8K elements.
How can we explain this big difference?
Let me introduce spatial locality and cache.
Caches and locality
When you run a program (like our benchmarking example), you usually communicate (a lot) with the main memory. Although CPU speeds have improved a lot in recent years, memory latencies have not: a typical memory access has a latency of ~100 ns [4]. To improve overall performance, all modern CPUs have multiple levels of cache. A cache is a blazingly fast, but very small, memory. On Intel’s CPUs you usually have 3 levels of cache (L1 to L3), L1 being the fastest and smallest, and L3 (also commonly called the LLC, for last-level cache) being the slowest and biggest.
To show the cache’s size for your computer, you can use:
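For example, on Linux (an illustrative command, one of several ways to get this information):

```
$ lscpu | grep -i cache
```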
You can see that I have two types of L1 cache: L1d for data and L1i for instructions.
The CPU keeps data in its cache based on locality. There are two types of locality: spatial and temporal.
Spatial locality: when you access an address in memory, you’ll probably soon need the contents of the addresses close to it.
Temporal locality: when you access an address in memory, you’ll probably need the same value again in the near future.
But what does any of this have to do with our initial problem?
Well, the vector’s memory layout benefits heavily from spatial locality. This is what happens in our benchmarking example:
We retrieve the first element’s value from memory (cost ~100 ns).
The CPU expects us to use the contents of nearby addresses, so it retrieves a whole cache line from memory (64 bytes, or 16 ints, on my computer).
On the next iteration, we access the second element; since the values are contiguous in memory, the value is already in the cache (cost ~7 ns).
For the linked list, things are a little different:
We retrieve the first node’s contents from memory (cost ~100 ns).
The CPU expects us to use the contents of nearby addresses, so it retrieves a whole cache line from memory (again, 64 bytes).
We then follow the next pointer; the next node’s address is not necessarily adjacent to the current one (cost 7 ns to 100 ns, in practice probably ~100 ns).
This is why I added the tmp_list: inserting all the elements at once would just place the nodes next to each other in memory, which is not a realistic scenario.
When the CPU has to fetch from main memory instead of the cache, we say that the CPU had a cache miss.
Counting cache misses
We can count the number of cache misses using jevents from pmu_tools [5].
I’ll explain how the PMU/PMC works in a future article. For now, you just need to understand that the CPU has a bunch of “counters” that we can use to count specific events, in our case the cache-miss event.
We can use the following code:
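A sketch of the same measurement using the raw perf_event_open(2) interface instead of jevents (an assumed stand-in; jevents resolves event names onto the same PMU counters):

```cpp
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#include <algorithm>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <list>

int main() {
  std::list<int> list;
  std::list<int> tmp_list;  // scatter the nodes, as before
  for (int i = 0; i < 8 * 1024; ++i) {
    list.push_back(i);
    tmp_list.push_back(i);
  }

  // Program a hardware counter for cache misses.
  perf_event_attr attr;
  std::memset(&attr, 0, sizeof(attr));
  attr.type = PERF_TYPE_HARDWARE;
  attr.size = sizeof(attr);
  attr.config = PERF_COUNT_HW_CACHE_MISSES;
  attr.disabled = 1;
  attr.exclude_kernel = 1;
  int fd = static_cast<int>(syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0));
  if (fd < 0) return 1;

  ioctl(fd, PERF_EVENT_IOC_RESET, 0);
  ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
  auto it = std::find(list.begin(), list.end(), 8 * 1024 - 1);
  ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

  std::uint64_t misses = 0;
  read(fd, &misses, sizeof(misses));
  std::cout << "cache misses: " << misses
            << " (found: " << (it != list.end()) << ")\n";
  close(fd);
}
```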
You may find it odd that we get fewer than one cache miss for some rows; we should need to access DRAM at least once, right?
This is due to Google Benchmark running multiple iterations: the first iteration puts the data in cache (specifically in L2 and L3, which are big enough to keep it), and later iterations just hit those caches. We can remove this effect by increasing the number of elements inserted into tmp_list.
As you can see, at 8K elements we incur, on average, 941 cache misses.
A linked list on contiguous memory
We can now remove the tmp_list to get a linked list laid out contiguously in memory, although in practice you’ll rarely use a list in such a way:
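Continuing the earlier sketch (same includes and assumptions), the list benchmark simply loses its interleaved allocations:

```cpp
static void BM_FindListContiguous(benchmark::State& state) {
  std::list<int> list;  // no tmp_list: consecutive allocations tend to land
                        // next to each other on the heap
  for (int i = 0; i < state.range(0); ++i)
    list.push_back(i);
  const int last = static_cast<int>(state.range(0)) - 1;
  for (auto _ : state)
    benchmark::DoNotOptimize(std::find(list.begin(), list.end(), last));
}
BENCHMARK(BM_FindListContiguous)->Range(8, 8 << 10);
```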
The vector version is now only about 7 times faster.
The remaining difference can be explained by the size of a list node (32 bytes): a cache line can only hold 2 nodes, while it can hold 16 ints.
We can count the number of cache misses:
We can see that the number of cache misses has greatly decreased.
There is also the fact that the vector gets its storage with a single (amortized) malloc, while the list pays one malloc per node; those n mallocs are another major drawback of using a list instead of a vector.
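A quick way to see this allocation gap (a hypothetical illustration, not one of the article’s benchmarks) is a counting allocator:

```cpp
#include <cstddef>
#include <iostream>
#include <list>
#include <vector>

static std::size_t g_allocs = 0;  // number of calls to allocate()

// Minimal allocator that counts every allocation it performs.
template <class T>
struct CountingAlloc {
  using value_type = T;
  CountingAlloc() = default;
  template <class U>
  CountingAlloc(const CountingAlloc<U>&) {}
  T* allocate(std::size_t n) {
    ++g_allocs;
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }
  void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <class T, class U>
bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
template <class T, class U>
bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }

int main() {
  g_allocs = 0;
  std::vector<int, CountingAlloc<int>> v(8192);  // one allocation up front
  std::cout << "vector: " << g_allocs << " allocation(s)\n";

  g_allocs = 0;
  std::list<int, CountingAlloc<int>> l(8192, 0);  // one allocation per node
  std::cout << "list:   " << g_allocs << " allocation(s)\n";
}
```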
What about inserting in the middle of the list?
We measured the performance difference in a use case where both data structures have the same theoretical complexity, but what about a case where the list is supposed to clearly win? Let’s see what happens if we try to insert in the middle of the data structure.
We can measure using the following code:
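A sketch of what this measurement can look like (the starting size of 1024 and the iterator handling are assumptions; the list keeps its insertion point, so we only pay for the inserts themselves):

```cpp
#include <benchmark/benchmark.h>

#include <iterator>
#include <list>
#include <vector>

static void BM_InsertMiddleVector(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v(1024, 0);  // assumed starting size
    auto it = v.begin() + v.size() / 2;
    for (int i = 0; i < state.range(0); ++i)
      it = v.insert(it, i);       // shifts the right half on every insert
    benchmark::DoNotOptimize(v.data());
  }
}
BENCHMARK(BM_InsertMiddleVector)->Range(8, 8 << 10);

static void BM_InsertMiddleList(benchmark::State& state) {
  for (auto _ : state) {
    std::list<int> l(1024, 0);            // assumed starting size
    auto it = std::next(l.begin(), 512);  // find the middle once, keep it
    for (int i = 0; i < state.range(0); ++i)
      it = l.insert(it, i);               // O(1) link-in, but one malloc each
    benchmark::DoNotOptimize(&l);
  }
}
BENCHMARK(BM_InsertMiddleList)->Range(8, 8 << 10);

BENCHMARK_MAIN();
```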
The results (at this point I’m starting to think that plots would have been better):
As you can see, the vector is still faster at many sizes; the list only becomes faster once we start inserting 4K elements!
Why? Well, each time we insert into the list we pay for a malloc, and malloc can be really costly.
People tend to focus on the cost of moving the vector’s elements just to insert one, but in practice (unless you do it too often), the cost of all those mallocs will still be greater.
Conclusion
Should we then stop using linked lists? No, there are still use cases where a list will outperform a vector (as we saw earlier, when you do a lot of insertions), but the point of this article is to show that we need to be more careful and informed before committing to one, and a single insert in the middle of the list doesn’t justify the cost. It’s important to measure, and to understand that theoretical models may not take into consideration things like cache costs and malloc costs.
The same reasoning applies to other data structures as well, such as trees (std::map and std::set are usually implemented using a red-black tree).
Footnotes
1. See Instruction Level Parallelism or Out of Order Execution. ↩
2. A typical load from memory is around 100 times slower than a simple ADD. ↩
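3. https://github.com/google/benchmark ↩
5. jevents is part of pmu-tools: https://github.com/andikleen/pmu-tools ↩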