
On Tue, 16 Nov 2021 14:09:07 +1300, Michael Cree wrote:
Excepting that binary search turns a sequential O(n) search into an O(log n) search, and the memory access timings don't change that fact, so for a sufficiently large array a binary search will always outperform a linear search even if access time is much worse for individual accesses in the binary search.
I’m thinking that successive elements in a sequential search are likely to sit in the same cache line, whereas a binary search bounces around between different cache lines. With access times differing by a couple of orders of magnitude, that could make quite a difference.
It is better to consider that modern "random access memory" (RAM) is grossly misnamed. It should be called "streaming memory". Streaming sequential memory locations into the CPU is extremely efficient, but a single random access to memory is very inefficient, costing hundreds of CPU clock cycles.
And sequential search seems optimized for precisely such streaming behaviour.