
> How could it not be, given that any physical medium has finite information density, and information cannot propagate faster than the speed of light?

You could have a tape of infinite length, and if you only ever request "the next one" then clearly the latency is constant.

> And on a practical computer the size of N ...

Don't be silly. N is the size of the input set, not the size of the universe.



I feel we must be talking past each other. I thought the examples so far were about random access over N units of memory. The standard convention of pretending this can be done in O(1) has always struck me as bizarre, because it's neither true for practical, finite values of N (memory hierarchies are a thing) nor asymptotically for any physically plausible idealization of reality.
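
Rough sketch of the kind of thing I mean (illustrative Python I wrote for this comment; the array sizes and probe count are arbitrary, and interpreter overhead blurs the effect you'd see in C):

    # Compare a fixed number of consecutive reads with the same number of
    # random reads, over arrays of growing size. Once the array falls out
    # of cache, the random column grows even though both do PROBES reads.
    import array, random, time

    PROBES = 100_000

    def total(a, idx):
        s = 0
        for i in idx:
            s += a[i]
        return s

    for n in (10**5, 10**6, 10**7):
        a = array.array('q', range(n))
        seq = range(PROBES)                                 # consecutive, cache-friendly
        rnd = [random.randrange(n) for _ in range(PROBES)]  # scattered over all of a
        t0 = time.perf_counter(); total(a, seq); t1 = time.perf_counter()
        total(a, rnd); t2 = time.perf_counter()
        print(f"N={n:>9}  sequential={t1 - t0:.4f}s  random={t2 - t1:.4f}s")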


Random access times are irrelevant for radix sort since no random access to the input data is needed.


You still need to write each element to the target array, and under that model each such write costs O(sqrt(N)), so N writes give a "strict" running time for radix sort of O(N * sqrt(N)) = O(N^1.5) with a layered memory hierarchy. In a flat memory hierarchy (no caching), as found on microcontrollers, it reduces back to O(N).
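
Back-of-the-envelope version of that model, purely illustrative (the per-write exponent is the assumption being discussed, not a measurement):

    # Modelled total cost of N writes under the two assumptions above.
    def modelled_cost(n, per_write):
        return n * per_write(n)

    n = 10**6
    flat    = modelled_cost(n, lambda m: 1)       # flat memory: O(N)
    layered = modelled_cost(n, lambda m: m**0.5)  # sqrt(N) per write: O(N^1.5)
    print(flat, layered)                          # 1000000 vs 1e9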


But radix sort doesn't write randomly to the output array like that. Yes, each element ends up in the right place, but it gets there through a series of passes, each of which has better locality of reference than purely random writes.

Because of this, radix sort is usually faster than an "omniscient sort" that simply scans over the input once, writing each element directly to its final location in the output array.
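
Sketch of what I mean, in case it helps (LSD radix sort with byte-sized digits; illustrative Python I wrote for this comment, not tuned code):

    def radix_sort(a, key_bytes=8):
        n = len(a)
        out = [0] * n
        for shift in range(0, key_bytes * 8, 8):
            # Pass 1: sequential scan of the input to build a histogram.
            count = [0] * 256
            for x in a:
                count[(x >> shift) & 0xFF] += 1
            # Prefix sums turn counts into starting offsets per bucket.
            pos = [0] * 256
            for d in range(1, 256):
                pos[d] = pos[d - 1] + count[d - 1]
            # Pass 2: sequential scan again; each write goes to the current
            # head of one of 256 buckets, so writes are clustered per bucket
            # rather than scattered over the whole output array.
            for x in a:
                d = (x >> shift) & 0xFF
                out[pos[d]] = x
                pos[d] += 1
            a, out = out, a
        return a

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))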



