
The constant is the biggest number in the array


If runtime depends on the input, it’s not constant time.


It's constant with regard to the length of the input array, which is the usual metric for sorting-algorithm performance. That is, d(runtime)/d(len(input)) can be constant even though d(runtime)/d(max(input)) is not, provided you assume len(input) is independent of max(input) (an assumption that's inconsistent with the input having uniformly distributed random members, which is probably the more common model).
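
A minimal counting-sort sketch in C, as one concrete example (not necessarily the algorithm from the article) of a sort whose runtime grows with max(input) and not only with len(input):

    #include <stdlib.h>

    /* Counting sort for unsigned ints: O(n + k) time and O(k) space,
     * where k = max(input). Linear in the element count, but the cost
     * also grows with the largest value in the array. */
    void counting_sort(unsigned *a, size_t n) {
        if (n == 0) return;

        unsigned k = a[0];                    /* find max(input) */
        for (size_t i = 1; i < n; i++)
            if (a[i] > k) k = a[i];

        size_t *count = calloc((size_t)k + 1, sizeof *count);
        for (size_t i = 0; i < n; i++)
            count[a[i]]++;                    /* tally each value */

        size_t out = 0;
        for (size_t v = 0; v <= k; v++)       /* this pass alone costs O(max(input)) */
            for (size_t c = count[v]; c > 0; c--)
                a[out++] = (unsigned)v;

        free(count);
    }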


It's constant with respect to the number of elements, just not the maximum magnitude of those elements.


I don't see how it is. Different elements, but the same number of them, still give a different runtime. There's no single constant time covering both [0, 1] and [0, 10], even though both have two elements.


[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

is just as fast as

[9]

If you really want to make it strictly constant time, just append INT32_MAX to the end of the array before sorting, and then pop it after.
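
A rough sketch of that trick in C; qsort stands in here for whatever max-dependent sort is actually under discussion, and sort_strictly_constant is just an illustrative name:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    static int cmp_i32(const void *a, const void *b) {
        int32_t x = *(const int32_t *)a, y = *(const int32_t *)b;
        return (x > y) - (x < y);
    }

    /* Pad the input with INT32_MAX so max(input) is the same for every
     * call; a sort whose cost depends on the largest value then always
     * pays the same (enormous) cost, i.e. "strictly constant time". */
    void sort_strictly_constant(int32_t *a, size_t n) {
        int32_t *tmp = malloc((n + 1) * sizeof *tmp);
        memcpy(tmp, a, n * sizeof *tmp);
        tmp[n] = INT32_MAX;                       /* append the sentinel */
        qsort(tmp, n + 1, sizeof *tmp, cmp_i32);  /* stand-in for the real sort */
        memcpy(a, tmp, n * sizeof *a);            /* sentinel sorts last; drop ("pop") it */
        free(tmp);
    }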


Oh yeah, because assuming maximum values in your domain doesn't move all algorithms into constant time and render complexity theory void.


It doesn't though. How would merge sort become constant time if you assume a maximum value? It's also a joke...


The main "value" in the domain of sorting problems is the number of elements in your collection.

A subclass of the problem additionally assumes the elements are integers; then the element values become another "value" in the domain. (But in general, sorting problems only need their elements to be comparable, not necessarily integers.)


It’s not constant; the algorithm runs in pseudo-polynomial time: the cost is polynomial in the numeric value of the largest element, which is exponential in the number of bits needed to represent it.



