What is the time complexity of the sleep sort?

Time Complexity

Time Complexity Problem Overview


Given this sort algorithm how do you express its time complexity?

Originally presented here (partial archive).

#!/bin/bash
# Sleep for the value, then print it; wake-up order is sorted order.
function f() {
    sleep "$1"
    echo "$1"
}
# Launch one background process per argument.
while [ -n "$1" ]
do
    f "$1" &
    shift
done
wait    # block until every background process has printed

example usage:
./sleepsort.bash 5 3 6 3 6 3 1 4 7

Time Complexity Solutions


Solution 1 - Time Complexity

O(max(input)+n)

The complexity just appears awkward to express because most sorting algorithms are data agnostic. Their time scales with the amount of data, not the data itself.

FWIW, as pointed out here, this is not a reliable algorithm for sorting data.
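
To see where each term comes from, here is a minimal Python sketch of the same algorithm (a hypothetical threaded rendering, not the original script): starting the n workers contributes the O(n) term, and waiting for the slowest worker contributes the O(max(input)) term.

import threading, time

def sleep_sort(values):
    # Wake-up order encodes the sort order; appends are serialized by a lock.
    result = []
    lock = threading.Lock()
    def worker(v):
        time.sleep(v)
        with lock:
            result.append(v)
    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()   # spawning the workers is the O(n) term
    for t in threads:
        t.join()    # waiting is dominated by the largest value: O(max(input))
    return result

print(sleep_sort([5, 3, 6, 3, 6, 3, 1, 4, 7]))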

Solution 2 - Time Complexity

One point which nobody seems to have addressed is how those sleeps are implemented. Ultimately they end up in a scheduler somewhere, and the operational complexity will depend on the scheduling algorithm used. For example, if the sleeps are put as events in a priority queue you will likely end up with something equivalent to heapsort, with complexity O(n log n). A naive scheduling algorithm might result in O(n^2).
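
As a concrete model (assuming, hypothetically, that the scheduler keeps wake-up deadlines in a binary heap), the scheduling work of sleep sort collapses into plain heapsort:

import heapq

def scheduler_model(values):
    # "sleep v" becomes: push a wake-up deadline into the scheduler's heap.
    event_queue = []
    for v in values:
        heapq.heappush(event_queue, v)             # O(log n) per insert
    output = []
    while event_queue:
        output.append(heapq.heappop(event_queue))  # earliest deadline first, O(log n)
    return output

print(scheduler_model([5, 3, 6, 3, 6, 3, 1, 4, 7]))  # n pushes + n pops = O(n log n)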

Solution 3 - Time Complexity

I think paxdiablo is nearest, but not for the right reason. Time complexity ignores issues on real hardware such as cache sizes, memory limits and, in this case, the limited number of processes and the operation of the scheduler.

Based on the Wikipedia page for Time complexity, I'd say the answer is that you can't determine the runtime complexity, because if it is defined as:

> Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.

Then we can't talk about the runtime complexity of this algorithm, because the time the elementary operations take varies so vastly that the total time would differ by more than a constant factor.

Solution 4 - Time Complexity

Both the time complexity and the process complexity of that algorithm are O(braindead):

  • With a sufficiently large value in the data set, you'll be waiting for an answer until the sun explodes (2^64 seconds is a little over half a trillion years).
  • With a sufficiently large data set size, you'll (a) hit your process limit; and (b) find that early sleeps finish before the later ones even begin, meaning the set (2,9,9,9,9,9,...,9,9,1) will not sort the 1 and 2 correctly (see the sketch below this answer).

Time complexity is irrelevant in this case. You can't get any less optimised than "wrong". It's okay to use complexity analysis to compare algorithms as the data set size changes, but not when the algorithms are ludicrous in the first place :-)
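
For what it's worth, the race in point (b) above is easy to reproduce without waiting years. Here is a hypothetical Python sketch (my own construction, not from the thread) that shrinks the sleeps to milliseconds so that start-up time rivals the gaps between values:

import threading, time

def racy_sleep_sort(values, scale=0.001):
    # Millisecond sleeps: thread start-up time now rivals the gaps between values.
    result = []
    lock = threading.Lock()
    def worker(v):
        time.sleep(v * scale)
        with lock:
            result.append(v)
    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()   # by the time the last worker starts, early sleeps may be over
    for t in threads:
        t.join()
    return result

print(racy_sleep_sort([2] + [9] * 50 + [1])[:3])  # often not [1, 2, 9]

On most machines the 2 wakes up before the worker for the 1 has even been started, so the output begins with the 2.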

Solution 5 - Time Complexity

I'm with Jordan, except that I think the wall-clock-time complexity is better expressed as O(2^m), where m is the size in bits of each item, rather than O(max(input)).

If each item is m bits, the largest item will have the integer value 2^m (minus one, but nobody cares about that). By construction, the algorithm requires the set-up time to be smaller than 1 second, a constant.

So wall-clock-time complexity O(2^m), operation-count complexity O(n).

A modified algorithm taking into account set-up time would likely have wall-clock-time complexity O(2^m + n). For instance, it could note the current time at the beginning, calculate base_time = start_time + k*len(list) (for some appropriate constant k), then have the threads sleep until time base_time+i. Then k*len(list) is clearly O(n) and i is O(2^m) as before, for a total of O(2^m+n).
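
A sketch of that modified algorithm in Python (the names base_time and k follow the answer's pseudocode; the threading details are my assumptions):

import threading, time

def sleep_sort_absolute(values, k=0.001):
    # Push the common deadline past the O(n) set-up phase, then sleep until
    # base_time + value, so start-up order can no longer reorder the output.
    result = []
    lock = threading.Lock()
    base_time = time.monotonic() + k * len(values)   # k*len(list) covers set-up: O(n)
    def worker(v):
        time.sleep(max(0.0, base_time + v - time.monotonic()))
        with lock:
            result.append(v)
    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort_absolute([5, 3, 6, 3, 6, 3, 1, 4, 7]))  # sleeps are O(2^m), set-up O(n)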

Solution 6 - Time Complexity

If you read the thread you'll see that your question is already answered. The time complexity is O(max(input)) and the operational complexity (number of operations) is O(n).

Solution 7 - Time Complexity

Though it looks linear, I think the complexity is still O(log(n) * max(input)).

When we talk about asymptotic time complexity, it means how much time is taken when n grows infinitely large.

A comparison-based sorting algorithm cannot be faster than O(n * log(n)), and sleep sort is actually comparison-based:

Each process sleeps for its value and then wakes. The OS needs to find the least remaining sleep time among all the sleeping processes, and wake that one up when its time arrives.

This needs a priority queue, which takes O(log n) time to insert an element, O(1) to find the minimum element, and O(log n) to remove the minimum element.

When n gets very large, it will take more than 1 second to wake a process up, which makes the running time larger than O(n).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | justinhj | View Question on Stackoverflow
Solution 1 - Time Complexity | blahdiblah | View Answer on Stackoverflow
Solution 2 - Time Complexity | hammar | View Answer on Stackoverflow
Solution 3 - Time Complexity | justinhj | View Answer on Stackoverflow
Solution 4 - Time Complexity | paxdiablo | View Answer on Stackoverflow
Solution 5 - Time Complexity | Jiří Baum | View Answer on Stackoverflow
Solution 6 - Time Complexity | Jordan Running | View Answer on Stackoverflow
Solution 7 - Time Complexity | RnMss | View Answer on Stackoverflow