sleep(1e66 - (time() - startTime)) will give us constant complexity until n becomes too big. However, one can just increase that scalability constant and get constant time again.
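A minimal sketch of that padding trick, in Python (the BUDGET constant is a stand-in for the 1e66 above, shrunk so the example actually finishes):

```python
import time

BUDGET = 5.0  # seconds; the "scalability constant" (stand-in for 1e66)

def constant_time_sort(items):
    # Do the real work, then sleep away whatever is left of the budget,
    # so the total wall time is ~BUDGET regardless of the input (until the
    # sort itself takes longer than the budget).
    start = time.time()
    result = sorted(items)
    time.sleep(max(0.0, BUDGET - (time.time() - start)))
    return result

print(constant_time_sort([3, 1, 2]))  # always takes ~BUDGET seconds
```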
Well, I'd like to make it as clean as possible. If they have the cosmological constant in physics, why not have a scalability constant in computer science? It would simplify everything.
*dull mode on* No, it doesn't. As stated in the comments above, the time complexity of sleepsort depends not on the number of elements in the input, but on the elements' values. Taking sleep(n) itself as O(n), we get O(max(V_i)) in the best case (enough CPUs) and O(sum(V_i)) in the worst case, on a single core.
It actually makes big-O notation useless here, since it can't show how the algorithm scales with the number of elements. In fact, it's impossible to give any asymptotic function for such a case, since the input values are unknown.
PS. It's worth keeping in mind that the "time" in "big-O time complexity" isn't time in seconds or any other time unit, but the number of operations/steps. And not even the absolute number, but a function that is asymptotically close to it. It's about scalability, not absolute time.
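For reference, a rough sleep sort sketch in Python (the 0.01 scale factor is only there to keep the demo short). With enough parallelism the wall time is roughly max(V_i), and values that are too close together can still race, as a later comment points out:

```python
import threading
import time

def sleep_sort(values):
    # Each element sleeps proportionally to its own value, so the runtime
    # depends on the values themselves, not on how many there are.
    result = []
    lock = threading.Lock()

    def worker(v):
        time.sleep(v * 0.01)
        with lock:
            result.append(v)

    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 4, 1, 5]))  # finishes after ~max(values) * 0.01 seconds
```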
I haven't looked at what sleepsort is, but if it's linear then that shouldn't matter.
If you're comparing two algorithms and they have the same O(n) time complexity, one is either always faster or always slower.
That's a very good point. I just went and read the algorithm, and to be honest I can't figure out off the top of my head what the best way to define time complexity for sleep sort is. So you're probably onto something.
n in the O notation is the size of the input. Worst case for sleepsort is a list with one very large element. The size is the logarithm of that element, so the time complexity is actually O(2^n).
No, the worst case for sleep sort is two very large, nearly equal elements, since a list of size 1 is already sorted, and no reductions can be performed on elements that are close together without risking a race condition.
And O() can take any function as its argument, as long as that function is tied to the running time of the algorithm. In the case of O(max(n)), n is the set of elements to be sorted, not the size of that set.
An example is the complexity of Dijkstra's algorithm, O((V+E) lg V)
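To make the "n is the encoded size of the input" argument above concrete, a quick back-of-the-envelope check in Python (values picked arbitrarily): a single element of value v costs about v seconds of sleeping, but only occupies about log2(v) bits of input.

```python
# Sleep time grows with the value itself, i.e. roughly 2^(bits of input).
for v in (10, 1_000, 1_000_000):
    bits = v.bit_length()  # encoded size of this input, in bits
    print(f"value={v:>9}  input bits={bits:>3}  sleep time ~ {v} s (~2^{bits} s)")
```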
Even the best sorting algorithms scale non-linearly: if it takes n units of time to sort n items, it will always take more than n+1 units of time to sort n+1 items. Here, you're cheating by just adding more sleep time to make it scale linearly.
If anyone is wondering "does this really work?": no, a real linear-time sorting algorithm would continue to be linear for all sufficiently large n, but this algorithm will eventually require n log n time again when n becomes very large. But if your lists are never that large, you may never find out.
> Even the best sorting algorithms scale non-linearly
Well, not quite: there are natural sorting algorithms like Gravity Sort that would sort in linear time (though in practice a naive implementation is quadratic on a computer).
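For the curious, a naive gravity (bead) sort sketch in Python (assuming small non-negative integers). The nested passes over n elements times max(values) are what keep the sequential version from being linear, even though the physical "beads falling" model is:

```python
def bead_sort(values):
    # Naive bead sort: O(n * max(values)) work on a sequential machine.
    if not values:
        return []
    max_v = max(values)
    # beads[i][j] == 1 means element i has a bead on rod j
    beads = [[1] * v + [0] * (max_v - v) for v in values]
    for j in range(max_v):
        # Let the beads on rod j "fall" to the bottom rows
        column_sum = sum(row[j] for row in beads)
        for i in range(len(values)):
            beads[i][j] = 1 if i >= len(values) - column_sum else 0
    return [sum(row) for row in beads]

print(bead_sort([5, 3, 1, 7, 4, 1]))  # [1, 1, 3, 4, 5, 7]
```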
The algorithm takes however long it takes and then sleeps until the ~11 days are up, so it will almost always take the exact same amount of time. The only problem is that it takes ~11 days no matter what.
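Putting those last few comments together, a hedged sketch of the pad-to-linear trick in Python. PER_ELEMENT is an assumed per-element budget standing in for the ~11-day sleep mentioned above, shrunk so the example runs:

```python
import time

PER_ELEMENT = 0.5  # assumed seconds of budget per element (stand-in for ~11 days)

def linear_sort(items):
    # The real work is O(n log n); the sleep pads the total up to a budget
    # that grows linearly in n, so the measured time looks linear until the
    # sort itself overruns the budget.
    start = time.time()
    result = sorted(items)
    budget = PER_ELEMENT * len(items)
    time.sleep(max(0.0, budget - (time.time() - start)))
    return result

print(linear_sort([3, 1, 2]))  # ~0.5 s per element for reasonably sized inputs
```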
Comments
> sleep(1e66 - (time() - startTime)) will give us constant complexity until n becomes too big. However, one can just increase that scalability constant and get constant time again.
I like that.
Redefining the comparison operators to 'what the hell do I care' will get you there as well.
1 > 0... Sure.
0 > 1... Might as well be correct.
1 = 0 why not... 🤷
https://www.cs.princeton.edu/courses/archive/fall13/cos226/lectures/52Tries.pdf
The algorithm theoretically still satisfies linear time
One might actually need this in cryptography, against some attacks (timing side channels, for example). But it won't be this simple.
Potential O(n).
Just a bit of luck required.
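The QuickBogoSort linked below is its own variant, but the "potentially O(n), with a bit of luck" point is easiest to see with plain bogosort; a minimal sketch:

```python
import random

def bogosort(items):
    # The sortedness check is O(n), so if the input (or the very first
    # shuffle) happens to be in order, the whole run is linear. Otherwise,
    # expect to wait a while.
    items = list(items)
    while any(a > b for a, b in zip(items, items[1:])):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))
```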
https://bsky.app/profile/serpent7776.bsky.social/post/3lcxq3k75os2c
https://github.com/Gaming32/ArrayV/blob/main/src/main/java/io/github/arrayv/sorts/distribute/QuickBogoSort.java
This is the result:
1) Stalin sort
But tends to 'lose' items to be sorted...
> Well, not quite: there are natural sorting algorithms like Gravity Sort that would sort in linear time (though in practice a naive implementation is quadratic on a computer).
https://youtu.be/zfYd4qUSRu4?si=e4WASv51cdTjWYoR
The first known example comes from a post on 4chan in 2011: https://archive.fo/xhGo#selection-203.0-203.4
The answer actually gives useful feedback on how the applicant responds. Do they have geeky knowledge and a sense of humour?
Alt text: https://explainxkcd.com/3026#Transcript