Are there any O(1/n) algorithms?

Tags: Theory, Complexity Theory, Big O

Theory Problem Overview


Are there any O(1/n) algorithms?

Or anything else which is less than O(1)?

Theory Solutions


Solution 1 - Theory

This question isn't as silly as it might seem to some. At least theoretically, something such as O(1/n) is completely sensible when we take the mathematical definition of the Big O notation:

f(x) = O(g(x)) as x -> infinity
<=>
there exist constants M > 0 and x0 such that |f(x)| <= M * |g(x)| for all x > x0

Now you can easily substitute g(x) for 1/x … it's obvious that the above definition still holds for some f.
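
As a concrete instance (my own example, not part of the original answer), take f(x) = 1/(2x); then

|f(x)| = \frac{1}{2x} \le 1 \cdot \left| \frac{1}{x} \right| \quad \text{for all } x > 1

so f(x) = O(1/x) with M = 1 and x_0 = 1.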

For the purpose of estimating asymptotic run-time growth, this is less viable … a meaningful algorithm cannot get faster as the input grows. Sure, you can construct an arbitrary algorithm to fulfill this, e.g. the following one:

from time import sleep

def get_faster(data):                    # `list` renamed to avoid shadowing the built-in
    how_long = (1 / len(data)) * 100000  # sleep time shrinks as the input grows
    sleep(how_long)

Clearly, this function spends less time as the input size grows … at least until some limit enforced by the hardware (the precision of the numbers, the minimum time that sleep can wait, the time to process arguments, etc.): this limit would then be a constant lower bound, so in fact the above function still has runtime O(1).

But there are in fact real-world algorithms where the runtime can decrease (at least partially) when the input size increases. Note that these algorithms will not exhibit runtime behaviour below O(1), though. Still, they are interesting. For example, take the very simple text search algorithm by Horspool. Here, the expected runtime will decrease as the length of the search pattern increases (but increasing length of the haystack will once again increase runtime).
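
Since the answer names Horspool's algorithm, here is a minimal sketch of it (my own illustration; the function name and structure are mine, not part of the answer):

import string

def horspool_search(haystack, needle):
    m, n = len(needle), len(haystack)
    if m == 0:
        return 0
    # Bad-character table: a longer needle permits larger skips on a mismatch.
    shift = {c: m - 1 - i for i, c in enumerate(needle[:-1])}
    pos = 0
    while pos + m <= n:
        if haystack[pos:pos + m] == needle:
            return pos
        pos += shift.get(haystack[pos + m - 1], m)   # default skip grows with len(needle)
    return -1

The expected skip per mismatch grows with the pattern length, which is exactly why the expected runtime falls as the needle gets longer.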

Solution 2 - Theory

Yes.

There is precisely one algorithm with runtime O(1/n), the "empty" algorithm.

For an algorithm to be O(1/n) means that it executes asymptotically in fewer steps than the algorithm consisting of a single instruction. If it executes in fewer steps than one step for all n > n0, it must consist of precisely no instruction at all for those n. Since checking 'if n > n0' costs at least 1 instruction, it must consist of no instruction for all n.

Summing up: The only algorithm which is O(1/n) is the empty algorithm, consisting of no instruction.

Solution 3 - Theory

sharptooth is correct: O(1) is the best possible performance. However, it does not imply a fast solution, just a fixed-time solution.

An interesting variant, and perhaps what is really being suggested, is which problems get easier as the population grows. I can think of one, albeit contrived and tongue-in-cheek, answer:

Do any two people in a set have the same birthday? When n exceeds 365, return true. Although for fewer than 365 people, this is O(n ln n). Perhaps not a great answer, since the problem doesn't slowly get easier but just becomes O(1) for n > 365.
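
A quick sketch of that check (my code; it uses a hash set, so the sub-365 case is O(n) rather than the O(n ln n) a sort would give):

def has_shared_birthday(birthdays):
    if len(birthdays) > 365:                      # pigeonhole: a duplicate is certain
        return True
    return len(set(birthdays)) < len(birthdays)   # O(n) duplicate check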

Solution 4 - Theory

That's not possible. The definition of Big-O is a "not greater than" inequality:

A(n) = O(B(n))
<=>
exists constants C and n0, C > 0, n0 > 0 such that
for all n > n0, A(n) <= C * B(n)

So B(n) is in fact the maximum value; therefore, if it decreases as n increases, the estimate will not change.

Solution 5 - Theory

From what I have learned of big O notation, even if you need 1 step (such as checking a variable or doing an assignment), that is O(1).

Note that O(1) is the same as O(6), because the "constant" doesn't matter. That's why we say O(n) is the same as O(3n).

So if you need even 1 step, that's O(1)... and since your program needs at least 1 step, the minimum an algorithm can go is O(1). Unless we don't do it at all; then it is O(0), I think? If we do anything at all, then it is O(1), and that's the minimum it can go.

(If we choose not to do it, then it may become a Zen or Tao question... in the realm of programming, O(1) is still the minimum).

Or how about this:

programmer: boss, I found a way to do it in O(1) time!
boss: no need to do it, we are bankrupt this morning.
programmer: oh then, it becomes O(0).

Solution 6 - Theory

No, this is not possible:

As n tends to infinity, 1/n eventually reaches 1/infinity, which is effectively 0.

Thus, the big-oh class of the problem would be O(0) with a massive n, but closer to constant time with a low n. This is not sensible, as the only thing that can be done in faster than constant time is:

void nothing() {}

And even this is arguable!

As soon as you execute a command, you're in at least O(1), so no, we cannot have a big-oh class of O(1/n)!

Solution 7 - Theory

For anyone who's reading this question and wants to understand what the conversation is about, this might help:

|    |constant |logarithmic |linear|  N-log-N |quadratic|  cubic  |  exponential  |
|  n |  O(1)   | O(log n)   | O(n) |O(n log n)|  O(n^2) |  O(n^3) |     O(2^n)    |
|  1 |       1 |          1 |     1|         1|        1|       1 |             2 |
|  2 |       1 |          1 |     2|         2|        4|       8 |             4 |
|  4 |       1 |          2 |     4|         8|       16|      64 |            16 |
|  8 |       1 |          3 |     8|        24|       64|     512 |           256 |
| 16 |       1 |          4 |    16|        64|      256|   4,096 |        65,536 |
| 32 |       1 |          5 |    32|       160|    1,024|  32,768 | 4,294,967,296 |
| 64 |       1 |          6 |    64|       384|    4,096| 262,144 |   1.8 x 10^19 |

Solution 8 - Theory

What about not running the function at all (NOOP)? Or using a fixed value? Does that count?

Solution 9 - Theory

I often use O(1/n) to describe probabilities that get smaller as the inputs get larger -- for example, the probability that a fair coin comes up tails on every one of log2(n) flips is O(1/n).
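
Spelled out (my arithmetic, not the answerer's):

\Pr[\text{all } \log_2 n \text{ flips are tails}] = \left( \tfrac{1}{2} \right)^{\log_2 n} = \frac{1}{2^{\log_2 n}} = \frac{1}{n} = O(1/n)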

Solution 10 - Theory

O(1) simply means "constant time".

When you add an early exit to a loop[1], you're (in big-O notation) turning an O(1) algorithm into O(n), but making it faster.

The trick is that, in general, the constant-time algorithm is best, and linear is better than exponential, but for small amounts of n, the exponential algorithm might actually be faster.

1: Assuming a static list length for this example

Solution 11 - Theory

Many people have given the correct answer (no). Here's another way to prove it: in order to have a function, you have to call the function, and you have to return an answer. This takes a certain constant amount of time. EVEN IF the rest of the processing took less time for larger inputs, printing out the answer (which we can assume to be a single bit) takes at least constant time.

Solution 12 - Theory

I believe quantum algorithms can do multiple computations "at once" via superposition...

I doubt this is a useful answer.

Solution 13 - Theory

If a solution exists, it can be prepared and accessed in constant time, i.e. immediately. For instance, use a LIFO data structure if you know the sorting query asks for reverse order. The data is then already sorted, given that the appropriate model (LIFO) was chosen.
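
A hypothetical sketch of that idea (class and method names are mine): if every query asks for reverse insertion order, a stack keeps the answer ready the moment it is requested:

class ReverseQueryStore:
    def __init__(self):
        self._stack = []

    def insert(self, item):
        self._stack.append(item)        # O(1) per insert

    def query_reverse_order(self):
        return reversed(self._stack)    # O(1) view: no sorting ever happens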

Solution 14 - Theory

You can't go below O(1); however, O(k) where k is less than N is possible. These are called sublinear-time algorithms. In some problems, a sublinear-time algorithm can give only approximate solutions. However, sometimes an approximate solution is just fine, perhaps because the dataset is too large or because it is way too computationally expensive to examine it all.

Solution 15 - Theory

Which problems get easier as the population grows? One answer is something like BitTorrent, where download speed is an inverse function of the number of nodes. Contrary to a car, which slows down the more you load it, a file-sharing network like BitTorrent speeds up as more nodes connect.

Solution 16 - Theory

O(1/n) is not less than O(1); it basically means that the more data you have, the faster the algorithm goes. Say you get an array and always fill it up to 10^100 elements if it has fewer than that, and do nothing if there's more. This one is not O(1/n) of course, but something like O(-n) :) Too bad big-O notation does not allow negative values.
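
A minimal sketch of that idea (mine; a small target stands in for the answer's 10^100 so that it actually terminates):

TARGET = 10_000   # stand-in for 10^100

def fill_up(arr):
    # The fuller the input already is, the less work remains;
    # past TARGET elements, the function does nothing at all.
    while len(arr) < TARGET:
        arr.append(0)
    return arr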

Solution 17 - Theory

What about this:

void findRandomInList(List<Integer> l)
{
    Random rng = new Random();
    while (true)
    {
        int candidate = rng.nextInt();
        if (l.contains(candidate))   // the denser the list, the sooner we expect a hit
            return;
    }
}

As the size of the list grows, the expected runtime of the program decreases.

Solution 18 - Theory

As has been pointed out, apart from the possible exception of the null function, there can be no O(1/n) functions, as the time taken will have to approach 0.

Of course, there are some algorithms, like that defined by Konrad, which seem like they should be less than O(1) in at least some sense.

from time import sleep

def get_faster(data):          # `list` renamed to avoid shadowing the built-in
    how_long = 1 / len(data)
    sleep(how_long)

If you want to investigate these algorithms, you should either define your own asymptotic measurement or your own notion of time. For example, I could allow the algorithm above a set number of "free" operations: if I define t' by excluding the time for everything but the sleep, then t' = 1/n, which is O(1/n). There are probably better examples, as the asymptotic behaviour here is trivial. In fact, I am sure that someone out there can come up with senses that give non-trivial results.

Solution 19 - Theory

Most of the other answers interpret big-O to be exclusively about the running time of an algorithm. But since the question didn't mention it, I thought it worth mentioning the other application of big-O in numerical analysis, which is about error.

Many algorithms can be O(h^p) or O(n^{-p}), depending on whether you're talking about step size (h) or number of divisions (n). For example, in Euler's method, you look for an estimate of y(h) given that you know y(0) and dy/dx (the derivative of y). Your estimate of y(h) is more accurate the closer h is to 0. So in order to find y(x) for some arbitrary x, one takes the interval from 0 to x, splits it into n pieces, and runs Euler's method at each point to get from y(0) to y(x/n) to y(2x/n), and so on.

So Euler's method is then an O(h) or O(1/n) algorithm, where h is typically interpreted as a step size and n is interpreted as the number of times you divide an interval.
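
A short sketch of Euler's method as described above (my code; the test problem y' = y is my choice):

import math

def euler(dydx, y0, x_end, n):
    # n steps of size h = x_end / n; the global error shrinks like O(h) = O(1/n)
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * dydx(x, y)
        x += h
    return y

# Solve y' = y with y(0) = 1, so y(1) = e; the error roughly halves as n doubles.
for n in (10, 20, 40, 80):
    print(n, abs(euler(lambda x, y: y, 1.0, 1.0, n) - math.e))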

You can also have O(1/h) in real numerical analysis applications, because of floating point rounding errors. The smaller you make your interval, the more cancellation occurs in the implementation of certain algorithms, the more loss of significant digits, and therefore the more error, which propagates through the algorithm.

For Euler's method, if you are using floating point, make the step small enough and cancellation sets in: you're adding a small number to a big number, leaving the big number unchanged. For algorithms that calculate the derivative by subtracting two evaluations of a function at very close positions, approximating y'(x) with (y(x+h) - y(x)) / h, in smooth functions y(x+h) gets close to y(x), resulting in large cancellation and an estimate for the derivative with fewer significant figures. This will in turn propagate to whatever algorithm you require the derivative for (e.g., a boundary value problem).
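
A small demonstration of that O(1/h) rounding behaviour (my code, using the forward difference the paragraph describes):

import math

# d/dx sin(x) at x = 1 via (y(x+h) - y(x)) / h: the truncation error falls
# like O(h), but once h is tiny, cancellation makes the error grow like O(1/h).
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = (math.sin(1 + h) - math.sin(1)) / h
    print(h, abs(approx - math.cos(1)))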

Solution 20 - Theory

I guess less than O(1) is not possible. Any time taken by an algorithm is termed O(1). But how about the function below for O(1/n)? (I know there are many variants already presented in these solutions, but I guess they all have some flaws; not major ones, and they explain the concept well.) So here is one, just for the sake of argument:

from time import sleep

def one_by_n(n, C=10):      # n could be a float; C could be any positive number
    if n <= 0.0:            # if the input is actually 0, loop forever
        while True:
            sleep(1)        # or pass
    delta = 0.0001
    itr = delta
    while itr < C / n:      # about C/(n*delta) iterations, shrinking as n grows
        itr += delta

Thus, as n increases, the function will take less and less time. It is also ensured that if the input actually is 0, the function will take forever to return.

One might argue that it will be bounded by the precision of the machine, and thus, since it has an upper bound, it is O(1). But we can bypass that as well, by taking n and C as strings and doing the addition and comparison on strings. The idea is that this lets us make n arbitrarily small, so the function's runtime is not bounded above, even when we ignore n = 0.

I also believe that we can't just say the runtime is O(1/n); we should say something like O(1 + 1/n).

Solution 21 - Theory

OK, I did a bit of thinking about it, and perhaps there exists an algorithm that could follow this general form:

You need to compute the traveling salesman problem for a 1000-node graph; however, you are also given a list of nodes which you cannot visit. As the list of unvisitable nodes grows larger, the problem becomes easier to solve.

Solution 22 - Theory

I see an algorithm that is O(1/n), admittedly only up to an upper bound:

You have a large series of inputs which are changing due to something external to the routine (maybe they reflect hardware, or it could even be some other core in the processor doing it), and you must select a random but valid one.

Now, if it wasn't changing you would simply make a list of items, pick one randomly and get O(1) time. However, the dynamic nature of the data precludes making a list, you simply have to probe randomly and test the validity of the probe. (And note that inherently there is no guarantee the answer is still valid when it's returned. This still could have uses--say, the AI for a unit in a game. It could shoot at a target that dropped out of sight while it was pulling the trigger.)

This has a worst-case performance of infinity but an average case performance that goes down as the data space fills up.
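
A sketch of that probe-and-test loop (names and structure are mine; is_valid stands in for the external, changing data):

import random

def pick_valid(is_valid, universe_size):
    # Expected probes ~ universe_size / (number of valid items),
    # so the average runtime falls as the data space fills up.
    while True:
        candidate = random.randrange(universe_size)
        if is_valid(candidate):
            return candidate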

Solution 23 - Theory

In numerical analysis, approximation algorithms should have sub-constant asymptotic complexity in the approximation tolerance.

class Function
{
    public double[] ApproximateSolution(double tolerance)
    {
        // if the runtime isn't sub-constant in `tolerance`, it's rather useless
        throw new System.NotImplementedException();
    }
}
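
One familiar routine with that property (my example, not the answerer's) is bisection: it needs about log2(range/tolerance) iterations, so its runtime shrinks as the requested tolerance grows:

def bisect_root(f, lo, hi, tolerance):
    # Assumes f(lo) and f(hi) have opposite signs.
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2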

Solution 24 - Theory

It may be possible to construct an algorithm that is O(1/n). One example would be a loop that iterates some multiple of f(n) - n times, where f(n) is some function whose value is guaranteed to be greater than n and for which the limit of f(n) - n as n approaches infinity is zero. The calculation of f(n) would also need to take constant time for all n. I do not know offhand what f(n) would look like, or what application such an algorithm would have. In my opinion, however, such a function could exist, but the resulting algorithm would have no purpose other than to prove the possibility of an algorithm with O(1/n).

Solution 25 - Theory

If the answer is the same regardless of the input data, then you have an O(0) algorithm.

Or in other words, the answer is known before the input data is submitted:

  • the function could be optimised out, so O(0)

Solution 26 - Theory

Big-O notation represents the worst-case scenario for an algorithm, which is not the same thing as its typical runtime. It is simple to prove that an O(1/n) algorithm is an O(1) algorithm. By definition,

T(n) = O(1/n)  =>  T(n) <= c/n for all n >= C > 0, with c a positive constant
               =>  T(n) <= c/C, since 1/n <= 1/C for all n >= C
               =>  T(n) = O(1), since Big-O notation ignores constants (the value of C doesn't matter)

Solution 27 - Theory

Nothing is smaller than O(1). Big-O notation denotes the largest order of complexity for an algorithm.

If an algorithm has a runtime of n^3 + n^2 + n + 5, then it is O(n^3). The lower powers don't matter here at all, because as n -> infinity, n^2 becomes irrelevant compared to n^3.

Likewise, as n -> infinity, O(1/n) becomes irrelevant compared to O(1), hence 3 + O(1/n) is the same as O(1), making O(1) the smallest possible computational complexity.

Solution 28 - Theory

inline void O0Algorithm() {}

Solution 29 - Theory

Here's a simple O(1/n) algorithm. And it even does something interesting!

import random

MAX_VALUE = 100000                     # scale factor: small inputs loop many times

def foo(input_list):
    m = MAX_VALUE // len(input_list)   # iteration count shrinks as the input grows
    output = 0.0
    for _ in range(m):
        output += random.uniform(0, 1)
    return output

O(1/n) is possible, as it describes how the runtime of a function changes given increasing input size. If we use the function 1/n to describe the number of instructions a function executes, there is no requirement that the function take zero instructions for any input size. Rather, for every input size n above some threshold, the number of instructions required is bounded above by a positive constant multiplied by 1/n. As there is no actual n for which 1/n is 0, and the constant is positive, there is no reason the function would be constrained to take 0 or fewer instructions.

Solution 30 - Theory

I don't know about algorithms, but complexities less than O(1) appear in randomized algorithms. Actually, o(1) (little-o) is less than O(1). This kind of complexity usually appears in randomized algorithms. For example, as you said, when the probability of some event is of order 1/n, they denote it with o(1). Or when they want to say that something happens with high probability (e.g. 1 - 1/n), they denote it with 1 - o(1).

Solution 31 - Theory

I don't understand the mathematics, but the concept appears to be looking for a function that takes less time as you add more inputs? In that case, what about:

def f(*args):
    args = list(args)      # tuples are immutable, so copy to a list first
    if len(args) < 2:      # second argument missing
        args.append(10)    # assigning the default costs extra work

This function is quicker when the optional second argument is supplied, because otherwise it has to be assigned. I realise this isn't an equation, but the Wikipedia page says big-O is often applied to computing systems as well.

Solution 32 - Theory

There are sub-linear algorithms. In fact, the Boyer-Moore search algorithm is a very good example of one.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Shalmanese
Solution 1 - Theory: Konrad Rudolph
Solution 2 - Theory: Tobias
Solution 3 - Theory: Adrian
Solution 4 - Theory: sharptooth
Solution 5 - Theory: nonopolarity
Solution 6 - Theory: Ed James
Solution 7 - Theory: Craig O'Connor
Solution 8 - Theory: SpliFF
Solution 9 - Theory: Dave
Solution 10 - Theory: LapTop006
Solution 11 - Theory: Brian Postow
Solution 12 - Theory: Jeff Meatball Yang
Solution 13 - Theory: Larsson
Solution 14 - Theory: Hao Wooi Lim
Solution 15 - Theory: Niklas Rosencrantz
Solution 16 - Theory: vava
Solution 17 - Theory: Shalmanese
Solution 18 - Theory: Casebash
Solution 19 - Theory: Andrew Lei
Solution 20 - Theory: user1953366
Solution 21 - Theory: Shalmanese
Solution 22 - Theory: Loren Pechtel
Solution 23 - Theory: Sam Harwell
Solution 24 - Theory: Greg
Solution 25 - Theory: pro
Solution 26 - Theory: Lawrence Barsanti
Solution 27 - Theory: user112831
Solution 28 - Theory: Stewart
Solution 29 - Theory: ejspencer
Solution 30 - Theory: A. Mashreghi
Solution 31 - Theory: SpliFF
Solution 32 - Theory: Esteban Araya