This is standard terminology within programming and cryptography. This is the only practical definition to use, as cryptography commonly deals with arbitrarily sized inputs, which of course cannot all be processed in O(1).
A simple example of constant-time within this definition, as well as why constant-time does not mean O(1) in this context, is that of a comparator:
The most common, and fastest, approach is to go through the data in the largest chunks the CPU can handle, returning 1 as soon as a mismatch is found, and 0 at the end, on the assumption that no early termination means no mismatch. However, this reveals information about the values compared, as the time the comparison takes reveals where the mismatch is located.
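The early-exit approach described above might look like this (a sketch in Python for clarity only; real comparators work on CPU-word chunks in C or assembly, and CPython gives no timing guarantees):

```python
def fast_compare(a: bytes, b: bytes) -> int:
    """Return 0 if equal, 1 otherwise -- fast, but variable-time."""
    assert len(a) == len(b)  # lengths assumed equal and public
    for x, y in zip(a, b):
        if x != y:
            return 1  # bail out at the first mismatch...
    return 0  # ...so the running time reveals the mismatch position
```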
For constant time, your goal is to ensure that all data is processed equally regardless of outcome. You do this by effectively making the comparison a-b=c, or a^b=c, where a zero result means equality. In practice, this would be implemented by XOR'ing together chunks of the two inputs, as large as the CPU can handle, and OR'ing each result into a running accumulator that starts at 0. This continues without early termination to the very end, where you simply return the accumulator, which is either 0 for equality, or some arbitrary nonzero value otherwise.
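The XOR/OR accumulator scheme above can be sketched as follows (again in Python for readability; a production version must be written in C or assembly for any real timing guarantee — the stdlib's `hmac.compare_digest` is the production equivalent):

```python
def ct_compare(a: bytes, b: bytes) -> int:
    """Return 0 if a == b, nonzero otherwise, touching every byte."""
    assert len(a) == len(b)  # lengths are assumed public, as noted above
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y  # XOR each chunk, OR the result into the accumulator
    return acc  # 0 means equal; any nonzero value means a mismatch somewhere
```

Note that the loop always runs to the end, so the work done does not depend on where (or whether) a mismatch occurs.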
Both of course leak the length of the values compared, but the lengths in these cases are commonly known to the adversary, making length protection less relevant.
> This is the only practical definition to use, as cryptography commonly deals with arbitrarily sized inputs, which can of course not all run in O(1).
It doesn't necessarily follow that any algorithm on an arbitrarily-sized input can't run in O(1). For a trivial example, there is an algorithm for "determine if the input number is odd" which is O(1).
> for a trivial example there is an algorithm for "determine if the input number is odd" which is O(1).
I would argue that this particular example cannot be considered a case of arbitrarily-sized input: There is only a single meaningful bit of information, which is the single bit accessed.
However, if you insist, we can expand the claim to clarify that the input is meant to include arbitrarily many meaningful bits of information, which require processing and cannot simply be ignored.
That's not how this works. What you're doing here is defining away the scope of the problem to make O(1) complexity analysis redundant. A bound of O(1) conventionally means that we can exploit some structural component of the input and the given problem to find a solution independent of the input size. If you redefine input size to the more narrow sense of inputs which don't have some sort of structural feature like that, then yes of course nothing is constant time. But then you're just shifting the difficulty of the problem around without much of a gain.
I see how you may think that, but I would argue that I am not redefining anything, as things turn nonsensical without this assumption.
It is not an input if it does not affect your output (I am sure a better formal definition exists—I usually hear "meaningful", hence my use of that). Without this restriction, any input could be arbitrarily padded, inflating input sizes and making an algorithm's complexity appear lower.
I find the even/odd example to be an exceptionally clear case of an algorithm that takes a fixed 1-bit input: Its implementation on a little-endian machine does not know the bounds of its input, only the location of the least significant byte (due to byte-addressing).
Without a defined bound, such an implementation must be assumed either to take a single byte as input (where the byte-sized read is an implementation detail, even though the algorithm only needs one bit), or to take the full system memory starting from the specified location, regardless of contents (since the word "arbitrary" means the input is allowed not to fit in registers).
However, I admit that I am now entering deeper theoretical waters, with regard to definition details, than I am normally comfortable with. I do not, however, find any other definition that lines up with practice.
> I see how you may think that, but I would argue that I am not redefining anything, as things turn nonsensical without this assumption.
The point I'm making is that this statement reduces to the claim that defining an algorithm's worst case time complexity as O(1) or constant time is nonsensical.
Do you disagree that adding two numbers is a constant time operation?
> Do you disagree that adding two numbers is a constant time operation?
It is constant time by the crypto-programming definition in both theory and practice, and O(1) only in theory (bigints are a pain).
I did not try to claim that O(1) is nonsensical. Rather, that certain O(1) variable-input algorithms are in fact simply fixed-input algorithms, due to not reading their full input.
I also find these "non-sensical" O(1) algorithms to be outliers.
Do you disagree that adding two numbers is a constant time operation?
I do. A general algorithm for adding two numbers, at least one represented in the conventional way as a series of digits in some base, has a lower bound of Ω(log n).
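To illustrate the point, here is a hypothetical sketch of schoolbook addition over little-endian digit lists; the loop runs once per digit, so the work is proportional to the digit count, i.e. the log of the value:

```python
def add_digits(a, b):
    """Add two little-endian base-10 digit lists; work ~ max(len(a), len(b))."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % 10)  # one digit of the result per iteration
        carry = s // 10
    if carry:
        out.append(carry)
    return out
```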
I think it would be tough to find a suitable definition of "meaningful". When searching for an element in an ordered list, how many items are meaningful? One if it's present, zero otherwise? Always log(n)? Always n?
If "meaningful" is a non-trivial property, as difficult to determine as an algorithmic lower bound, then it's not terribly useful.
And it also leaves us with no way to talk about non-constant time algorithms for determining whether a number is odd. Can we say one runs in exponential time when we only talk about "meaningful" bits of input?
> I think it would be tough to find a suitable definition of "meaningful".
I do not believe it is tough to find a suitable practical definition of meaningful: If a piece of information has no effect on the output of the algorithm, regardless of its value, then it is not meaningful.
Examples of meaningful values would be any value in a list being sorted. Non-meaningful values would be any bit other than the lowest bit when determining whether a number is even or odd.
I find it to be an incredibly trivial definition when considering algorithms.
> If "meaningful" is a non-trivial property, as difficult to determine as an algorithmic lower bound, then it's not terribly useful.
I find the case where the full input is not meaningful to be an outlier that does not at all affect how we normally deal with algorithms and bounds.
However, when the two do not match, one can start discussing what your input actually is. I would, for example, argue that an even/odd number checker only ever takes a single bit as input. It may consume more due to architectural limitations around bit accesses on modern architectures, but that is not part of the core algorithm.
With this definition, different parts of the input might be "meaningful" for two algorithms that solve the same problem. This makes the size of the "meaningful" part of the input ill suited for comparing efficiency.
(It also makes the statement "no program can run faster than linear time in the size of the meaningful input" true directly by definition, which makes it a pretty boring statement...)
For an equality operation, all bits are meaningful, as there are input configurations in which any bit can lead to a change in output.
However, I do see your point in that whether a bit flip in input leads to a change in output depends on the particular input. My counter to that is that it is not the specific input, but the potential inputs that matter.
> This is standard terminology within programming and cryptography
Within cryptography, yes, but in programming in general, absolutely not.
The cryptographic definition of constant-time postdates the algorithmic one by decades, I think. It's a shame the cryptographers didn't pick a different term.
> The cryptographic definition of constant-time postdates the algorithmic one by decades, i think.
The normal usage of the phrase "constant time" predates CS by centuries, and it is much closer in usage to the crypto version than the complexity version.
Crypto uses the term "constant time" to mean same time for any input, while the general CS community uses O(1) to mean bounded time and also often uses complexity to hide log N terms. It also allows variable time, as long as it's bounded (and that sometimes allows hiding terms like log N or log log N...).
A good example is that most algorithms treat addition as a constant time operation, but, for example, addition of indices for lookups is not constant time when inputs are unbounded. So if you compute an index by addition and ignore the log N time to do the addition, then you fudged. Yet this is commonplace in complexity theory, which models many basic operations as constant time when they are not for unbounded inputs.
For example, a "constant time" hash table operation, which often indexes arbitrarily large indices with addition, ignores the log N needed to add arbitrarily large indices.
Thus crypto uses the phrase in the more precise, centuries old usage.
> Within cryptography, yes, but in programming in general, absolutely not.
I cannot think of any algorithms with arbitrary sized inputs that have truly constant execution time. The reason I do not find this possible with classical computers is that, no matter what you do, reading data from memory is an unavoidably variable-time process.
Do note that arbitrary sized inputs here imply that the input is meaningful and must all be processed. I do not see algorithms that do not process their inputs as candidates here, such as the elsewhere stated example of the even/odd test, which reads just a single bit. I consider an algorithm that only reads a fixed amount of information from its input to be a fixed-input algorithm, regardless of the "full" size of its input.
Of course, if you place an upper bound on the input, then you can pad in various ways, or possibly have algorithms whose execution time is inversely proportional to input size, thus balancing itself out. However, placing an upper bound also means violating the "arbitrary" requirement entirely, thus disqualifying the algorithm.
Do note that I'd actually be quite interested if you have examples of such algorithms.
> I cannot think of any algorithms with arbitrary sized inputs that have truly constant execution time.
With respect, I think you may misunderstand the meaning of "constant time" in the sense of complexity theory, i.e. O(1). Accessing an element in an array of size n is a constant time operation. See: https://stackoverflow.com/questions/7297916/why-does-accessi...
Wouldn’t searching for an element in an arbitrary array of size n happen in O(n) time, while only accessing it is considered O(1)? I could understand the search case for something like a hash table being O(1) but does that also apply to your standard array?
So binary search is not log n time because it only reads log n values from the input? To know which parts are not read, you basically have to run the algorithm. I find your definition unhelpful.
> A simple example of constant-time within this definition, as well as why constant-time does not mean O(1) in this context, is that of a comparator:
This is incorrect. The paper explicitly uses "constant time" in the sense of O(1). You can see this listed throughout the paper, in various complexity analyses, beginning from Section 2.
> This is standard terminology within programming and cryptography. This is the only practical definition to use, as cryptography commonly deals with arbitrarily sized inputs, which can of course not all run in O(1).
How about 'consistent time' or 'uniform time' for non-O(1) uses?
It might be standard notation, but not only is O(1) misleading as it misrepresents the dependency on the problem size, it is also the wrong symbol as it hides the exact thing it's trying to convey.