
It’s not click bait. It’s standard terminology.


To clarify: it's not "constant time" in the sense of having O(1) time complexity with regard to the size of the inputs, which is what most people mean by "constant time". That is obviously impossible here: there's never going to be a GCD algorithm that works as fast on 1,000,000,000-bit integers as on 100-bit integers.

It's "constant time" in the cryptographic sense, that the time to run it can't be used as a side-channel to figure out what the inputs are. A great result to be sure, but the terminology is undoubtedly confusing.
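The distinction can be sketched in code (an illustrative Python example, not from the thread or the paper): an equality check that exits on the first mismatch leaks, through its running time, how many leading bytes match, while a check that always touches every byte takes time that depends only on the (public) length.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: running time reveals how many leading bytes match,
    # which an attacker can exploit as a timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Always scans every byte; running time depends only on the length,
    # not on where (or whether) the inputs differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

(In practice you'd use a vetted library routine such as Python's `hmac.compare_digest` rather than rolling your own; the hand-written version is only to make the idea visible.)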


With any algorithm complexity analysis you have to define what the inputs are considered to be. In cryptography, algorithms are designed to be constant-time with respect to the non-secret inputs. The secret inputs (the ones you are trying to protect) usually do not vary from one call to the next (e.g., long-term private keys), so they can be treated as constant.

So while the terminology seems confusing, it’s not actually different. It’s just a different choice of “input” compared to typical algorithm analysis.
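A common building block in this style of algorithm (a sketch under my own naming, not taken from the paper) is the branch-free conditional swap: instead of an `if` on a secret bit, a mask is derived from the bit arithmetically, so the same instructions run either way.

```python
MASK = (1 << 64) - 1  # assume 64-bit words for this sketch

def cswap(bit: int, x: int, y: int) -> tuple[int, int]:
    # Swap x and y iff bit == 1, with no branch on the secret bit.
    # mask is all-ones when bit == 1, all-zeros when bit == 0.
    mask = (-bit) & MASK
    t = mask & (x ^ y)
    return x ^ t, y ^ t
```

At this level of abstraction the running time is independent of `bit`, which is exactly the property meant by "constant time with respect to the secret inputs".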


> it's not "constant time" in the sense of having O(1) time complexity with regards to the size of the inputs

Yes it is. The presented algorithm is constant time in the exponent, i.e. 2 + O(1), where this exponent is not impacted by the size of the inputs n and c. Much like any other complexity analysis, an algorithm is O(1) as long as O(1) is asymptotically the "largest part" of the running time. As the size of n increases, the exponent 2+O(1) increasingly dominates execution time.


"Constant time in the exponent" is nonsense, I'm afraid.

A bounded exponent of n would be the same thing as "polynomial time", but the thing that's bounded (and indeed arbitrarily close to 2 for large n) is the exponent of log n.

The running time of the algorithm presented in this paper is n (log n)^(2+o(1)). This ...

... is not constant; it increases with n, a bit faster than linearly.

... has o(1), not O(1), in the exponent; the two mean different things. O(1) means "bounded", o(1) means "tends to zero". The claim isn't that the running time is <= n times polynomial(log n) but that it's <= n times "at most approximately a quadratic polynomial in log n".
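The O(1)-versus-o(1) distinction can be made concrete with two toy sequences (my own illustration, not from the thread): one that is bounded but does not vanish, and one that tends to zero.

```python
import math

def bounded_not_vanishing(k: int) -> float:
    # Stays in [1, 3] forever: O(1), but NOT o(1), since it never tends to 0.
    return 2 + math.sin(k)

def vanishing(k: int) -> float:
    # Tends to 0 as k grows: o(1) (and therefore also O(1)).
    return 1 / math.log(k + 2)
```

An exponent of 2 + O(1) would only promise "2 plus something bounded"; the paper's 2 + o(1) promises that the extra term shrinks toward zero as n grows.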

... doesn't in fact depend mostly on that exponent; the most important factor is the n, not the (log n)^(2+o(1)). If that 2 were a 100, the n factor would still (asymptotically) matter more.

For instance, suppose n=2^100 and our logs are to base 2. Then the running time of this algorithm is approximately some constant times 2^100 (that's the n factor) times 100^2 (that's the factor with log n in it). 2^100 is much, much, much bigger than 100^2.
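The arithmetic above is easy to check directly (ignoring the o(1) term and any constant factors):

```python
# Compare the two factors of the bound n * (log2 n)^2 at n = 2^100.
n = 2 ** 100          # the n factor
log_factor = 100 ** 2  # the (log2 n)^2 factor, since log2(2^100) = 100

print(n)                # 1267650600228229401496703205376
print(log_factor)       # 10000
print(n // log_factor)  # the n factor is ~1.27e26 times larger
```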


I disagree. "Constant time" means O(1), which this isn't.


Disagreeing with a fact doesn’t make a lot of sense. Maybe you meant to say “I didn’t know this was standard terminology” or “I think it’s a confusing choice of word anyway”.



