Negligible (complexity theory)
- A function μ : ℕ → ℝ is negligible if, for every positive integer c and all sufficiently large x (i.e., there exists an Nc > 0 such that for all x > Nc),
  |μ(x)| < 1/x^c.
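For instance, μ(x) = 2^(−x) is negligible: for every fixed c, 2^x eventually outgrows x^c, so 2^(−x) < 1/x^c for all sufficiently large x. Below is a minimal numerical sketch of this bound (the helper names mu and find_Nc are illustrative, not part of the definition):

```python
# Numerical illustration (not a proof): for mu(x) = 2^(-x) and several
# exponents c, find the last x at which mu(x) >= 1/x^c, i.e. a witness
# N_c beyond which the bound holds within the scanned range.

def mu(x):
    return 2.0 ** (-x)

def find_Nc(c, search_limit=10_000):
    last_failure = 0
    for x in range(1, search_limit + 1):
        if mu(x) >= 1.0 / x ** c:
            last_failure = x
    return last_failure

for c in (1, 2, 5, 10):
    print(f"c = {c:2d}: mu(x) < 1/x^c for all scanned x > {find_Nc(c)}")
```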
History
The concept of negligibility traces back to sound models of mathematical analysis. Though the concepts of "continuity" and "infinitesimal" became important in mathematics during Newton and Leibniz's time (1680s), they were not well defined until the late 1810s. The first reasonably rigorous definition of continuity in mathematical analysis was due to Bernard Bolzano, who wrote the modern definition in 1817. Later Cauchy, Weierstrass, and Heine defined continuity as follows (with all numbers in the real number domain ℝ):
- (Continuous function) A function f(x) is continuous at x = x0 if for every positive number ε, there exists a positive number δ such that |x − x0| < δ implies |f(x) − f(x0)| < ε.
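For example, f(x) = 2x is continuous at any x0: given a positive number ε, choosing δ = ε/2 guarantees that |x − x0| < δ implies |f(x) − f(x0)| = 2|x − x0| < ε.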
This classic definition of continuity can be transformed into the definition of negligibility in a few steps, changing one parameter used in the definition at each step. First, for the case x0 = ∞ with f(x0) = 0 (that is, a function vanishing at infinity), we must define the concept of "sufficiently large" and then that of an "infinitesimal function":
- (Sufficiently large) A mathematical proposition about a number x holds for all sufficiently large x if there exists a positive number Nc > 0 such that the proposition is true for all x > Nc.
- (Infinitesimal) A continuous function μ(x) is infinitesimal (as x goes to infinity) if for every positive number ε[1] and all sufficiently large x,
  |μ(x)| < 1/ε.
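For example, μ(x) = 1/x is infinitesimal: given any positive number ε, we have 1/x < 1/ε for all x > ε, so Nε = ε witnesses the bound.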
Next, we change the "positive number ε" to a "positive polynomial poly(x)". Since a number is actually a polynomial of degree 0, the definition of a negligible function is clearly a generalization of the definition of an infinitesimal function, extending the parameter ε from a polynomial of degree 0 to a positive polynomial of arbitrary constant degree.
- (Negligible function) A continuous function μ(x) is negligible if for every positive polynomial poly(·) and all sufficiently large x,
  |μ(x)| < 1/poly(x).
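The generalization is strict: μ(x) = 1/x is infinitesimal but not negligible, since against the positive polynomial poly(x) = x² the required bound 1/x < 1/x² fails for every x > 1, whereas μ(x) = 2^(−x) satisfies the bound against every positive polynomial. A quick numerical sketch of this contrast (the helper names are illustrative, not from the article):

```python
# Contrast an infinitesimal-but-not-negligible function (1/x) with a
# negligible one (2^(-x)) against the polynomial bound 1/x^2.

def inverse(x):       # mu(x) = 1/x: infinitesimal, but not negligible
    return 1.0 / x

def exp_decay(x):     # mu(x) = 2^(-x): negligible
    return 2.0 ** (-x)

for x in (10, 100, 1000):
    bound = 1.0 / x ** 2
    print(f"x = {x:4d}:  1/x < 1/x^2? {inverse(x) < bound}"
          f"   2^-x < 1/x^2? {exp_decay(x) < bound}")
```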
In complexity-based modern cryptography, a security scheme is provably secure if the probability of security failure (e.g., inverting a one-way function, or distinguishing cryptographically strong pseudorandom bits from truly random bits) is negligible in terms of the cryptographic key length x = n. Hence the definition at the top of the page, since a key length n must be a natural number.
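For illustration, an adversary who guesses a uniformly random n-bit key succeeds with probability 2^(−n), which is negligible in n; even q(n) independent guesses, for any positive polynomial q, succeed with probability at most q(n)/2^n, which is still negligible.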
Nevertheless, the general notion of negligibility does not require that the system's input parameter x be the key length n. Indeed, x can be any predetermined system metric, and the corresponding mathematical analysis would reveal hidden analytical behaviors of the system.
Footnote
- ^ Note that writing the bound as the reciprocal 1/ε is not needed if the function domain is the real numbers; we write it this way to show the steps more clearly.
References
- Goldreich, Oded (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. ISBN 0-521-79172-3. Fragments available at the author's web site.
- Sipser, Michael (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Section 10.6.3: One-way functions, pp. 374–376.
- Papadimitriou, Christos (1993). Computational Complexity (1st ed.). Addison-Wesley. ISBN 0-201-53082-1. Section 12.1: One-way functions, pp. 279–298.