NP vs small P

The P vs NP conjecture has always seemed a little odd to me. Or rather, the interpretation of the conjecture that any problem in P is tractable. How reassuring is it to know a problem can be solved in time less than some polynomial function of the size of the input if that polynomial has extremely high degree?
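To make that concrete, here's a quick back-of-the-envelope comparison (my illustration, not from the post): pit a hypothetical O(n^30) polynomial time algorithm against a brute-force O(2^n) one. The polynomial algorithm doesn't pull ahead until n is around 237, so at every practically interesting input size the "intractable" exponential algorithm wins.

    # Illustration (not from the post): a polynomial bound with a very high
    # degree can lose to an exponential bound at all practical input sizes.
    for n in [10, 50, 100, 200, 235, 240]:
        poly = n ** 30   # hypothetical O(n^30) "polynomial time" algorithm
        expo = 2 ** n    # brute-force O(2^n) algorithm
        print(f"n = {n:3d}: n^30 = {poly:.1e}, 2^n = {expo:.1e}")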

But this evening I ran across a fascinating comment by Avi Wigderson [1] that puts the P vs NP conjecture in a different light:

Very few known polynomial time algorithms for natural problems have exponents above 3 or 4 (even though at discovery the initial exponent may have been 30 or 40).

Problems in P may be more tractable in practice than in (current) theory. Wigderson’s comment suggests that if you can find any polynomial time algorithm, your chances of also finding a low-degree polynomial time algorithm are good. It seems there’s something deep going on here that would be hard to formalize.


[1] From his new book Mathematics and Computation.

2 thoughts on “NP vs small P”

  1. I would guess that exponents above 4 are rare because the bulk of interesting geometry happens in 3 and 4 dimensions. So you get some magic in those dimensions that higher ones can’t often exceed. Formalizing that would probably take some absurd amount of time, though.

  2. If you have a problem where divide-and-conquer works, then the exponent of the overall algorithm is determined by how much work it is to combine partial solutions. That is, if a problem of size N can be split into two subproblems of size N/2, and assembling the solutions of the two subproblems into an overall solution is O(N^k), then the overall algorithm can be done in O(N^k log N). I conjecture that most problems in P are solvable by divide-and-conquer algorithms where the necessary value of k is no more than 3 or so.
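A minimal sketch of the pattern the second comment describes, using merge sort as the textbook instance (my example, not the commenter's): the problem splits into two halves, and the combine step, merging two sorted lists, costs O(N), i.e. k = 1, giving the familiar O(N log N) total.

    # Merge sort: split a problem of size N into two subproblems of size N/2,
    # then combine the sorted halves in O(N) time (k = 1 in the comment's
    # notation), for an overall O(N log N) running time.
    def merge_sort(xs):
        if len(xs) <= 1:                 # base case: already sorted
            return xs
        mid = len(xs) // 2
        left = merge_sort(xs[:mid])      # solve left half
        right = merge_sort(xs[mid:])     # solve right half
        merged, i, j = [], 0, 0          # combine step: O(N) merge
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]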
