How to solve supposedly intractable problems

Contrary to semi-educated belief, NP-complete problems are not necessarily intractable. Often such problems are intractable, but not always. If a problem is NP-complete, this means that there is no polynomial-time algorithm for solving the problem

  1. for the worst case,
  2. as the problem size grows,
  3. finding exact solutions,
  4. with certainty,
  5. as far as anyone knows.

One way out of this (#5) is to show how to solve NP-complete problems efficiently. That may not be possible, and it hasn’t been possible so far, so we’ll not worry about this one. But that still leaves four options:

  1. special cases
  2. small problem sizes
  3. approximate solutions
  4. probabilistic solutions.

For example, the Traveling Salesman problem is NP-complete. But it’s easy to find solutions for a small number of cities by brute force. And even for a moderate number of cities, people routinely find solutions; there’s even an iPhone app for finding solutions. For larger problems, there are algorithms that yield near-optimal solutions with high probability.
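
To make the brute-force option concrete, here is a minimal Python sketch; the city coordinates are made up for illustration, and the factorial search is only feasible for a handful of cities.

    from itertools import permutations
    from math import dist

    # Hypothetical city coordinates, for illustration only.
    cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 6), (4, 5)]

    def tour_length(order):
        # Length of the closed tour that visits the cities in the given order.
        return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Fix city 0 as the start and try every ordering of the rest: O((n-1)!) work.
    best = min(((0,) + p for p in permutations(range(1, len(cities)))),
               key=tour_length)
    print(best, tour_length(best))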

Similar remarks hold for other kinds of problems deemed intractable. For example, some say you can’t find the roots of a 5th degree polynomial. Isn’t that a theorem? No. There’s a theorem that says you cannot find

  1. exact solutions,
  2. in general,
  3. using a finite sequence of certain mathematical operations.

So there are three loopholes here:

  1. approximate solutions,
  2. special cases, and
  3. more general operations.

If you only want to find solutions to 100 decimal places, for example, that’s doable.
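
For instance, an arbitrary-precision root finder handles this directly. Here is a sketch using the mpmath library; the particular quintic is just an illustration:

    from mpmath import mp, polyroots

    mp.dps = 110  # carry a bit more precision than the 100 digits we want

    # Coefficients of x^5 - x - 1, highest degree first; any quintic would do.
    roots = polyroots([1, 0, 0, 0, -1, -1], maxsteps=100, extraprec=50)

    for r in roots:
        print(mp.nstr(r, 100))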

Most integrals cannot be computed

  1. by a typical calculus student,
  2. exactly,
  3. with certainty,
  4. in terms of elementary functions.

But often integrals can be computed by some combination of

  1. more sophisticated exact techniques,
  2. numerical approximation,
  3. randomized algorithms, and
  4. a broader class of functions.

For example, some say that the integral of exp(-x²) cannot be computed. Of course it can, though there is not a finite combination of elementary functions that gives the exact integral. You can compute the integral exactly in terms of the error function erf(x), or compute it numerically to any desired accuracy. You could even approximate the integral with a Monte Carlo method, though there’s no point in using that approach here. For many high-dimensional integration problems, some sort of Monte Carlo is the only practical approach.
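
Here is a sketch of all three approaches for the integral of exp(-x²) over [0, 1], using NumPy and SciPy; the interval and sample count are arbitrary choices for illustration:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import erf

    # Exact value in terms of the error function:
    # the integral of exp(-x^2) from 0 to 1 equals (sqrt(pi)/2) * erf(1).
    exact = np.sqrt(np.pi) / 2 * erf(1.0)

    # Numerical quadrature to near machine precision.
    numeric, abs_err = quad(lambda x: np.exp(-x**2), 0.0, 1.0)

    # Monte Carlo: average the integrand at uniform random points in [0, 1].
    # Pointless in one dimension, but the same idea scales to high dimensions.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 100_000)
    monte_carlo = np.exp(-x**2).mean()

    print(exact, numeric, monte_carlo)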

Maybe you need to solve a problem for which none of the loopholes given here apply. That’s often the case. I’m not saying that there’s always a way out, only that sometimes there is. Sometimes problems thought to be intractable are not.

21 thoughts on “How to solve supposedly intractable problems”

  1. What is the definition of “elementary function” anyway? Why is sin() elementary but erf() not?

  2. I’ve wondered about this for a while. Saying “ooh, problem X is NP hard, but problem Y is solvable in polynomial time” doesn’t really tell me much, if the problems I’m interested in are problems with fixed inputs. Can we say much about these problems apart from “X took 12 seconds to run with this implementation, Y took less than 1…”?

    Obviously, if I’m interested in arbitrary inputs, or scalability of my program, then complexity classes start to matter. But is there any theory about “difficulty” of programs that isn’t about “difficulty in the limit”?

  3. Delmania: Sorry, I don’t know that reference. Is it a book by those two authors?

    Steve: This is mostly a historical classification. Sines were known from ancient times, but I believe de Moivre was the first to identify something like erf(x). (More on that here.) I imagine the name “erf” came later, but de Moivre provided the motivation.

    On the other hand, history isn’t entirely arbitrary. Sines were discovered first because they have more tangible and more frequent application. And in general “elementary” functions satisfy simpler relationships than “advanced” functions.

  4. John: http://en.wikipedia.org/wiki/Computers_and_Intractability
    Within the realm of complexity theorists, it’s pretty well known, though a bit outdated, since if I recall correctly, only 2 of the 12 open problems in the book are left open.

    Seamus: Actually, most work in complexity theory on NP-complete problems is pretty much answering the questions you asked. Once a problem has been defined as intractable, people start looking for various instances of the input that can be solved, and also try to find lower bounds on the problem. NP-complete just means you can’t find a nice general solution for arbitrary input.

  5. With regard to option (4), computing answers to NP-complete problems with less than certainty: it is believed that it is impossible to find answers to NP-complete problems with any probability significantly different from chance. I believe the conjecture that NP is not a subset of BPP (the class of problems which can be solved in polynomial time with significant probability) is held about as strongly as NP != P.

  6. David: I believe that analysis applies to finding exact results probabilistically. If you weaken the requirement of exactness and certainty simultaneously, i.e. finding a good approximation with high probability, things may be different.

    One note on what approximation means in the context of the traveling salesman problem. You can find a route that has near optimal length. This does not necessarily mean a route that is near the optimal route. It may be a very different route that has nearly the length of the optimal route.

  7. Steve, here’s Wikipedia’s definition: In mathematics, an elementary function is a function of one variable built from a finite number of exponentials, logarithms, constants, and nth roots through composition and combinations using the four elementary operations (+ – × ÷).

  8. John, the story for 5th degree polynomials is not exactly the way you describe – it’s not just “in general”. There are explicit quintics that can’t be solved by radicals (the “finite sequence of certain mathematical operations”).

    Here’s one: q(x) = x^5-10x+2.

  9. jonathan: Yes, there are specific quintics that cannot be solved by radicals. My point is that there are also quintics that can be solved by radicals, such as (x − 2)^5.

  10. Now, now, John, why the timid circumlocution?

    It appears that the problems you are talking about are from optimization, i.e., for functions f and g find x to minimize f(x) subject to g(x) = 0, e.g., for positive integers m and n and an m x n matrix A, find x to minimize z = cx subject to Ax = b where the components of x are integers, that is ‘integer linear programming’ (ILP). These optimization problems came heavily from operations research. Of course, in statistics, in principle, applying the Neyman-Pearson result can result in a knapsack problem which is in NP-complete.
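
    To make the knapsack example concrete, here is a small sketch of the classic 0-1 knapsack solved by dynamic programming; the values, weights, and capacity are made up for illustration:

        def knapsack(values, weights, capacity):
            # 0-1 knapsack by dynamic programming: best[c] is the best total
            # value achievable with total weight at most c.
            best = [0] * (capacity + 1)
            for v, w in zip(values, weights):
                # Scan capacities downward so each item is used at most once.
                for c in range(capacity, w - 1, -1):
                    best[c] = max(best[c], best[c - w] + v)
            return best[capacity]

        # NP-hard in general, but this tiny instance (answer: 220) is trivial,
        # and the DP is pseudo-polynomial in the capacity.
        print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))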

    Of course, as in (TeX markup):

    Michael R. Garey and David S. Johnson, {\it Computers and Intractability: A Guide to the Theory of NP-Completeness,\/} ISBN 0-7167-1045-5, W. H. Freeman, San Francisco, 1979.

    not all NP-complete problems are optimization problems. E.g., there is the problem of ‘satisfiability’ (SAT) where we take a Boolean expression in some variables and ask if there are values of the variables to make the expression true.

    So, there is a point: For SAT, an approximate solution is of low interest. But for the optimization problems, a feasible solution that is nearly optimal is nearly always just fine in practice. E.g., such optimization problems in classic micro-economics come from business operations where saving $10 million, all but the last 10 cents, in reasonable computer time, nearly always in practice, is JUST FINE. Of course this fact stands in strong contrast with a cartoon early in G&J which begins to show that much of the heat about NP-complete is a flim-flam, fraud, scam.

    For how to find feasible, nearly optimal solutions to ILP problems, of course there is a huge literature, e.g.,

    George L. Nemhauser and Laurence A. Wolsey, {\it Integer and Combinatorial Optimization,\/} ISBN 0-471-35943-2, John Wiley & Sons, Inc., New York, 1999.

    In particular, for your

    “Often such problems are intractable, but not always.”

    this is far off the mark! For NP-complete optimization problems, in practice the fact that the problems are in principle NP-complete is nearly irrelevant and not an indication of particular difficulty at all; in practice, NP-complete optimization problems are nearly always quite doable and rarely “intractable” in any very important sense.

    For the traveling salesman problem in Euclidean spaces, there is a much better result: Just find a minimum spanning tree of the cities. Then do a depth-first traversal of the tree where you go ‘cross country’ (that is, don’t follow the spanning tree) to avoid revisiting cities already visited, and call that traversal your ‘traveling salesman tour’. Then (from memory) for each a > 0 and b in (0,1), as the number of cities n grows, for cities distributed with a continuous density, this approximate solution will be within a of the shortest distance with probability greater than b. That is, as n grows, this simple minimum spanning tree solution becomes nearly optimal nearly always!

    Details are in, say,

    Richard M. Karp, “The Probabilistic Analysis of Some Combinatorial Search Algorithms,” pages 1-19, {\it Algorithms and Complexity: New Directions and Recent Results,\/} edited by J. F. Traub, Academic Press, New York, 1976.
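
    A minimal sketch of that spanning-tree heuristic, using the networkx library; the random cities are purely illustrative and nothing about the probabilistic guarantee is verified here:

        import itertools
        import math
        import random

        import networkx as nx

        # Random cities in the unit square, for illustration only.
        random.seed(0)
        cities = [(random.random(), random.random()) for _ in range(50)]

        # Complete graph with Euclidean edge weights.
        G = nx.Graph()
        for i, j in itertools.combinations(range(len(cities)), 2):
            G.add_edge(i, j, weight=math.dist(cities[i], cities[j]))

        # Minimum spanning tree, then a depth-first preorder walk of the tree.
        # Skipping already-visited cities ("going cross country") is exactly
        # what a preorder listing of the tree's nodes gives.
        mst = nx.minimum_spanning_tree(G)
        tour = list(nx.dfs_preorder_nodes(mst, source=0))

        length = sum(math.dist(cities[tour[k]], cities[tour[(k + 1) % len(tour)]])
                     for k in range(len(tour)))
        print(length)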

    The class NP-complete is philosophically curious but so far not of much importance in practice. For a shocking example, the great headline news about NP-complete would be an algorithm that showed that P = NP. Supposedly the heavens would open and bring nirvana to the earth. A bit of an exaggeration, and we can see why: Consider linear programming. There is the simplex algorithm of G. Dantzig and others, and a classic paper of Klee and Minty showed that in the worst case it is exponential, not polynomial. However, long experience shows that on a linear programming problem with m constraints, the simplex algorithm runs in about 3m iterations and, thus, is low degree polynomial where, indeed, mostly we get to ignore the number of variables n.

    Well, from Khachiyan, there is a polynomial algorithm for linear programming, but it is so slow that whenever it is faster than simplex both are too slow to be practical.

    Net, for the first case of a complicated problem for which we found a polynomial algorithm instead of an exponential one, in practice the exponential algorithm is terrific and the polynomial one, awful.

    So, even if we find an algorithm that shows P = NP, except for a philosophical eureka, we should keep the Champagne corked for at least a while.

    Really, the problem P versus NP is not so much to help the real world but to set up a problem that will let academic researchers have some very long lasting jobs. That fact should not let us conclude that the corresponding, practical optimization problems are meaningfully ‘intractable’.

  11. Note that sin can be constructed from elementary operations (involving complex numbers): sin(x) = (Exp(i * x) - Exp(-i * x)) / (2 * i). It does make some sense to consider it elementary.
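
    A quick numerical check of that identity, purely for illustration:

        import cmath
        import math

        x = 0.7  # arbitrary test point
        lhs = math.sin(x)
        rhs = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j
        print(lhs, rhs.real, abs(lhs - rhs))  # agrees to floating-point precision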

  12. Hi John, The enumeration (1.2.3. …) does not show up in your feed.

    @ny_entrepreneur, If I remember correctly, there exists a beautiful theorem that says every NP-problem can be translated to/replaced with each other. So basically, all NP-problems are some variation of SAT!! So if one can propose a polynomial solution for SAT, he/she would have solved all other classes of NP-problems as well (including optimizations and cryptography; the latter usually ends up as a worst-case SAT clause). Simply write those quantifiers as an extremely big logic clause and give it to that magical algorithm.

    BTW, P=NP is relatively a very “young” problem. Just think how long it took to solve Fermat’s problem, or how no one has any idea for prime numbers, despite the fact that they have been around for centuries.

  13. Adel,

    “If I remember correctly”, yes, of course you do.

    On your “there exists a beautiful theorem”, well not really one theorem but at least hundreds and by now at least some thousands. That is, NP-complete is a collection of problems, and for most of the problems in NP-complete there has to be a theorem specific to that problem showing that the problem is in NP-complete.

    Yes, as you claim, all the problems in NP-complete are equivalent in the narrow sense of the question P = NP.

    But this thread is heavily about just optimization problems. And, yes, integer linear programming (ILP) problems are in NP-complete and, really, are a good representative of a very large fraction of the practical NP-complete optimization problems. E.g., the knapsack problem is ILP.

    So, yes, if we had an algorithm that showed that P = NP, then that algorithm would be a polynomial algorithm for solving ILP problems, e.g., convert the problems maybe to SAT problems.

    But the main concern of this thread is optimization problems, e.g., ILP, from operations research, not the logic problem SAT. Then, as I tried to explain, approximately optimal solutions for optimization problems can be just fine in practice, but approximate solutions for SAT are not of interest. To be more clear, in the sense of P = NP, ILP and SAT are equivalent but in practice they are very different.

    Then I went on to point out that in practice, often, maybe usually, we can do well enough finding approximately optimal solutions to ILP problems now. So, for practical ILP, we don’t have to wait for an algorithm that shows that P = NP, if such an algorithm can exist.

    But more generally I tried to explain that the question P = NP is one heck of an indirect, and difficult, approach to optimization problems, replacing a possibly difficult problem with so far an impossible one.

    The big point for practice is that the question P = NP is asking for the sky and everything in it: polynomial performance, for exactly optimal solutions, on worst case problems, guaranteed, all the time, that is, nothing probabilistic. If we back off a little on exactly optimal, worst case, and guaranteed, not probabilistic, then we enter a huge, very different, much nicer world of practice.

    Net, so far the question P = NP is asking for too much, much more than is needed in practice for optimization problems, and, for practical optimization, is mostly just a philosophical curiosity.

    Further I tried to explain that our experience with polynomial algorithms is not good: So far the evidence is that for challenging, practical optimization problems, an exponential algorithm runs rings around a polynomial algorithm. Net, by a huge margin, polynomial doesn’t have to mean fast or the fastest in practice.

    Really, the criterion of polynomial looked good for early computer science problems in sorting and searching 50 years ago. Then the criterion looked good in, say, matching (say, work of Ford and Fulkerson and Edmonds; Edmonds is the person who said that a polynomial algorithm was a ‘good’ algorithm). Net, so far the criterion of polynomial on worst case problems, etc. is a bit too simplistic, gives us theoretical problems where we are stuck, asks for more than we need in practical optimization, and sets aside work that might make progress in practical optimization problems.

    Net, we MUST not assume that, because the question P = NP is difficult, optimization problems are difficult. And we should not look to an algorithm that would show that P = NP as the main hope for progress in solving practical optimization problems. For practical optimization, f’get about P = NP.

    My points here are important: E.g., once I was scheduling a fleet of airplanes and met with a well known mathematician. Right away he said “ah, the traveling salesman problem”. NO! NOT “the traveling salesman problem”, and NOT the question P = NP. Instead, my problem was scheduling airplanes, and in practice that is QUITE different. My work made good and valuable progress on scheduling the airplanes.

  14. @ny_entrepreneur, Thank you for your thorough comments. I agree with you that for any specific problem (whether optimization or not) a tailored approximate solution would be far more practical than sitting around and waiting for a general solution to SAT. Still, the idea that there MAY exist a universal solution to all of our problems is very amusing.

    @john, You’re right, your feed is correct (http://feeds.feedburner.com/theendeavour), it’s probably my reeder.

  15. Alejandro,

    “A small correction. The Traveling Salesman problem is NP-hard, not NP-complete”

    Good. Thanks. I was typing too fast.

    Adel,

    Approximation is fine in practice for nearly all optimization problems but not for something like SAT. There are claims that an algorithm that shows that P = NP would ‘trivialize’ all of pure math, e.g., settle Fermat’s last theorem right away if A. Wiles had not, and for this likely approximation wouldn’t work.

    “Still, the idea that there MAY exist a universal solution to all of our problems is very amusing.” Yes, philosophically it’s an amazing question. And it would be good to have some theoretical tools to get an answer; my guess is that to solve the problem we need some new ideas and that that is wide open. But, still, there is the experience that, as we saw in linear programming, a polynomial algorithm need not be useful in practice. It’s like do you want a car that will ALWAYS go to the grocery store but goes only 0.0001 MPH or a car that will go to the grocery store whenever there is less than two feet of snow on the roads and, then, can always get you there at at least 30 MPH? Then once we settle P = NP, we will need to see how the answer fits with 1000 computers each with 10 processors each with 1000 cores and with quantum computers. Then we will ask, did God have to settle P = NP, and if so how and if not why not?

  16. Alejandro and ny_entrepreneur, in fact most optimisation problems are NP-hard, not NP-complete. The Hamiltonian cycle problem, a special case of the TSP, is NP-complete. But I’m often annoyed by the ubiquitous allegation, in articles on some optimisation application, that some sort of meta-heuristics will be used due to the NP-hardness of the problem. As stated by ny_entrepreneur, most problems may be solved in practice to optimality or near-optimality in the average case. Worst-case performance seems to me to be more of a theoretical exercise.

  17. Magnus RC Wootton

    One way to look at it: an intractable problem solved by brute force gets the “accidental solution,” while a tractable problem gets the “purposeful solution.”
