Ten surprises from numerical linear algebra

Here are ten things about numerical linear algebra that you may find surprising if you’re not familiar with the field.

  1. Numerical linear algebra applies very advanced mathematics to solve problems that can be stated with high school mathematics.
  2. Practical applications often require solving enormous systems of equations, with millions or even billions of variables.
  3. The heart of Google is an enormous linear algebra problem. PageRank is essentially an eigenvalue problem.
  4. The efficiency of solving very large systems of equations has benefited at least as much from advances in algorithms as from Moore’s law.
  5. Many practical problems — optimization, differential equations, signal processing, etc. — boil down to solving linear systems, even when the original problems are non-linear. Finite element software, for example, spends nearly all its time solving linear equations.
  6. A system of a million equations can sometimes be solved on an ordinary PC in under a millisecond, depending on the structure of the equations.
  7. Iterative methods, methods that in theory require an infinite number of steps to solve a problem, are often faster and more accurate than direct methods, methods that in theory produce an exact answer in a finite number of steps.
  8. There are many theorems bounding the error in solutions produced on real computers. That is, the theorems don’t just bound the error of hypothetical calculations carried out in exact arithmetic; they bound the error of floating point arithmetic as actually carried out on computer hardware.
  9. It is hardly ever necessary to compute the inverse of a matrix; see the sketch after this list.
  10. There is remarkably mature software for numerical linear algebra. Brilliant people have worked on this software for many years.
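
To make items 7 and 9 concrete, here is a minimal sketch in Python (assuming NumPy and SciPy, which wrap the LAPACK/BLAS software of item 10): a direct solve that never forms the inverse, followed by an iterative conjugate gradient solve on a sparse system with a million unknowns.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(0)

    # Item 9: solve Ax = b directly; never form inv(A) just to apply it.
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)
    x = np.linalg.solve(A, b)            # LU factorization + triangular solves
    x_via_inv = np.linalg.inv(A) @ b     # more work, typically less accurate
    print(np.linalg.norm(x - x_via_inv))

    # Item 7: conjugate gradients on a sparse, well-conditioned SPD system;
    # each iteration is just one cheap sparse matrix-vector product.
    n = 1_000_000
    T = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    rhs = np.ones(n)
    y, info = cg(T, rhs)                 # info == 0 means it converged
    print(info, np.linalg.norm(T @ y - rhs))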

Related posts:

Don’t invert that matrix
Searching for John Francis
Applying PageRank to biology
Matrix cookbook
What is the cosine of a matrix?

22 comments on “Ten surprises from numerical linear algebra”
  1. Sean Devlin says:

    No love for ATLAS? I thought this was used in favor of LAPACK nowadays.

    http://math-atlas.sourceforge.net/

  2. Giles Warrack says:

    Re No 3, Herbert Wilf (Penn) has a nice handout on the Kendall-Wei algorithm and Google

    http://www.math.upenn.edu/~wilf/

  3. joeyo says:

    @Sean: LAPACK is built on top of BLAS/ATLAS.
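
    For what it’s worth, here’s a quick way to check which BLAS/LAPACK a given NumPy build is linked against (NumPy is just one common frontend; the output names OpenBLAS, MKL, ATLAS, etc., depending on the installation):

        import numpy as np
        np.show_config()   # prints the BLAS/LAPACK libraries NumPy was built against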

  4. Evan says:

    So… you have convinced me to learn linear algebra. Do you have any suggestions for books to get started? Thanks for the inspiration!

  5. John says:

    Evan, there are a lot of linear algebra books out there. I got a review copy of Cheney and Kincaid’s linear algebra book a few weeks ago and I thought it was a nice text. It’s easy to read and has a good mix of theory and application.

    If you know basic linear algebra and want to jump into numerical linear algebra, I recommend Demmel’s Applied Numerical Linear Algebra. Also, there’s the classic Matrix Computations by Golub and Van Loan.

  6. Evan says:

    John, thanks for your quick reply. I’ll definitely check those out!

  7. Matt says:

    By the way, has anyone here played with GotoBLAS?
    Is its performance really as good as the benchmarks on its website suggest or is it just advertisement?
    The source code is freely available, though only for research applications (commercial is not free). Here are relevant links:
    http://www.tacc.utexas.edu/resources/software/gotoblasfaq/
    http://en.wikipedia.org/wiki/Kazushige_Goto
    http://www.tacc.utexas.edu/tacc-projects/
    http://web.tacc.utexas.edu/~kgoto/
    http://www.pathscale.com/building_code/gotoblas.html

  8. John suggested I post this here –

    We recently released a MATLAB package called QLA that dramatically accelerates core linear algebra routines on dense matrices. It includes SVD, linear systems, least squares, PCA, etc., and speedups range from 10x to 1,000x+ on benchmarks. The gains come by trading a minuscule bit of accuracy for a mammoth increase in speed.

    You can find more details and the free beta version here:
    http://massiveanalytics.com.

  9. Ron Modesitt says:

    What a thoroughly interesting blog! Glad I found you, with thanks to Stumble.

    Ron

  10. Luis says:

    Another interesting surprise is that, often, the matrices involved are very sparse.
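
    To put a number on “very sparse,” here is a rough illustration (SciPy assumed) with the standard 5-point discrete Laplacian, the kind of matrix finite difference and finite element methods produce:

        import scipy.sparse as sp

        n = 300                                  # 300 x 300 grid
        I = sp.identity(n)
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = sp.kron(I, T) + sp.kron(T, I)        # 90,000 x 90,000 matrix
        density = A.nnz / (A.shape[0] * A.shape[1])
        print(A.shape, A.nnz, f"{density:.1e}")  # about 5 nonzeros per row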

    Books: I love “Matrix algebra useful for statistics” by Searle.

  11. Some surprises indeed!

    However, a small note for point 4:

    The efficiency of solving very large systems of equations has benefited at least as much from advances in algorithms as from Moore’s law.

    At least solutions for sparse systems of equations have benefited two orders of magnitude more from algorithmic developments than from hardware advancements (slide 7), for a total of 10 orders of magnitude improvement from 1970 to 2001.

    For other kinds of linear equations, the improvements have indeed been approximately the same.

  12. fish says:

    Ha, good to know re: matrix inversion — I often use inverses in calculating color transforms (see http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html) but hey. Thanks!
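
    For a 3x3 color matrix the explicit inverse is harmless, but the solve-based version of the same computation looks like this (a sketch, assuming NumPy; the matrix is the commonly quoted sRGB/D65 RGB-to-XYZ matrix):

        import numpy as np

        # RGB -> XYZ (sRGB primaries, D65 white point)
        M = np.array([[0.4124564, 0.3575761, 0.1804375],
                      [0.2126729, 0.7151522, 0.0721750],
                      [0.0193339, 0.1191920, 0.9503041]])
        xyz = np.array([0.2, 0.3, 0.4])

        rgb = np.linalg.solve(M, xyz)   # instead of np.linalg.inv(M) @ xyz
        print(rgb)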

  13. Juan Solano says:

    Let me recommend the videos at Khan Academy; it’s a very nice place to start and refresh the key concepts… and then digest the beauty of linear algebra…

    http://www.khanacademy.org/#browse

    Juan

  14. J says:

    Charles Van Loan has a sweet pair of custom Nike kicks. Not even joking.

  15. martin says:

    Could you elaborate on number 6? I would have thought it would have taken close to a millisecond to just read a million values.
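
    For reference, here is a rough timing sketch (SciPy assumed) of one structure item 6 could have in mind, a tridiagonal system, which LAPACK solves in O(n) time:

        import time
        import numpy as np
        from scipy.linalg import solve_banded

        n = 1_000_000
        ab = np.zeros((3, n))            # banded storage: super-, main, sub-diagonal
        ab[0, 1:]  = -1.0
        ab[1, :]   =  4.0
        ab[2, :-1] = -1.0
        b = np.ones(n)

        t0 = time.perf_counter()
        x = solve_banded((1, 1), ab, b)  # O(n) banded LU solve
        print(f"{time.perf_counter() - t0:.4f} s")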

  16. mg says:

    @Evan my favorite text is Jänich’s Linear Algebra.

  17. Evan says:

    Thanks mg. I’ll check that out as well.

  18. AV says:

    This article is a ‘triviality’.
    Any Indian schoolkid knows this.

    – Of course linear algebra is blazing fast for FEA and related computations,
    but the writer should state that the ‘speed’ in fast linear algebra comes not from inherent properties of linear algebra, but from the fact that we MASSIVELY approximate the actual behavior of the beams/trusses/etc. We approximate reality, and that’s because we don’t have good computational algorithms to handle the ‘true’ mathematical representations.

    If that was ‘overhead transmission’, then here’s a simplification: try to find a ‘fast’ solution to the Navier–Stokes equations using ‘linear algebra’ in their pure form.

  19. Anselmo says:

    For me, the greatest achievement of linear algebra is the simplex method. Linear optimisation problems with a million variables can be solved on home computers in a few minutes. It is remarkably efficient, though it has exponential worst-case performance. In addition, most exact non-linear optimisation algorithms use the simplex method for solving the linear approximation model.
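
    A tiny instance of what such a solver call looks like (a sketch, assuming SciPy, whose linprog with method="highs" dispatches to the HiGHS simplex and interior-point codes):

        # minimize -x - 2y  subject to  x + y <= 4,  x <= 3,  x, y >= 0
        from scipy.optimize import linprog

        res = linprog(c=[-1, -2],
                      A_ub=[[1, 1], [1, 0]],
                      b_ub=[4, 3],
                      bounds=[(0, None), (0, None)],
                      method="highs")
        print(res.x, res.fun)            # optimum at (0, 4), objective -8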

  20. babita says:

    I want to solve the equation Ax = b, where A is sparse and given in CSC format. A and b are known.

    Can you give me some references for solving it?

    Reply as soon as possible… it’s urgent.
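
    A minimal sketch of that kind of solve (SciPy assumed; spsolve is a direct sparse solver built on SuperLU, and scipy.sparse.linalg also offers iterative methods such as cg and gmres):

        import numpy as np
        from scipy.sparse import csc_matrix
        from scipy.sparse.linalg import spsolve

        # toy 3x3 sparse A built directly from CSC arrays (data, indices, indptr)
        data    = np.array([4.0, 1.0, 4.0, 1.0, 4.0])
        indices = np.array([0, 1, 1, 2, 2])
        indptr  = np.array([0, 2, 4, 5])
        A = csc_matrix((data, indices, indptr), shape=(3, 3))
        b = np.ones(3)

        x = spsolve(A, b)                # direct sparse LU solve
        print(x)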

  21. Federico Poloni says:

    “The heart of Google is an enormous linear algebra problem.”

    This is often heard, and the algorithm itself is a nice one to tell around, but I think its role in the architecture of a search engine is overrated.

    1- While PageRank constitutes a part of the algorithm that is used to rank sites (allegedly – it’s not like it’s open source), there must be much more than that. PageRank returns a universal ranking that does not depend on the query. Clearly the ordering has to be tweaked to the specific search. PageRank is not computed on the fly for every web search; it would be too slow to do that.

    2- Ranking isn’t the only part of the problem. There is crawling, indexing, storing data, and recalling results efficiently. PageRank might be a nice idea that constitutes 1% of that, but the bulk of the computation is many boring details on storing and retrieving compressed data.

  22. John says:

    Federico: The basic idea that made Google a better search engine than its competitors was an eigenvalue problem. That’s not to say that the algorithm hasn’t been tweaked, or that it’s easy to implement, or that many other factors don’t matter.
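
    For anyone curious, here is a toy version of that eigenvalue problem (a minimal power iteration sketch, assuming NumPy; nothing like Google’s production implementation):

        import numpy as np

        # Toy web of 3 pages; links[i, j] = 1 if page j links to page i.
        links = np.array([[0, 0, 1],
                          [1, 0, 1],
                          [1, 1, 0]], dtype=float)
        P = links / links.sum(axis=0)    # column-stochastic link matrix
        n = P.shape[0]
        d = 0.85                         # damping factor
        G = d * P + (1 - d) / n * np.ones((n, n))

        r = np.ones(n) / n
        for _ in range(100):             # power iteration: r converges to the
            r = G @ r                    # dominant eigenvector (eigenvalue 1)
        print(r)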
