Let us suppose we are given a set of linear equations $\mathbf{A}\mathbf{x}=\mathbf{b}$ to solve. Here $\mathbf{A}$ represents a square matrix of the nth order and $\mathbf{x}$ and $\mathbf{b}$ vectors of the nth order. We may either treat this problem as it stands and attempt to find $\mathbf{x}$, or we may solve the more general problem of finding the inverse of the matrix $\mathbf{A}$, and then allow it to operate on $\mathbf{b}$, giving the required solution of the equations as $\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}$. If we are quite certain that we only require the solution of the one set of equations, the former approach has the advantage of involving less work (about one-third of the number of multiplications by almost all methods). If, however, we wish to solve a number of sets of equations with the same matrix $\mathbf{A}$ it is more convenient to work out the inverse and apply it to each of the vectors $\mathbf{b}$. This involves, in addition, $n^2$ multiplications and $n$ recordings for each vector, compared with a total of about $\frac{1}{3}n^3$ multiplications in an independent solution.

— Alan Turing (1948)
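
The trade-off Turing describes is easy to see in code. Below is a minimal sketch using NumPy (names and the example matrix are illustrative, not from the original): a one-off system is solved directly, which costs roughly $\frac{1}{3}n^3$ multiplications, while for repeated right-hand sides the inverse is computed once and each subsequent solution is just an $n^2$-multiplication matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A well-conditioned example matrix (illustrative assumption).
A = rng.standard_normal((n, n)) + n * np.eye(n)
bs = [rng.standard_normal(n) for _ in range(3)]  # several right-hand sides

# One set of equations: solve directly (about n^3/3 multiplications).
x_direct = np.linalg.solve(A, bs[0])

# Many sets with the same A: invert once, then each extra solution
# costs only n^2 multiplications (a matrix-vector product).
A_inv = np.linalg.inv(A)
xs = [A_inv @ b for b in bs]
```

(In modern practice one would factor $\mathbf{A}$ once, e.g. with an LU decomposition, and reuse the factors rather than form the explicit inverse, but the operation-count argument is the same.)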

