Some open-source libraries for linear system computations
Free libraries are readily available for the majority of linear system problems
encountered in scientific computing. A Google search will turn up
scores of them, unfortunately without an external, objective evaluation of
the libraries. A library should provide:
- Interfaces to C/C++ and Fortran
- At least a simple and an expert interface, where the simple one
chooses most of the parameters and settings for you
- A test suite to verify that an installation on your machine is correct
- Numerical reliability, and when that is not possible, a warning
and an estimate of the potential size of the computational error
- Thorough documentation
Following are a few that I personally recommend; they handle about
80% of the cases I've encountered. The list is not comprehensive,
and many unlisted libraries are also of excellent quality and satisfy the above
criteria.
These are primarily intended for cases where the system is too large for
Matlab to handle, or where Matlab cannot be used. In general, for small
to midsize applications Matlab is the best and easiest to use.
Increasingly Matlab is able to handle large-scale systems, so it can
even solve problems with millions of unknowns (circa 2012).
References
The most readily available text is Matrix Computations,
by Golub and van Loan.
Converting one of the algorithm statements in the book into a Matlab
function often requires almost no changes to the text.
It is definitely targeted toward practical implementations.
For sparse linear system solvers,
Alan George and Joseph Liu's Computer Solution of Large Sparse
Positive Definite Systems, (Prentice Hall, 1981) lays out
the foundations and terminology still in use. Unfortunately it is
now expensive ($148 for a used copy on Amazon, $360 for a new one!).
Tim Davis's
Direct Methods for Sparse Linear Systems
is more current and provides a really good and brief overview of
the title topic.
Duff, Reid and Erisman have the early and definitive book
Direct Methods for Sparse Matrices
SIAM ...
... is not just a colonial name for a country in Asia.
Indiana University is an institutional member of
SIAM,
which means students can join it for free. Long before other computer
science societies recognized scientific computing as a field, SIAM was
advocating it. Their main revenue is from publishing books and journals,
but the books are usually much cheaper than what textbook publishers demand.
Maybe not as cheap as the expired-copyright books from
Dover Publications,
but closer to Dover than to Prentice-Hall in price.
Also, many of the authors make no money on the books and
instead have their royalties fed into a pool for sponsoring
students to attend SIAM conferences.
Take advantage of IU's institutional membership, and not just for this one book.
For a good mathematical reference on the methods used in
modern eigenvalue solvers, see
The
Matrix Eigenvalue Problem: GR and Krylov Subspace Methods by Watkins,
another SIAM publication.
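For a hands-on taste of those methods: scipy's eigsh wraps ARPACK, which
implements implicitly restarted Lanczos/Arnoldi Krylov iterations. A sketch,
assuming numpy and scipy are installed (the test matrix is my own choice,
not something from Watkins's book):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Test problem: the 1-D discrete Laplacian, a symmetric tridiagonal matrix
# whose eigenvalues are known in closed form.
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")

# eigsh wraps ARPACK's implicitly restarted Lanczos iteration, a Krylov
# subspace method for computing a few eigenpairs of a large symmetric matrix.
# sigma=0 turns on shift-invert mode, targeting the eigenvalues nearest zero.
vals, vecs = eigsh(A, k=4, sigma=0)

# Exact eigenvalues of this matrix: 4*sin^2(k*pi / (2*(n+1))), k = 1..n
exact = 4.0 * np.sin(np.arange(1, 5) * np.pi / (2 * (n + 1))) ** 2
```

Shift-invert makes the smallest eigenvalues converge quickly, at the cost of
one sparse factorization.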
Libraries
The most widely used library
for dense systems
is LAPACK.
If you need LU factorizations or eigenvalues of dense matrices that exceed
the memory capacity of a single machine, use
ScaLAPACK.
As the name suggests it scales up to thousands of processors if necessary.
Those provide full decompositions (e.g., all of the eigenvalues and
eigenvectors) of dense matrices and some quasi-sparse ones like triangular
or banded systems.
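To make the interfaces concrete, here is a sketch using scipy's thin wrappers
over the Fortran LAPACK routines (the wrappers and the random test matrices
are my illustration; calling dgesv or dsyev from C or Fortran passes the same
arguments):

```python
import numpy as np
from scipy.linalg import lapack

rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# dgesv: LU factorization with partial pivoting, followed by a solve of
# A x = b.  info == 0 means success; info > 0 flags a singular matrix.
lu, piv, x, info = lapack.dgesv(a, b)
assert info == 0

# dsyev: all eigenvalues and eigenvectors of a symmetric matrix --
# a full decomposition in the sense above.
s = a + a.T
w, v, info_ev = lapack.dsyev(s)
```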
Both of those rely upon good implementations of the BLAS, one of the reasons
for emphasizing them in P573.
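A small illustration of that layering, again through scipy's wrappers (the
matrix sizes and data are my own choice):

```python
import numpy as np
from scipy.linalg import blas

# dgemm is the level-3 BLAS routine C <- alpha*A@B + beta*C; blocked
# LAPACK factorizations spend most of their time inside it.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2))

C = blas.dgemm(alpha=1.0, a=A, b=B)
```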
The site that hosts LAPACK,
netlib.org, also hosts a large collection of
publicly available libraries, books, papers, and databases in scientific
computing. Some are antique but even those are reliable and classic, in
the English lit sense that they "have withstood the test of time".
Warning: a compatibility library (AKA a "reference implementation")
of the BLAS is also on netlib.org, but it is not a high-performance one.
It was designed as a temporary measure when the BLAS were first
proposed, with the idea that it would be supplanted by people implementing
the API in more efficient versions.
Instead, use a BLAS library supplied by your compiler vendor (Sun, Intel,
PGI), and if one is not available try
ATLAS,
FLAME,
or similar projects.
Tim Davis's book is also associated with a library
UMFPACK,
which implements a "multifrontal" method for LU factorization.
The other major method for sparse LU uses "supernodes", and
SuperLU
is an excellent and well-maintained library for it.
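In fact, scipy's splu is a wrapper around SuperLU, so you can try a supernodal
sparse LU without writing any C (the tiny matrix here is a toy example of
mine; UMFPACK is reachable the same way via the scikit-umfpack package, not
shown):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small symmetric positive definite sparse matrix (toy example).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)     # SuperLU factorization, with a fill-reducing ordering
x = lu.solve(b)  # forward/back substitution with the computed factors
```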
Other Junk Stuff
All of the above references primarily just provide keywords for doing a
Google search. Try doing a search with just the word "flame" and see how
many pages it takes before the link for the scientific library shows up.
Don't forget the second fundamental rule of scientific computing: never
write code if instead it can be scavenged on the Web (but give a citation
when you do that). Use the Google; that's what it is for.
- Last Modified: Mon 04 Nov 2019, 07:01 AM