
# Controlling algorithm stability through pivoting

When performing Gaussian elimination, we showed in Chapter 3 that it was necessary to interchange rows to avoid a zero pivot. In Section 4.3, we illustrated that the use of a small pivot can lead to growth in the size of the computed entries. On our 3-digit computer, the small pivot led to a large multiplier resulting in the addition -2420 + 1.58, which destroys the value 1.58. Interchanges are again necessary for stability, this time to control growth in the resulting numbers. We study a variety of pivoting strategies aimed at controlling growth in the course of the factorization.
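The loss described above can be simulated by rounding every intermediate result to three significant digits. The 2×2 matrix below is assumed for illustration: its entries are chosen only so that the elimination reproduces the multiplier and the addition −2420 + 1.58 quoted above.

```python
import math

def fl3(x):
    """Round x to three significant digits, mimicking a 3-digit computer."""
    if x == 0:
        return 0.0
    return round(x, 2 - int(math.floor(math.log10(abs(x)))))

# Assumed example: a tiny pivot 0.001 sits above the entry 1.00.
a11, a12 = 0.001, 2.42
a21, a22 = 1.00, 1.58

m = fl3(a21 / a11)             # multiplier 1000 -- far larger than 1
u22 = fl3(a22 - fl3(m * a12))  # 1.58 - 2420 rounds to -2420: 1.58 is lost

# Rebuilding a22 from the stored factors (exact arithmetic) shows the damage:
a22_rebuilt = m * a12 + u22
print(m, u22, a22_rebuilt)     # 1000.0 -2420.0 0.0
```

The entry 1.58 has no influence on the computed factors at all; any right-hand side value in that position would give the same factorization.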

## Partial pivoting

One reason why we might get growth that destroys smaller values in the course of the computation is a very large multiplier. A remedy is to require that the inequality $|l_{ik}| \le 1$ hold for the coefficients of the matrix L. This is readily achieved by reordering the rows of the matrix so that the new pivot satisfies the inequalities

$$ |a_{kk}^{(k)}| \ge |a_{ik}^{(k)}|, \qquad i = k+1, \ldots, n. $$

The k-th column must be scanned on and below the main diagonal to determine its entry of largest magnitude. The row containing this entry is interchanged with the k-th row and so becomes the new k-th row. This strategy is called partial pivoting.
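A minimal dense LU factorization with partial pivoting can be sketched as follows; this is a straightforward NumPy illustration of the strategy just described, not the book's own code.

```python
import numpy as np

def lu_partial_pivot(A):
    """Return L, U and a row permutation piv such that A[piv] = L @ U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        # Scan column k on and below the diagonal for the largest entry.
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:                       # interchange rows k and p
            A[[k, p]] = A[[p, k]]
            piv[[k, p]] = piv[[p, k]]
        # Multipliers are stored in the strictly lower triangle;
        # by construction every multiplier satisfies |l_ik| <= 1.
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    return L, U, piv

A = np.array([[0.001, 2.42], [1.00, 1.58]])
L, U, piv = lu_partial_pivot(A)   # rows one and two are interchanged first
```

Because every multiplier has magnitude at most one, a single elimination step can at worst double the magnitude of an entry of the reduced matrix.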

Applying this strategy to example (4.3.1), we would interchange rows one and two of A before beginning the reduction and would compute the factors

$$ L = \begin{pmatrix} 1 & \\ 0.001 & 1 \end{pmatrix}, \qquad U = \begin{pmatrix} 1.00 & 1.58 \\ & 2.42 \end{pmatrix}. $$

Using these factors, we compute a solution that is almost correct to three decimal places (x = 1.176).
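Repeating the 3-digit simulation after the interchange shows why the pivoted factors are benign; the entries 0.001, 2.42, 1.00, and 1.58 are assumed from the example as quoted earlier.

```python
import math

def fl3(x):
    """Round x to three significant digits, mimicking a 3-digit computer."""
    if x == 0:
        return 0.0
    return round(x, 2 - int(math.floor(math.log10(abs(x)))))

# After the row interchange the pivot is 1.00 and the multiplier is tiny.
m = fl3(0.001 / 1.00)            # 0.001 -- satisfies |l| <= 1
u22 = fl3(2.42 - fl3(m * 1.58))  # 2.42 - 0.00158 rounds back to 2.42
print(m, u22)                    # 0.001 2.42
```

No large intermediate values arise, so nothing in the original matrix is swamped and the computed factors carry full 3-digit accuracy.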

Note that this strategy for computation subject to rounding errors is similar to the strategy used for treating a zero pivot in exact arithmetic. In the exact case, we were concerned only to avoid a zero pivot; in the presence of rounding, small pivots must also be avoided. Note also that a zero pivot gives the ultimate disaster of infinite growth and total loss of accuracy.

In practice, Gaussian elimination with partial pivoting is considered to be a stable algorithm. This is based on experience rather than rigorous analysis, since the best a priori bound that can be given for a dense matrix is

$$ \max_{i,j,k} |a_{ij}^{(k)}| \le 2^{n-1} \max_{i,j} |a_{ij}|. $$

This bound is easy to establish (see Exercise 4.5), but it is generally very pessimistic, particularly when n is very large.
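The exponential bound is actually attained by a well-known worst case: a matrix with unit diagonal, −1 everywhere below it, and a final column of ones. The sketch below measures the growth directly under partial pivoting.

```python
import numpy as np

def growth_factor(A):
    """Largest entry magnitude over all elimination stages under partial
    pivoting, relative to the largest entry of the original matrix."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    base = np.abs(A).max()
    g = base
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
        g = max(g, np.abs(A).max())   # multipliers are <= 1, so harmless here
    return g / base

# Worst-case matrix: unit diagonal, -1 below it, ones in the last column.
n = 6
W = -np.tril(np.ones((n, n)), -1) + np.eye(n)
W[:, -1] = 1.0
rho = growth_factor(W)   # doubles the last column at every step: 2**(n-1)
```

For almost any matrix met in practice the measured growth is tiny compared with $2^{n-1}$, which is why the bound, though sharp, is regarded as pessimistic.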
