Cauchy matrix
In mathematics, a Cauchy matrix, named after Augustin-Louis Cauchy, is an m×n matrix with elements \( a_{ij} \) of the form
\( a_{ij}={\frac{1}{x_i-y_j}};\quad x_i-y_j\neq 0,\quad 1 \le i \le m,\quad 1 \le j \le n \)
where \( x_i \) and \( y_j \) are elements of a field \( \mathcal{F} \), and \( (x_i) \) and \( (y_j) \) are injective sequences (their elements are pairwise distinct).
The Hilbert matrix is a special case of the Cauchy matrix, where
\( x_i-y_j = i+j-1. \; \)
Every submatrix of a Cauchy matrix is itself a Cauchy matrix.
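The definition above can be illustrated with a short Python sketch (the function name is illustrative, not from the article); exact rational arithmetic via `fractions.Fraction` plays the role of the field \( \mathcal{F} \):

```python
from fractions import Fraction

def cauchy_matrix(xs, ys):
    """m x n Cauchy matrix with a_ij = 1 / (x_i - y_j), in exact rational arithmetic."""
    # Injectivity of both sequences guarantees x_i - y_j can be nonzero for all pairs.
    assert len(set(xs)) == len(xs) and len(set(ys)) == len(ys), "sequences must be injective"
    return [[Fraction(1, x - y) for y in ys] for x in xs]

# Choosing x_i = i and y_j = 1 - j gives x_i - y_j = i + j - 1, i.e. the Hilbert matrix.
n = 3
H = cauchy_matrix(list(range(1, n + 1)), [1 - j for j in range(1, n + 1)])
# first row: 1, 1/2, 1/3
```

Deleting any rows and columns of `H` leaves a matrix of the same form, built from the corresponding subsequences, which is the submatrix property stated above.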
Cauchy determinants
The determinant of a Cauchy matrix is clearly a rational fraction in the parameters \( (x_i) \) and \( (y_j) \). If the sequences were not injective, the determinant would vanish, and it tends to infinity if some \( x_i \) tends to some \( y_j \). A subset of its zeros and poles is thus known. In fact, there are no further zeros or poles:
The determinant of a square Cauchy matrix A is known as a Cauchy determinant and can be given explicitly as
\( \det \mathbf{A}={{\prod_{i=2}^n \prod_{j=1}^{i-1} (x_i-x_j)(y_j-y_i)}\over {\prod_{i=1}^n \prod_{j=1}^n (x_i-y_j)}} \) (Schechter 1959, eqn 4).
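As a sanity check on the closed form, the following Python sketch (helper names are illustrative) evaluates Schechter's formula exactly and compares it against a brute-force Leibniz-formula determinant on the 3×3 Hilbert matrix:

```python
from fractions import Fraction
from itertools import permutations

def cauchy_det(xs, ys):
    """Closed-form Cauchy determinant (Schechter 1959, eqn 4)."""
    n = len(xs)
    num = Fraction(1)
    for i in range(1, n):
        for j in range(i):
            num *= (xs[i] - xs[j]) * (ys[j] - ys[i])
    den = Fraction(1)
    for x in xs:
        for y in ys:
            den *= x - y
    return num / den

def leibniz_det(a):
    """Brute-force determinant via the Leibniz formula, for checking only."""
    n = len(a)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = (-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
        prod = Fraction(1)
        for i in range(n):
            prod *= a[i][p[i]]
        total += sign * prod
    return total

# x_i = i, y_j = 1 - j gives the 3x3 Hilbert matrix, whose determinant is 1/2160.
xs, ys = [1, 2, 3], [0, -1, -2]
C = [[Fraction(1, x - y) for y in ys] for x in xs]
assert cauchy_det(xs, ys) == leibniz_det(C) == Fraction(1, 2160)
```

Note the numerator vanishes exactly when two \( x_i \) or two \( y_j \) coincide, and the denominator vanishes exactly when some \( x_i = y_j \), matching the zero/pole discussion above.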
It is always nonzero, and thus all square Cauchy matrices are invertible. The inverse \( \mathbf{A}^{-1} = \mathbf{B} = [b_{ij}] \) is given by
\( b_{ij} = (x_j - y_i) A_j(y_i) B_i(x_j) \) (Schechter 1959, Theorem 1)
where \( A_i(x) \) and \( B_i(x) \) are the Lagrange polynomials for \( (x_i) \) and \( (y_j) \), respectively. That is,
\( A_i(x) = \frac{A(x)}{A^\prime(x_i)(x-x_i)} \quad\text{and}\quad B_i(x) = \frac{B(x)}{B^\prime(y_i)(x-y_i)}, \)
with
\( A(x) = \prod_{i=1}^n (x-x_i) \quad\text{and}\quad B(x) = \prod_{i=1}^n (x-y_i). \)
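Since \( A(x)/(x-x_i) = \prod_{k\neq i}(x-x_k) \) and \( A'(x_i) = \prod_{k\neq i}(x_i-x_k) \), each \( A_i \) is just a Lagrange basis polynomial, which makes the inversion formula easy to sketch in Python (function names are illustrative); the product \( \mathbf{BC} = \mathbf{I} \) can then be verified in exact arithmetic:

```python
from fractions import Fraction

def lagrange_basis(pts, i, t):
    """L_i(t) = prod_{k != i} (t - pts[k]) / (pts[i] - pts[k])."""
    v = Fraction(1)
    for k, p in enumerate(pts):
        if k != i:
            v *= Fraction(t - p, pts[i] - p)
    return v

def cauchy_inverse(xs, ys):
    """b_ij = (x_j - y_i) A_j(y_i) B_i(x_j) (Schechter 1959, Theorem 1)."""
    n = len(xs)
    return [[(xs[j] - ys[i]) * lagrange_basis(xs, j, ys[i]) * lagrange_basis(ys, i, xs[j])
             for j in range(n)]
            for i in range(n)]

# Check B C = I exactly on a 3x3 example (again the 3x3 Hilbert matrix).
xs, ys = [1, 2, 3], [0, -1, -2]
C = [[Fraction(1, x - y) for y in ys] for x in xs]
B = cauchy_inverse(xs, ys)
for i in range(3):
    for j in range(3):
        assert sum(B[i][k] * C[k][j] for k in range(3)) == (1 if i == j else 0)
```

The formula inverts an n×n Cauchy matrix in \( O(n^2) \) operations, versus \( O(n^3) \) for generic Gaussian elimination.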
Generalization
A matrix C is called Cauchy-like if it is of the form
\( C_{ij}=\frac{r_i s_j}{x_i-y_j}. \)
Defining \( \mathbf{X}=\operatorname{diag}(x_i) \) and \( \mathbf{Y}=\operatorname{diag}(y_i) \), one sees that both Cauchy and Cauchy-like matrices satisfy the displacement equation
\( \mathbf{XC}-\mathbf{CY}=rs^\mathrm{T} \)
(with \( r=s=(1,1,\ldots,1) \) in the Cauchy case). Hence Cauchy-like matrices share a common displacement structure, which can be exploited when working with the matrix. For example, there are known algorithms in the literature for
approximate Cauchy matrix-vector multiplication with \( O(n \log n) \) ops (e.g. the fast multipole method),
(pivoted) LU factorization with \( O(n^2) \) ops (GKO algorithm), and thus linear system solving,
approximate or unstable algorithms for linear system solving in \( O(n \log^2 n) \) ops.
Here n denotes the size of the matrix (one usually deals with square matrices, though all algorithms can be easily generalized to rectangular matrices).
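The displacement equation is immediate to verify numerically: for a Cauchy matrix, \( (\mathbf{XC}-\mathbf{CY})_{ij} = (x_i - y_j)\,C_{ij} = 1 \), so the displacement is the rank-one all-ones matrix. A minimal Python check:

```python
from fractions import Fraction

xs, ys = [1, 2, 3], [0, -1, -2]
C = [[Fraction(1, x - y) for y in ys] for x in xs]

# (XC - CY)_ij = x_i * C_ij - C_ij * y_j = (x_i - y_j) * C_ij = 1, so XC - CY
# is the all-ones matrix, i.e. r s^T with r = s = (1, ..., 1).
D = [[xs[i] * C[i][j] - C[i][j] * ys[j] for j in range(3)] for i in range(3)]
assert all(d == 1 for row in D for d in row)
```

The low displacement rank (here rank one) is precisely the structure the fast algorithms listed above exploit.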
See also
Toeplitz matrix
References
A. Gerasoulis (1988). "A fast algorithm for the multiplication of generalized Hilbert matrices with vectors" (PDF). Mathematics of Computation 50 (181): 179–188. doi:10.2307/2007921.
I. Gohberg, T. Kailath, V. Olshevsky (1995). "Fast Gaussian elimination with partial pivoting for matrices with displacement structure" (PDF). Mathematics of Computation 64 (212): 1557–1576. doi:10.1090/s0025-5718-1995-1312096-x.
P. G. Martinsson, M. Tygert, V. Rokhlin (2005). "An O(N log² N) algorithm for the inversion of general Toeplitz matrices" (PDF). Computers & Mathematics with Applications 50: 741–752. doi:10.1016/j.camwa.2005.03.011.
S. Schechter (1959). "On the inversion of certain matrices" (PDF). Mathematical Tables and Other Aids to Computation 13 (66): 73–77. doi:10.2307/2001955.