Difference between revisions of "Book:Nicholas Higham/Functions of Matrices: Theory and Computation"

From specialfunctionswiki

Revision as of 04:16, 24 February 2018

Nicholas Higham: Functions of Matrices: Theory and Computation

Published $2008$.


List of Figures
List of Tables
Preface
1 Theory of Matrix Functions
1.1 Introduction
1.2 Definitions of $f(A)$
1.2.1 Jordan Canonical Form
1.2.2 Polynomial Interpolation
1.2.3 Cauchy Integral Theorem
1.2.4 Equivalence of Definitions
1.2.5 Example: Function of Identity Plus Rank-$1$ Matrix
1.2.6 Example: Functions of Discrete Fourier Transform Matrix
1.3 Properties
1.4 Nonprimary Matrix Functions
1.5 Existence of (Real) Matrix Square Roots and Logarithms
1.6 Classification of Matrix Square Roots and Logarithms
1.7 Principal Square Roots and Logarithms
1.8 $f(AB)$ and $f(BA)$
1.9 Miscellany
1.10 A Brief History of Matrix Functions
1.11 Notes and References
Problems
2 Applications
2.1 Differential Equations
2.1.1 Exponential Integrators
2.2 Nuclear Magnetic Resonance
2.3 Markov Models
2.4 Control Theory
2.5 The Nonsymmetric Eigenvalue Problem
2.6 Orthogonalization and the Orthogonal Procrustes Problem
2.7 Theoretical Particle Physics
2.8 Other Matrix Functions
2.9 Nonlinear Matrix Equations
2.10 Geometric Mean
2.11 Pseudospectra
2.12 Algebras
2.13 Sensitivity Analysis
2.14 Other Applications
2.14.1 Boundary Value Problems
2.14.2 Semidefinite Programming
2.14.3 Matrix Sector Function
2.14.4 Matrix Disk Function
2.14.5 The Average Eye in Optics
2.14.6 Computer Graphics
2.14.7 Bregman Divergences
2.14.8 Structured Matrix Interpolation
2.14.9 The Lambert $W$ Function and Delay Differential Equations
2.15 Notes and References
Problems
3 Conditioning
3.1 Condition Numbers
3.2 Properties of the Fréchet Derivative
3.3 Bounding the Condition Number
3.4 Computing or Estimating the Condition Number
3.5 Notes and References
Problems
4 Techniques for General Functions
4.1 Matrix Powers
4.2 Polynomial Evaluation
4.3 Taylor Series
4.4 Rational Approximation
4.4.1 Best $L_{\infty}$ Approximation
4.4.2 Padé Approximation
4.4.3 Evaluating Rational Functions
4.5 Diagonalization
4.6 Schur Decomposition and Triangular Matrices
4.7 Block Diagonalization
4.8 Interpolating Polynomial and Characteristic Polynomial
4.9 Matrix Iterations
4.9.1 Order of Convergence
4.9.2 Termination Criteria
4.9.3 Convergence
4.9.4 Numerical Stability
4.10 Preprocessing
4.11 Bounds for $\lVert f(A) \rVert$
4.12 Notes and References
Problems
5 Matrix Sign Function
5.1 Sensitivity and Conditioning
5.2 Schur Method
5.3 Newton's Method
5.4 The Padé Family of Iterations
5.5 Scaling the Newton Iteration
5.6 Terminating the Iterations
5.7 Numerical Stability of Sign Iterations
5.8 Numerical Experiments and Algorithm
5.9 Best $L_{\infty}$ Approximation
5.10 Notes and References
Problems
6 Matrix Square Root
6.1 Sensitivity and Conditioning
6.2 Schur Method
6.3 Newton's Method and Its Variants
6.4 Stability and Limiting Accuracy
6.4.1 Newton Iteration
6.4.2 DB Iterations
6.4.3 CR Iteration
6.4.4 IN Iteration
6.4.5 Summary
6.5 Scaling the Newton Iteration
6.6 Numerical Experiments
6.7 Iterations via the Matrix Sign Function
6.8 Special Matrices
6.8.1 Binomial Iteration
6.8.2 Modified Newton Iterations
6.8.3 $M$-Matrices and $H$-Matrices
6.8.4 Hermitian Positive Definite Matrices
6.9 Computing Small-Normed Square Roots
6.10 Comparison of Methods
6.11 Involutory Matrices
6.12 Notes and References
Problems
7 Matrix $p$th Root
7.1 Theory
7.2 Schur Method
7.3 Newton's Method
7.4 Inverse Newton Method
7.5 Schur-Newton Method
7.6 Matrix Sign Method
7.7 Notes and References
Problems
8 The Polar Decomposition
8.1 Approximation and Properties
8.2 Sensitivity and Conditioning
8.3 Newton's Method
8.4 Obtaining Iterations via the Matrix Sign Function
8.5 The Padé Family of Methods
8.6 Scaling the Newton Iteration
8.7 Terminating the Iterations
8.8 Numerical Stability and Choice of $H$
8.9 Algorithm
8.10 Notes and References
Problems
9 Schur-Parlett Algorithm
9.1 Evaluating Functions of the Atomic Blocks
9.2 Evaluating the Upper Triangular Part of $f(T)$
9.3 Reordering and Blocking the Schur Function
9.4 Schur-Parlett Algorithm for $f(A)$
9.5 Preprocessing
9.6 Notes and References
Problems
10 Matrix Exponential
10.1 Basic Properties
10.2 Conditioning
10.3 Scaling and Squaring Method
10.4 Schur Algorithms
10.4.1 Newton Divided Difference Interpolation
10.4.2 Schur-Fréchet Algorithm
10.4.3 Schur-Parlett Algorithm
10.5 Numerical Experiment
10.6 Evaluating the Fréchet Derivative and Its Norm
10.6.1 Quadrature
10.6.2 The Kronecker Formulae
10.6.3 Computing and Estimating the Norm
10.7 Miscellany
10.7.1 Hermitian Matrices and Best $L_{\infty}$ Approximation
10.7.2 Essentially Nonnegative Matrices
10.7.3 Preprocessing
10.7.4 The $\psi$ Functions
10.8 Notes and References
Problems
11 Matrix Logarithm
11.1 Basic Properties
11.2 Conditioning
11.3 Series Expansions
11.4 Padé Approximation
11.5 Inverse Scaling and Squaring Method
11.5.1 Schur Decomposition: Triangular Matrices
11.5.2 Full Matrices
11.6 Schur Algorithms
11.6.1 Schur-Fréchet Algorithm
11.6.2 Schur-Parlett Algorithm
11.7 Numerical Experiment
11.8 Evaluating the Fréchet Derivative
11.9 Notes and References
Problems
12 Matrix Cosine and Sine
12.1 Basic Properties
12.2 Conditioning
12.3 Padé Approximation of Cosine
12.4 Double Angle Algorithm for Cosine
12.5 Numerical Experiment
12.6 Double Angle Algorithm for Sine and Cosine
12.6.1 Preprocessing
12.7 Notes and References
Problems
13 Functions of Matrix Times Vector: $f(A)b$
13.1 Representation via Polynomial Interpolation
13.2 Krylov Subspace Methods
13.2.1 The Arnoldi Process
13.2.2 Arnoldi Approximation of $f(A)b$
13.2.3 Lanczos Biorthogonalization
13.3 Quadrature
13.3.1 On the Real Line
13.3.2 Contour Integration
13.4 Differential Equations
13.5 Other Methods
13.6 Notes and References
Problems
14 Miscellany
14.1 Structured Matrices
14.1.2 Monotone Functions
14.1.3 Other Structures
14.1.4 Data Sparse Representations
14.1.5 Computing Structured $f(A)$ for Structured $A$
14.2 Exponential Decay of Functions of Banded Matrices
14.3 Approximating Entries of Matrix Functions
A Notation
B Background: Definitions and Useful Facts
B.1 Basic Notation
B.2 Eigenvalues and Jordan Canonical Form
B.3 Invariant Subspaces
B.4 Special Classes of Matrices
B.5 Matrix Factorizations and Decompositions
B.6 Pseudoinverse and Orthogonality
B.6.1 Pseudoinverse
B.6.2 Projector and Orthogonal Projector
B.6.3 Partial Isometry
B.7 Norms
B.8 Matrix Sequences and Series
B.9 Perturbation Expansions for Matrix Inverse
B.10 Sherman-Morrison-Woodbury Formula
B.11 Nonnegative Matrices
B.12 Positive (Semi)definite Ordering
B.13 Kronecker Product and Sum
B.14 Sylvester Equation
B.15 Floating Point Arithmetic
B.16 Divided Differences
Problems
C Operation Counts
D Matrix Function Toolbox
E Solutions to Problems
Bibliography
Index