Vector spaces over a field with extra structure: inner products, orthogonality, matrix decompositions, spectral theory. Split out from abstract-algebra as its own tree because it anchors quantum mechanics (Hilbert spaces, observables as…
linear-algebra
Inner product
A sesquilinear form ⟨·,·⟩: V×V → F satisfying conjugate symmetry, linearity in the first slot, and positive-definiteness. Induces a norm ||v|| =…
Orthogonality
u ⊥ v iff ⟨u,v⟩ = 0. Orthogonal sets of nonzero vectors are linearly independent; orthonormal bases make computations diagonal.
Gram-Schmidt process
Any linearly independent set {v_1,...,v_k} can be converted to an orthonormal set {e_1,...,e_k} spanning the same subspace by successive…
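A minimal NumPy sketch of the process (illustrative, not part of the original notes), using the modified variant for numerical stability:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors
    (modified Gram-Schmidt: subtract each projection immediately)."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w = w - np.dot(e, w) * e      # remove the component along e
        basis.append(w / np.linalg.norm(w))
    return basis

es = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

The resulting `es` spans the same subspace as the input and is orthonormal.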
Rank-nullity theorem
For a linear map T: V → W on finite-dimensional V, dim V = rank T + nullity T. The dimension is split between image and kernel.
Change of basis
A matrix representation of a linear map depends on choice of basis. Change-of-basis matrix P transforms coordinates: [v]_{B'} =…
Spectral theorem
Every self-adjoint (Hermitian) operator on a finite-dim inner-product space is unitarily diagonalizable with real eigenvalues. Cornerstone…
Singular value decomposition (SVD)
Any m×n matrix A factors as A = UΣV*, with U, V unitary and Σ diagonal with non-negative entries (singular values). Generalizes…
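A quick NumPy check of the factorization (a sketch, not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Full SVD: U (4x4) and Vt (3x3) are unitary; s holds the singular values.
U, s, Vt = np.linalg.svd(A)

# Reassemble A = U Sigma V* from the factors.
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
A_rebuilt = U @ Sigma @ Vt
```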
Positive-definite matrix
A Hermitian matrix A is positive-definite iff v*Av > 0 for all nonzero v, equivalently all eigenvalues are positive. Underpins Gaussian…
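In practice the cheapest positive-definiteness test is attempting a Cholesky factorization, which succeeds exactly when the Hermitian input is positive-definite. A sketch (not from the original notes):

```python
import numpy as np

def is_positive_definite(A):
    """Cholesky succeeds iff the Hermitian matrix A is positive-definite."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

B = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3: PD
C = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues -1 and 3: not PD
```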
Jordan canonical form
Any square matrix over ℂ is similar to a block-diagonal Jordan matrix with eigenvalue blocks λI + N, where N is nilpotent. Generalizes…
Dual space V*
The vector space of linear functionals V → F. For finite-dim V, dim V* = dim V and V ≅ V** canonically. Underlies bra-ket notation and…
Cauchy–Schwarz inequality
For vectors u, v in an inner-product space: |⟨u,v⟩| ≤ ‖u‖ ‖v‖, with equality iff u, v are linearly dependent. Cornerstone of norm/metric…
Cayley–Hamilton theorem
Every square matrix A over a commutative ring satisfies its own characteristic polynomial: p_A(A) = 0.
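A numeric illustration (a sketch, not part of the original notes): evaluate the characteristic polynomial at the matrix itself via Horner's rule and confirm it vanishes.

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])

# np.poly on a square matrix returns characteristic polynomial
# coefficients, highest degree first; here [1, -5, 6].
coeffs = np.poly(A)

# Horner evaluation of p_A(A) with matrix arguments.
n = A.shape[0]
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(n)
```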
Quadratic form
A homogeneous polynomial of degree 2, Q(x) = x^⊤ A x for symmetric A. Classification by signature (Sylvester's law of inertia);…
Trace tr(A)
Sum of diagonal entries of a square matrix; equivalently Σ eigenvalues. Basis-independent, linear, and satisfies tr(AB)=tr(BA); is the…
Exterior power Λ^k V
The kth alternating tensor power of a vector space V; dim Λ^k V = C(n,k) when dim V = n. Λ^n V is one-dimensional, and an operator acts on it by multiplication by its determinant.
Tensor algebra T(V)
⊕_{k≥0} V^{⊗k}, the free associative algebra on V. Universal: any linear map V → A to an associative algebra A extends uniquely to an…
Bilinear form
A map B : V × V → F linear in each argument. Symmetric, alternating, and non-degenerate are the core subtypes; classification via Gram matrix…
Schur decomposition
Any complex square matrix A is unitarily similar to an upper-triangular matrix: A = U T U^*, U unitary. Diagonal of T consists of the…
LU decomposition
Any square matrix A (generically, up to a permutation P) factorises as A = PLU where L is unit lower-triangular and U is upper-triangular. …
Tensor decomposition
Factorisation of a multi-way array (tensor) into a sum or product of simpler tensors. Two canonical families: CP/PARAFAC (T = Σ_r λ_r a_r…
Principal Component Analysis
Data matrix X centered; SVD → principal axes = right singular vectors. Projections maximize variance. Dimensionality reduction.
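A compact sketch of PCA via the SVD route described above (synthetic data; not from the original notes):

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 samples in 3-D with most variance along the first coordinate.
X = rng.standard_normal((200, 3)) @ np.diag([5.0, 1.0, 0.2])

Xc = X - X.mean(axis=0)               # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

axes = Vt                              # rows = principal axes
variances = s**2 / (len(X) - 1)        # variance captured per component
scores = Xc @ Vt[:2].T                 # project onto the top-2 axes
```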
Spectral theorem (finite-dim)
Hermitian A ∈ M_n(ℂ): A = U D U* with U unitary, D diagonal real. Orthonormal eigenbasis. Normal matrices similarly diagonalize.
QR decomposition
A = QR with Q orthogonal/unitary, R upper-triangular. Gram-Schmidt, Householder reflections, Givens rotations. Basis for QR algorithm…
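A NumPy sketch of the reduced factorization (illustrative, not from the original notes):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])

# Reduced QR: Q (3x2) has orthonormal columns, R (2x2) is upper-triangular.
Q, R = np.linalg.qr(A)
```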
LU / Cholesky decomposition
LU: A = LU with L unit lower, U upper (pivoting: PA = LU). Cholesky: positive-definite A = LL*. Backward-stable Gauss elimination.
Matrix / operator norms
||A||_p = sup ||Ax||_p / ||x||_p. Spectral = σ_max; Frobenius = √tr(A*A). Sub-multiplicative; condition number κ(A) = ||A|| ||A^{-1}||.
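These quantities are directly available in NumPy; a sketch with a matrix whose conditioning is known by construction (not part of the original notes):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1e-3]])

spectral = np.linalg.norm(A, 2)       # largest singular value sigma_max
frob = np.linalg.norm(A, 'fro')       # sqrt(tr(A* A))
kappa = np.linalg.cond(A, 2)          # sigma_max / sigma_min
```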
Courant-Fischer min-max
For Hermitian A with eigenvalues λ_1 ≥ … ≥ λ_n: λ_k = max_{dim V = k} min_{0 ≠ x ∈ V} ⟨Ax,x⟩/⟨x,x⟩. Weyl's inequalities bound eigenvalue perturbations.
Tensor product of vector spaces
V ⊗ W universal for bilinear maps; dim(V⊗W) = dim V · dim W. Matrix Kronecker product; quantum entanglement.
Perron-Frobenius theorem
Non-negative irreducible matrix has positive real eigenvalue (spectral radius) with positive eigenvector, simple, strictly dominant. Markov…
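The Markov-chain connection can be sketched with power iteration, which converges to the Perron eigenvector (stationary distribution); example not from the original notes:

```python
import numpy as np

# Column-stochastic transition matrix of an irreducible 2-state chain.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Power iteration: repeated application converges to the dominant
# (Perron) eigenvector, normalized to a probability vector.
v = np.array([1.0, 0.0])
for _ in range(200):
    v = P @ v
    v = v / v.sum()
```

For this chain the stationary distribution is (2/3, 1/3), with spectral radius 1 as Perron eigenvalue.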
Gershgorin circle theorem
All eigenvalues of A lie in union of disks D(a_{ii}, Σ_{j≠i} |a_{ij}|). Cheap eigenvalue localization; diagonal dominance implies…
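A direct numerical check of the disk covering (a sketch, not from the original notes):

```python
import numpy as np

A = np.array([[10.0, 1.0, 0.0],
              [0.5,  5.0, 0.5],
              [0.0,  1.0, 1.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums

# Every eigenvalue must lie in some disk D(a_ii, r_i).
eigs = np.linalg.eigvals(A)
covered = all(any(abs(lam - c) <= r for c, r in zip(centers, radii))
              for lam in eigs)
```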
Exterior / wedge product
Λ^k V = T^k V / (v⊗w+w⊗v relations). Determinant via ∧ⁿ. Forms, volume; differential forms in geometry.
Sylvester's law of inertia
For a real symmetric matrix A, the triple (p, q, r) counting positive, negative, and zero eigenvalues is invariant under real congruence A…
Polar decomposition
Every real or complex m × n matrix A with m ≥ n factors as A = UP, uniquely when A has full column rank, where U has orthonormal…
Smith normal form
Every m × n matrix A over a principal ideal domain R is equivalent via invertible row/column operations to a diagonal diag(d_1 | d_2 | … |…
Rational (Frobenius) canonical form
Every linear operator T on a finite-dimensional vector space over any field F is conjugate to a block-diagonal matrix of companion matrices…
Moore-Penrose pseudoinverse
For any real or complex matrix A, there is a unique A⁺ satisfying the four Penrose axioms. For full column rank: A⁺ = (A* A)^{-1} A*. …
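A sketch comparing the full-column-rank formula with the built-in pseudoinverse, and its least-squares role (not from the original notes):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])   # full column rank

pinv = np.linalg.pinv(A)
explicit = np.linalg.inv(A.T @ A) @ A.T   # (A* A)^{-1} A* for real A

# The minimum-norm least-squares solution of Ax = b is x = A+ b.
b = np.array([1.0, 2.0, 3.0])
x = pinv @ b
```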
Woodbury matrix identity
Expresses the inverse of a rank-k perturbation A + UCV of an invertible matrix A in terms of A⁻¹ plus a low-rank correction.…
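A numerical verification of the identity (A + UCV)^{-1} = A^{-1} − A^{-1}U(C^{-1} + VA^{-1}U)^{-1}VA^{-1}; a sketch with random factors, not part of the original notes:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2
A = np.eye(n) * 3.0                       # well-conditioned base matrix
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
# Low-rank correction: only a k x k system is inverted.
middle = np.linalg.inv(np.linalg.inv(C) + V @ Ainv @ U)
woodbury = Ainv - Ainv @ U @ middle @ V @ Ainv

direct = np.linalg.inv(A + U @ C @ V)
```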
Vandermonde determinant
The determinant of the Vandermonde matrix V with rows (1, x_i, x_i², …, x_i^{n−1}) equals the product of pairwise differences ∏_{i<j} (x_j − x_i).
Cauchy-Binet formula
For A of size m × n and B of size n × m with n ≥ m, det(AB) equals the sum over m-subsets S ⊆ [n] of det(A_S) · det(B^S), where A_S is A…
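A brute-force check of the formula on a small example (a sketch, not from the original notes):

```python
import numpy as np
from itertools import combinations

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])          # 2 x 3
B = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])               # 3 x 2

m, n = A.shape
lhs = np.linalg.det(A @ B)
# Sum over all m-subsets S of columns of A / rows of B.
rhs = sum(np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
          for S in map(list, combinations(range(n), m)))
```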