
Does Every Finite-Dimensional Inner Product Space Have an Orthonormal Basis?

Published Aug 29, 2025 8 min read

Yes, every finite-dimensional inner product space has an orthonormal basis. This is a foundational result in linear algebra, and its proof relies on the Gram–Schmidt process, an algorithm that can convert any basis into an orthonormal one. The existence of an orthonormal basis simplifies many problems in linear algebra, as computations become significantly easier in this special coordinate system.

A formal definition of inner product spaces and orthonormal bases

To fully understand why an orthonormal basis is guaranteed to exist, we must first define the key terms.

Inner product space. An inner product space is a vector space V over a field of scalars (either the real numbers ℝ or the complex numbers ℂ) equipped with an inner product, denoted ⟨·,·⟩. An inner product is a function that takes two vectors and produces a scalar, satisfying the following properties for all vectors u, v, w ∈ V and all scalars c:

  • Linearity in the first argument: ⟨cu + v, w⟩ = c⟨u, w⟩ + ⟨v, w⟩.

  • Conjugate symmetry: ⟨u, v⟩ = ⟨v, u⟩*, where * denotes complex conjugation. For real vector spaces, this simplifies to ⟨u, v⟩ = ⟨v, u⟩.

  • Positive-definiteness: ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0.

The inner product gives us geometric concepts like length and angle.

  • Norm (length): The norm of a vector v is ‖v‖ = √⟨v, v⟩.

  • Orthogonality: Two vectors u and v are orthogonal if their inner product is zero: ⟨u, v⟩ = 0.

  • Normalized (unit vector): A vector v is normalized if its norm is 1: ‖v‖ = 1.
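These definitions can be made concrete with the standard dot product on ℝⁿ, the prototypical inner product. A minimal sketch (the function names `inner` and `norm` are our own):

```python
import math

def inner(u, v):
    # Standard dot product on R^n, the prototypical inner product.
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(v):
    # ||v|| = sqrt(<v, v>)
    return math.sqrt(inner(v, v))

u = [3.0, 4.0]
v = [-4.0, 3.0]

print(norm(u))      # 5.0, since sqrt(3^2 + 4^2) = 5
print(inner(u, v))  # 0.0: u and v are orthogonal
```

Here u has norm 5 and is orthogonal to v, since 3·(−4) + 4·3 = 0.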

Orthonormal basis. A basis is a set of vectors that are linearly independent and span the entire vector space. An orthonormal basis is a basis {e₁, e₂, …, eₙ} in which all vectors are mutually orthogonal and have unit norm. This can be written concisely using the Kronecker delta δᵢⱼ:

⟨eᵢ, eⱼ⟩ = δᵢⱼ = 1 if i = j, 0 if i ≠ j
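The Kronecker delta condition can be checked directly for the standard basis of ℝ³, which is the canonical orthonormal basis (a minimal sketch using the dot product as the inner product):

```python
def inner(u, v):
    # Standard dot product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

# Standard basis of R^3 -- the canonical orthonormal basis.
basis = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]

# <e_i, e_j> should equal the Kronecker delta: 1 if i == j, else 0.
for i, ei in enumerate(basis):
    for j, ej in enumerate(basis):
        expected = 1.0 if i == j else 0.0
        assert inner(ei, ej) == expected
```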

The proof using the Gram–Schmidt process

The proof of the existence of an orthonormal basis is constructive: it gives a procedure for building one, namely the Gram–Schmidt process. The only prerequisite is that every finite-dimensional vector space has a basis, which is a standard result in linear algebra.

Let's start with a finite-dimensional inner product space V of dimension n, and an arbitrary basis for V:

ℬ = {v₁, v₂, …, vₙ}

The Gram–Schmidt process transforms this basis into an orthogonal basis ℬ′ = {u₁, u₂, …, uₙ}. From there, it's a simple step to get an orthonormal basis by normalizing each vector.

Step 1: Construct the first orthogonal vector. The first vector of our new orthogonal basis, u₁, is simply the first vector of the original basis:

u₁ = v₁

Step 2: Construct the second orthogonal vector. The second vector, u₂, is found by taking v₂ and subtracting its projection onto u₁. This removes any component of v₂ that is parallel to u₁, leaving a vector that is guaranteed to be orthogonal to u₁:

u₂ = v₂ − proj_{u₁}(v₂) = v₂ − (⟨v₂, u₁⟩ / ⟨u₁, u₁⟩) u₁
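Steps 1 and 2 can be traced numerically. A minimal sketch in ℝ² with the dot product as the inner product (the vectors v₁, v₂ are chosen arbitrarily for illustration):

```python
def inner(u, v):
    # Standard dot product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

v1 = [1.0, 1.0]
v2 = [1.0, 0.0]

u1 = v1  # Step 1: u1 = v1

# Step 2: subtract from v2 its projection onto u1.
c = inner(v2, u1) / inner(u1, u1)  # <v2, u1> / <u1, u1> = 1/2
u2 = [v2i - c * u1i for v2i, u1i in zip(v2, u1)]

print(u2)             # [0.5, -0.5]
print(inner(u2, u1))  # 0.0: u2 is orthogonal to u1
```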

Step 3: Generalize to subsequent vectors. We repeat this process for each subsequent vector vₖ from the original basis. To find uₖ, we subtract from vₖ its projection onto each of the previously found orthogonal vectors u₁, …, uₖ₋₁:

uₖ = vₖ − Σᵢ₌₁ᵏ⁻¹ proj_{uᵢ}(vₖ) = vₖ − Σᵢ₌₁ᵏ⁻¹ (⟨vₖ, uᵢ⟩ / ⟨uᵢ, uᵢ⟩) uᵢ

Step 4: Normalize the orthogonal vectors. After performing this procedure for all n vectors, we have an orthogonal basis {u₁, u₂, …, uₙ}. To make it orthonormal, we simply normalize each vector by dividing it by its norm:

eₖ = uₖ / ‖uₖ‖

The set {e₁, e₂, …, eₙ} is the final orthonormal basis.
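The four steps above can be collected into one short routine. This is a minimal sketch for ℝⁿ with the dot product as the inner product; `gram_schmidt` is our own name, and the normalization of Step 4 is folded into the main loop rather than done in a second pass:

```python
import math

def inner(u, v):
    # Standard dot product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Turn a basis of R^n into an orthonormal basis (Steps 1-4)."""
    orthonormal = []
    for v in vectors:
        u = list(v)
        # Subtract the projection of v onto every previously built vector.
        for e in orthonormal:
            c = inner(v, e)  # since ||e|| = 1, the denominator <e, e> is 1
            u = [ui - c * ei for ui, ei in zip(u, e)]
        # Step 4 folded in: normalize each vector as we go.
        n = math.sqrt(inner(u, u))
        orthonormal.append([ui / n for ui in u])
    return orthonormal

basis = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
e = gram_schmidt(basis)

# Every pair should satisfy <e_i, e_j> = delta_ij (up to rounding).
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(inner(e[i], e[j]) - expected) < 1e-12
```

Note that the projection coefficients use the original vₖ, matching the classical formula in Step 3; an equally valid variant (modified Gram–Schmidt) projects the running remainder instead, which behaves better in floating-point arithmetic.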

Proof that the Gram–Schmidt process always works

Several key points ensure the Gram–Schmidt process produces a valid orthonormal basis:

  1. Non-zero intermediate vectors: A potential issue would be if one of the intermediate vectors uₖ turned out to be the zero vector. However, this is not possible. If uₖ = 0, it would mean that vₖ is a linear combination of the previous vectors u₁, …, uₖ₋₁. Since each uᵢ is a linear combination of v₁, …, vᵢ, this would imply that vₖ is a linear combination of v₁, …, vₖ₋₁, contradicting the fact that {v₁, …, vₙ} is a basis and therefore linearly independent.

  2. Preservation of span: For each k, the subspace spanned by the first k vectors of the original basis, span{v₁, …, vₖ}, is the same as the subspace spanned by the first k vectors of the orthonormal basis, span{e₁, …, eₖ}. Since the entire original basis spans the vector space V, the resulting orthonormal set must also span V.

  3. Linear independence: Any set of non-zero orthogonal vectors is linearly independent. Since the Gram–Schmidt process produces a set of non-zero vectors that are mutually orthogonal, they must be linearly independent.
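Point 1 can be seen numerically: feed the process linearly dependent vectors (which a basis never contains) and the second intermediate vector collapses to zero. A minimal sketch with the dot product on ℝ²:

```python
def inner(u, v):
    # Standard dot product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

v1 = [1.0, 2.0]
v2 = [2.0, 4.0]  # deliberately dependent: v2 = 2 * v1

u1 = v1
c = inner(v2, u1) / inner(u1, u1)  # projection coefficient = 2
u2 = [v2i - c * u1i for v2i, u1i in zip(v2, u1)]

print(u2)  # [0.0, 0.0]: the process breaks down, exactly as the proof predicts
```

With a genuine basis as input, linear independence rules this collapse out, so normalization in Step 4 never divides by zero.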

Significance and consequences

The existence of an orthonormal basis has profound implications for the study of finite-dimensional inner product spaces.

  • Simplification of inner products: In an orthonormal basis, the inner product of any two vectors can be calculated as the standard dot product of their coordinate vectors. For example, if x = Σᵢ xᵢeᵢ and y = Σᵢ yᵢeᵢ, then ⟨x, y⟩ = Σᵢ xᵢȳᵢ (where the bar denotes complex conjugation and can be dropped in the real case). This simplifies many computations.

  • Coordinate representation: The coordinates of any vector v with respect to an orthonormal basis {eᵢ} are easily found using inner products: v = Σᵢ₌₁ⁿ ⟨v, eᵢ⟩ eᵢ.

  • Orthogonal projections: The orthogonal projection of a vector v onto a subspace spanned by an orthonormal set {e₁, …, eₘ} is also greatly simplified: it is just Σᵢ₌₁ᵐ ⟨v, eᵢ⟩ eᵢ.

  • Isometry to Euclidean space: Any finite-dimensional inner product space is "geometrically identical" (or isometric) to ℝⁿ (or ℂⁿ in the complex case) equipped with the standard dot product. This means that once an orthonormal basis is found, the abstract inner product space can be treated as a concrete Euclidean space.
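The first two consequences above can be checked together in ℝ². A minimal sketch using the rotated orthonormal basis e₁ = (1, 1)/√2, e₂ = (1, −1)/√2 (chosen for illustration):

```python
import math

def inner(u, v):
    # Standard dot product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

s = 1.0 / math.sqrt(2.0)
e = [[s, s], [s, -s]]  # an orthonormal basis of R^2

v = [3.0, 1.0]

# Coordinate representation: the i-th coordinate is <v, e_i>.
coords = [inner(v, ei) for ei in e]

# Reconstruct v = sum_i <v, e_i> e_i.
recon = [sum(coords[i] * e[i][j] for i in range(2)) for j in range(2)]
assert all(abs(rj - vj) < 1e-12 for rj, vj in zip(recon, v))

# Simplification of inner products: <v, w> equals the dot
# product of the coordinate vectors in the orthonormal basis.
w = [1.0, 2.0]
w_coords = [inner(w, ei) for ei in e]
assert abs(inner(v, w) - inner(coords, w_coords)) < 1e-12
```

The same check would fail for a basis that is merely linearly independent but not orthonormal, which is precisely why this coordinate system is so convenient.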
