Orthogonal Projection Calculator

Compute the projection of one vector onto another. Enter two vectors to instantly get the formula, the scalar projection, the projection vector, and the perpendicular component.

Example: 3, 4 (2D) or 1, 0, 3 (3D). Use commas to separate components. u and v must have the same number of components.
v cannot be the zero vector, since the formula divides by v·v. Spaces are OK; decimals are allowed.

Equation Preview

proj_v(u) = ((u·v)/(v·v)) v

Helpful Notes

  • Enter vectors as comma-separated lists (e.g., 3,4 or 1,0,3).
  • Vectors must be the same dimension.
  • Formula: proj_v(u) = ((u·v)/(v·v)) v.
  • Scalar projection (signed length along v): (u·v)/||v||.

Projection Vector proj_v(u)

Scalar Projection (signed length of u along v)

Perpendicular Component u − proj_v(u)

Dot Products & Magnitudes

Helpful notes: If v = 0 or the dimensions don't match, the projection is undefined. Check your inputs and try again.
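
For reference, the input and validation rules above can be sketched in a few lines of Python; this is an illustrative example assuming numpy, not the calculator's actual code:

```python
import numpy as np

def parse_vector(text: str) -> np.ndarray:
    """Parse a comma-separated component list such as '3, 4' or '1, 0, 3'."""
    return np.array([float(t) for t in text.split(",")])

def project(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """proj_v(u) = ((u·v)/(v·v)) v; requires matching dimensions and nonzero v."""
    if u.shape != v.shape:
        raise ValueError("u and v must have the same number of components")
    vv = v @ v
    if vv == 0.0:
        raise ValueError("v cannot be the zero vector (v·v is the denominator)")
    return (u @ v) / vv * v

u, v = parse_vector("3, 4"), parse_vector("1, 0")
p = project(u, v)
print(p)                            # projection vector: [3. 0.]
print((u @ v) / np.linalg.norm(v))  # scalar projection: 3.0
print(u - p)                        # perpendicular component: [0. 4.]
```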

What is an Orthogonal Projection Calculator?

An Orthogonal Projection Calculator finds the point on a line, plane, or higher-dimensional subspace that is closest to a given vector. Geometrically, it drops a perpendicular from the vector to the subspace, splitting the vector into a “parallel” component that lies inside the subspace and a “perpendicular” component orthogonal to it. Algebraically, the calculator evaluates projections onto a single direction, onto a plane defined by an orthonormal basis, or onto the column space of a full-rank matrix. It also returns the projection matrix, the orthogonal residual, and the distance (the norm of the residual), presenting all steps in readable LaTeX.

About the Orthogonal Projection Calculator

For a nonzero vector \(\mathbf{u}\), the projection of \(\mathbf{v}\) onto the line spanned by \(\mathbf{u}\) is

\[ \operatorname{proj}_{\mathbf{u}}(\mathbf{v})=\frac{\mathbf{u}^\top \mathbf{v}}{\mathbf{u}^\top \mathbf{u}}\;\mathbf{u}. \]
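
A minimal numpy sketch of this line-projection formula (the vectors here are hypothetical, and the code is illustrative, not the tool's implementation):

```python
import numpy as np

def proj_line(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project v onto the line spanned by a nonzero u: ((u^T v)/(u^T u)) u."""
    return (u @ v) / (u @ u) * u

u = np.array([1.0, 1.0])   # hypothetical direction vector
v = np.array([3.0, 1.0])
p = proj_line(u, v)
print(p)                   # [2. 2.]
print((v - p) @ u)         # 0.0: the residual is orthogonal to u
```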

If \(\mathbf{u}\) is a unit vector, the projection matrix is \(P=\mathbf{u}\mathbf{u}^\top\). For an orthonormal basis \(U=[\mathbf{u}_1\,\cdots\,\mathbf{u}_k]\), the projection onto \(\mathcal{S}=\operatorname{span}(U)\) is

\[ P = U U^\top,\qquad \operatorname{proj}_{\mathcal{S}}(\mathbf{v}) = P\,\mathbf{v},\qquad \mathbf{r}=\mathbf{v}-P\mathbf{v}\ \perp\ \mathcal{S}. \]
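
One way to obtain an orthonormal \(U\) in code is a QR factorization; a sketch assuming numpy, with hypothetical data:

```python
import numpy as np

# Orthonormal basis for a 2-D subspace of R^4: QR gives U with orthonormal columns.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
U, _ = np.linalg.qr(A)          # Col(U) = Col(A), columns orthonormal
P = U @ U.T                     # projector onto span(U)
v = np.array([1.0, 2.0, 2.0, 4.0])
r = v - P @ v                   # residual
print(np.allclose(U.T @ r, 0))  # True: r is orthogonal to span(U)
```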

For a general matrix \(A\in\mathbb{R}^{m\times n}\) with full column rank (its columns span the target subspace), the projector is

\[ P = A\,(A^\top A)^{-1}A^\top,\qquad \operatorname{proj}_{\operatorname{Col}(A)}(\mathbf{v})=P\,\mathbf{v},\qquad \|\mathbf{v}-P\mathbf{v}\|=\min_{\mathbf{w}\in\operatorname{Col}(A)}\|\mathbf{v}-\mathbf{w}\|. \]
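
In code it is safer to apply \((A^\top A)^{-1}\) through a linear solve than an explicit inverse; a sketch using the matrix from Example 3 below:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 2.0, 2.0])
# P = A (A^T A)^{-1} A^T, computed without inverting A^T A explicitly.
P = A @ np.linalg.solve(A.T @ A, A.T)
print(P @ v)                      # [7/6, 5/3, 13/6] ≈ [1.1667 1.6667 2.1667]
print(np.linalg.norm(v - P @ v))  # minimal distance from v to Col(A) ≈ 0.4082
```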

A plane with unit normal \(\mathbf{n}\) uses \(\operatorname{proj}_{\text{plane}}(\mathbf{v})=\mathbf{v}-(\mathbf{n}^\top\mathbf{v})\mathbf{n}\). The calculator also highlights the projector properties \(P^\top=P\) and \(P^2=P\), and it can report the least-squares solution \(\hat{\mathbf{x}}\) of \(A\mathbf{x}\approx\mathbf{b}\) via the normal equations \(A^\top A\hat{\mathbf{x}}=A^\top\mathbf{b}\), with \(\operatorname{proj}(\mathbf{b})=A\hat{\mathbf{x}}\).
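
A short sketch of the plane formula with a unit normal (hypothetical data, assuming numpy):

```python
import numpy as np

n = np.array([0.0, 0.0, 1.0])  # unit normal of the xy-plane
v = np.array([1.0, 2.0, 3.0])
p = v - (n @ v) * n            # proj_plane(v) = v - (n·v) n
print(p, n @ p)                # [1. 2. 0.] 0.0 (p lies in the plane)
```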

How to Use this Orthogonal Projection Calculator

  1. Choose a mode: onto a vector, onto a plane/orthonormal basis, or onto the column space of \(A\).
  2. Enter \(\mathbf{v}\) (the target) and the defining data: \(\mathbf{u}\), orthonormal columns \(U\), unit normal \(\mathbf{n}\), or matrix \(A\).
  3. Compute: the tool returns \(P\), the projection \(P\mathbf{v}\), residual \(\mathbf{r}=\mathbf{v}-P\mathbf{v}\), and distance \(\|\mathbf{r}\|\).
  4. (Optional) For least squares, it reports \(\hat{\mathbf{x}}=(A^\top A)^{-1}A^\top\mathbf{b}\) and the fitted vector \(A\hat{\mathbf{x}}\).
  5. Copy the steps and matrix forms for documentation or coursework.

Core Formulas (LaTeX)

Line (nonzero \(\mathbf{u}\), not necessarily unit): \[ \operatorname{proj}_{\mathbf{u}}(\mathbf{v})=\frac{\mathbf{u}^\top \mathbf{v}}{\mathbf{u}^\top \mathbf{u}}\mathbf{u},\quad P=\frac{\mathbf{u}\mathbf{u}^\top}{\mathbf{u}^\top\mathbf{u}}. \]

Orthonormal basis \(U\): \[ P=UU^\top,\quad \mathbf{v}=P\mathbf{v}+(I-P)\mathbf{v},\ (I-P)\mathbf{v}\perp \operatorname{span}(U). \]

General subspace (full column rank \(A\)): \[ P=A(A^\top A)^{-1}A^\top. \]

Plane with unit normal \(\mathbf{n}\): \[ \operatorname{proj}_{\text{plane}}(\mathbf{v})=\mathbf{v}-(\mathbf{n}^\top\mathbf{v})\mathbf{n}. \]

Least squares link: \[ A^\top A\hat{\mathbf{x}}=A^\top\mathbf{b},\quad \operatorname{proj}_{\operatorname{Col}(A)}(\mathbf{b})=A\hat{\mathbf{x}}. \]
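
The least-squares link can be checked numerically; a sketch assuming numpy, reusing the data from Example 3 below:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
x_hat = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations A^T A x = A^T b
print(x_hat)                                  # [7/6, 1/2] ≈ [1.1667 0.5]
print(A @ x_hat)                              # equals proj_{Col(A)}(b)
print(np.allclose(A.T @ (b - A @ x_hat), 0))  # True: residual is orthogonal to Col(A)
```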

Examples (Illustrative)

Example 1 — Onto a line in \(\mathbb{R}^3\)

\(\mathbf{u}=(1,2,2)\), \(\mathbf{v}=(2,1,0)\). \(\mathbf{u}^\top\mathbf{v}=4\), \(\mathbf{u}^\top\mathbf{u}=9\). \(\operatorname{proj}_{\mathbf{u}}(\mathbf{v})=\tfrac{4}{9}(1,2,2)=(\tfrac{4}{9},\tfrac{8}{9},\tfrac{8}{9})\). Residual \((\tfrac{14}{9},\tfrac{1}{9},-\tfrac{8}{9})\) with distance \(\sqrt{261}/9\approx1.795\).
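
These values are easy to reproduce; a quick numpy check (illustrative, not the calculator's code):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 1.0, 0.0])
p = (u @ v) / (u @ u) * u
r = v - p
print(p * 9)                                # [4. 8. 8.]   -> (4/9, 8/9, 8/9)
print(r * 9)                                # [14. 1. -8.] -> (14/9, 1/9, -8/9)
print(np.linalg.norm(r), np.sqrt(261) / 9)  # both ≈ 1.7951
```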

Example 2 — Onto the \(xy\)-plane

Orthonormal basis \(U=[\mathbf{e}_1,\mathbf{e}_2]\Rightarrow P=\operatorname{diag}(1,1,0)\). For \(\mathbf{v}=(1,2,3)\), \(P\mathbf{v}=(1,2,0)\), residual \((0,0,3)\), distance \(3\).
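
A three-line check of this example in numpy:

```python
import numpy as np

P = np.diag([1.0, 1.0, 0.0])  # projector onto the xy-plane
v = np.array([1.0, 2.0, 3.0])
print(P @ v, v - P @ v, np.linalg.norm(v - P @ v))  # [1. 2. 0.] [0. 0. 3.] 3.0
```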

Example 3 — Column space (least squares fit)

\(A=\begin{bmatrix}1&0\\1&1\\1&2\end{bmatrix}\), \(\mathbf{b}=\begin{bmatrix}1\\2\\2\end{bmatrix}\). \(A^\top A=\begin{bmatrix}3&3\\3&5\end{bmatrix}\), \(A^\top\mathbf{b}=\begin{bmatrix}5\\6\end{bmatrix}\). \(\hat{\mathbf{x}}=(A^\top A)^{-1}A^\top\mathbf{b}=\big(\tfrac{7}{6},\tfrac{1}{2}\big)\). Projection \(A\hat{\mathbf{x}}=\left(\tfrac{7}{6},\tfrac{5}{3},\tfrac{13}{6}\right)\).
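
The exact fractions can be confirmed with Python's fractions module; a verification sketch (the 2×2 elimination here is hand-rolled for exact arithmetic, not the calculator's method):

```python
from fractions import Fraction

A = [[Fraction(1), Fraction(0)],
     [Fraction(1), Fraction(1)],
     [Fraction(1), Fraction(2)]]
b = [Fraction(1), Fraction(2), Fraction(2)]

# Normal equations A^T A x = A^T b, solved exactly by Cramer's rule (2x2 case).
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
     (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]
print(x)                                             # [7/6, 1/2]
print([row[0] * x[0] + row[1] * x[1] for row in A])  # [7/6, 5/3, 13/6]
```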

FAQs

What’s the difference between projecting onto a vector and onto a subspace?

Projecting onto a vector uses a single direction; projecting onto a subspace combines several directions, summarized by a projection matrix \(P\).

How do I project onto a plane given its normal?

Normalize \(\mathbf{n}\) and use \(\mathbf{v}-(\mathbf{n}^\top\mathbf{v})\mathbf{n}\); for a non-unit normal, divide by \(\mathbf{n}^\top\mathbf{n}\) instead: \(\mathbf{v}-\frac{\mathbf{n}^\top\mathbf{v}}{\mathbf{n}^\top\mathbf{n}}\mathbf{n}\).
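
In code, the non-unit case looks like this (an illustrative sketch assuming numpy):

```python
import numpy as np

def proj_plane(v: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Project v onto the plane through the origin with any nonzero normal n."""
    return v - (n @ v) / (n @ n) * n

v = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 2.0])  # non-unit normal; the n·n division handles scaling
print(proj_plane(v, n))        # [1. 2. 0.]
```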

Why is the orthogonal projection the “closest” point?

Because \(P\mathbf{v}\) minimizes \(\|\mathbf{v}-\mathbf{w}\|\) over the subspace; the residual is orthogonal, giving the shortest distance.

What if my matrix \(A\) has dependent columns?

Use QR/SVD or the pseudoinverse \(A^+\); then \(P=A A^+\) still projects onto \(\operatorname{Col}(A)\).
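
A sketch of the pseudoinverse route, assuming numpy and a deliberately rank-deficient \(A\):

```python
import numpy as np

# A with dependent columns (column 3 = column 1 + column 2): A^T A is singular.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [1.0, 2.0, 3.0]])
P = A @ np.linalg.pinv(A)  # P = A A^+ still projects onto Col(A)
print(np.allclose(P @ P, P), np.allclose(P, P.T))  # True True
```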

Does this work over complex vectors?

Yes. Replace transposes with conjugate transposes: \(P=A(A^*A)^{-1}A^*\).
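
A quick complex-valued check, assuming numpy (the matrix is hypothetical):

```python
import numpy as np

A = np.array([[1.0, 1j],
              [1j, 1.0],
              [0.0, 1.0]])
Ah = A.conj().T                      # conjugate transpose A*
P = A @ np.linalg.solve(Ah @ A, Ah)  # P = A (A* A)^{-1} A*
print(np.allclose(P, P.conj().T))    # True: Hermitian
print(np.allclose(P @ P, P))         # True: idempotent
```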

What properties characterize a projection matrix?

Symmetric and idempotent: \(P^\top=P\) and \(P^2=P\). Eigenvalues are \(0\) or \(1\).
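
Both properties, and the eigenvalue claim, are easy to verify numerically; an illustrative numpy sketch:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])  # hypothetical direction
P = np.outer(u, u) / (u @ u)   # rank-1 projector onto span{u}
print(np.allclose(P.T, P), np.allclose(P @ P, P))  # True True
print(np.round(np.linalg.eigvalsh(P), 10))         # [0. 0. 1.]
```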
