Lagrange Multiplier Calculator

Find constrained extrema of f(x,y) subject to a constraint h(x,y)=0 using the Lagrange multiplier method: ∇f = λ∇h and h=0. You may enter the constraint as an equation such as x^2 - y^2 = 6; it will be rewritten to zero form.

  • If you type = in the objective, only the left side is used.
  • Constraint equations like a=b are auto-rewritten to (a)-(b).
  • Start near an expected solution; tweak the guess if convergence fails.
  • Multiplier field: the initial value of λ.

Result

Equation Preview

Inputs rewritten as needed. We solve for (x,y,λ) with fₓ = λhₓ, fᵧ = λhᵧ, and h=0.

Solution

Different initial guesses can converge to different stationary points.

Verification

Residuals near zero indicate the conditions are satisfied.

Helpful Notes

  • Constraint equations like x^2 - y^2 = 6 are converted to x^2 - y^2 - 6.
  • You may write the objective with = too (left side is used).
  • Use functions like sin, cos, exp, log, sqrt.
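The rewriting described in these notes can be sketched as small helpers (hypothetical function names; assumes plain-text input, not the calculator's actual parser):

```python
def constraint_to_zero_form(expr: str) -> str:
    """Rewrite a constraint 'a = b' as '(a) - (b)'; pass through if no '='."""
    if "=" in expr:
        left, right = expr.split("=", 1)
        return f"({left.strip()}) - ({right.strip()})"
    return expr.strip()

def objective_expr(expr: str) -> str:
    """For an objective written with '=', keep only the left-hand side."""
    return expr.split("=", 1)[0].strip()

print(constraint_to_zero_form("x^2 - y^2 = 6"))  # (x^2 - y^2) - (6)
print(objective_expr("x*y = 0"))                 # x*y
```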

What is a Lagrange Multiplier Calculator?

A Lagrange Multiplier Calculator is a tool for optimizing an objective function subject to one or more equality constraints. It automates the classical method of introducing multipliers for each constraint, building the first-order system, and solving for stationary points that satisfy feasibility. The calculator reports candidate optima and, when requested, applies second-order tests (e.g., bordered Hessian) to classify maxima, minima, or saddle points. All equations are displayed in LaTeX and render responsively with MathJax or math.js.

About the Lagrange Multiplier Calculator

Given an objective \(f(\mathbf{x})\) with constraints \(g_i(\mathbf{x})=0\) for \(i=1,\dots,m\), the method searches for points where the gradient of \(f\) lies in the span of constraint gradients. This converts a constrained problem into solving simultaneous equations in \(\mathbf{x}\) and multipliers \(\lambda_i\). The tool supports symbolic entry of \(f\) and \(g_i\), numeric solving, multiple constraints, and optional inequality handling via KKT hints. It also checks constraint qualification issues (e.g., linearly dependent gradients) and warns when solutions may be degenerate or non-isolated.
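As a concrete sketch of the simultaneous-equations approach, here is a hand-rolled Newton iteration (not the calculator's actual solver) for the illustrative choice f = xy on the unit circle:

```python
import numpy as np

# Residual of the first-order system for f = x*y, g = x^2 + y^2 - 1
def F(z):
    x, y, lam = z
    return np.array([y - 2*lam*x,        # f_x - lambda*g_x
                     x - 2*lam*y,        # f_y - lambda*g_y
                     x**2 + y**2 - 1])   # feasibility g = 0

# Hand-coded Jacobian of F with respect to (x, y, lambda)
def J(z):
    x, y, lam = z
    return np.array([[-2*lam, 1.0, -2*x],
                     [1.0, -2*lam, -2*y],
                     [2*x, 2*y, 0.0]])

z = np.array([0.8, 0.8, 0.4])            # initial guess near a solution
for _ in range(20):
    z = z - np.linalg.solve(J(z), F(z))  # Newton step
print(z)   # converges to approx (0.7071, 0.7071, 0.5)
```

Starting from a different guess (e.g. negative coordinates) converges to a different stationary point, which is why the tool asks for an initial guess.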

How to Use this Lagrange Multiplier Calculator

  1. Enter the objective \(f(x_1,\dots,x_n)\) and each constraint \(g_i(x_1,\dots,x_n)=0\).
  2. Choose output: Stationary Points, Classify (second-order test), or KKT (inequalities).
  3. Provide initial guesses (optional) for numerical solvers or bounds if variables are restricted.
  4. Review solutions: \(\mathbf{x}^\star\), multipliers \(\lambda_i^\star\), feasibility residuals, and (if enabled) bordered-Hessian classification.
  5. Export the step-by-step equations for documentation or coursework.

Core Formulas (LaTeX for MathJax/math.js)

First-order conditions (m constraints): \[ \nabla f(\mathbf{x}^\star) = \sum_{i=1}^{m} \lambda_i \nabla g_i(\mathbf{x}^\star), \qquad g_i(\mathbf{x}^\star)=0 \ (i=1,\dots,m). \]

Single-constraint system: \[ \begin{cases} \nabla f(\mathbf{x})=\lambda \nabla g(\mathbf{x})\\ g(\mathbf{x})=0 \end{cases} \]
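This system can also be solved symbolically, e.g. with SymPy (the choice of f and g here is illustrative, not tied to the calculator's internals):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y                     # illustrative objective
g = x**2 + y**2 - 2           # constraint in zero form

# grad f = lambda * grad g, together with g = 0
system = [sp.Eq(sp.diff(f, x), lam*sp.diff(g, x)),
          sp.Eq(sp.diff(f, y), lam*sp.diff(g, y)),
          sp.Eq(g, 0)]
sols = sp.solve(system, [x, y, lam], dict=True)
for s in sols:
    print(s, '-> f =', f.subs(s))   # (1, 1) gives 2; (-1, -1) gives -2
```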

Bordered Hessian (classification, one constraint): \[ H_B= \begin{bmatrix} 0 & \nabla g^\top\\ \nabla g & \nabla^2_{\mathbf{x}}\mathcal{L} \end{bmatrix}, \quad \mathcal{L}=f-\lambda g; \quad \text{the signs of the trailing principal minors test max/min on the constraint manifold.} \] When the constraint is linear, \(\nabla^2_{\mathbf{x}}\mathcal{L}=\nabla^2 f\).
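A numeric sketch of the test, using the Hessian of the Lagrangian ℒ = f − λg for f = xy on the unit circle at the candidate (1/√2, 1/√2, 1/2) (illustrative values):

```python
import numpy as np

# Candidate from the unit-circle example: (x, y, lambda) = (1/sqrt(2), 1/sqrt(2), 1/2)
x = y = 1/np.sqrt(2)
lam = 0.5

grad_g = np.array([2*x, 2*y])
# Hessian of the Lagrangian L = x*y - lam*(x^2 + y^2 - 1):
# L_xx = -2*lam, L_xy = 1, L_yy = -2*lam
H_L = np.array([[-2*lam, 1.0],
                [1.0, -2*lam]])

# Assemble the (1+n) x (1+n) bordered Hessian
H_B = np.zeros((3, 3))
H_B[0, 1:] = grad_g
H_B[1:, 0] = grad_g
H_B[1:, 1:] = H_L

det_HB = np.linalg.det(H_B)
print(det_HB)   # ~8 > 0 -> local maximum on the circle (n = 2, m = 1)
```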

KKT (inequality \(h_j(\mathbf{x})\le 0\), maximization form): \[ \nabla f=\sum_i \lambda_i \nabla g_i+\sum_j \mu_j \nabla h_j,\; g_i=0,\; h_j\le0,\; \mu_j\ge0,\; \mu_j h_j=0. \] For minimization, the sign of the \(\mu_j\)-term flips.
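A minimal check of these conditions at a hand-picked candidate (maximizing x + y on the unit disk; the point and multiplier are supplied, not computed):

```python
import numpy as np

# Maximize f = x + y subject to h = x^2 + y^2 - 1 <= 0 (no equality constraints).
x = y = 1/np.sqrt(2)                 # candidate point (illustrative)
mu = 1/np.sqrt(2)                    # candidate multiplier (illustrative)

grad_f = np.array([1.0, 1.0])
grad_h = np.array([2*x, 2*y])
h = x**2 + y**2 - 1                  # active: h = 0 at the candidate

assert np.allclose(grad_f, mu*grad_h)   # stationarity
assert h <= 1e-12 and mu >= 0           # primal and dual feasibility
assert abs(mu*h) < 1e-12                # complementary slackness
print("KKT conditions hold at the candidate")
```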

Examples (Illustrative)

Example 1 — Maximize \(f(x,y)=xy\) on the unit circle

Constraint \(g(x,y)=x^2+y^2-1=0\). Conditions: \[ \begin{cases} y=2\lambda x\\ x=2\lambda y\\ x^2+y^2=1 \end{cases} \Rightarrow \lambda=\pm \tfrac{1}{2}. \] For \(\lambda=\tfrac12\): \(y=x\Rightarrow (x,y)=\pm\big(\tfrac{1}{\sqrt2},\tfrac{1}{\sqrt2}\big)\), both giving the maximum \(xy=\tfrac12\). For \(\lambda=-\tfrac12\): \(y=-x\Rightarrow (x,y)=\pm\big(\tfrac{1}{\sqrt2},-\tfrac{1}{\sqrt2}\big)\), both giving the minimum \(xy=-\tfrac12\).
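The system has four stationary candidates in total (each sign choice of the coordinates); a plain-Python check of stationarity and feasibility:

```python
import itertools, math

# Verify all four candidates of the unit-circle example against the system
s = 1/math.sqrt(2)
results = []
for x, y in itertools.product((s, -s), repeat=2):
    lam = y/(2*x)                        # from y = 2*lambda*x
    assert abs(x - 2*lam*y) < 1e-12      # second stationarity equation
    assert abs(x*x + y*y - 1) < 1e-12    # feasibility on the circle
    results.append((x, y, lam, x*y))
print(results)   # f = +1/2 at y = x, f = -1/2 at y = -x
```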

Example 2 — Minimize distance to the line \(x+y=1\)

Minimize \(f(x,y)=x^2+y^2\) subject to \(g(x,y)=x+y-1=0\). \[ 2x=\lambda,\; 2y=\lambda,\; x+y=1 \Rightarrow x=y=\tfrac12,\ f=\tfrac12. \]
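A quick check that this result agrees with the point-to-line distance formula:

```python
import math

# Solution of Example 2: nearest point on x + y = 1 to the origin
x = y = 0.5
lam = 2*x                            # from 2x = lambda
assert abs(2*y - lam) < 1e-12        # second stationarity equation
assert abs(x + y - 1) < 1e-12        # feasibility on the line

# Distance should equal |0 + 0 - 1| / sqrt(1^2 + 1^2) = 1/sqrt(2)
dist = math.sqrt(x*x + y*y)
assert abs(dist - 1/math.sqrt(2)) < 1e-12
print(dist)
```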

Example 3 — Utility maximization with a budget

Maximize \(u(x,y)=x^{1/2}y^{1/2}\) subject to \(ax+by=m\). \[ \frac{\partial u}{\partial x}=\frac{1}{2}\frac{y^{1/2}}{x^{1/2}}=\lambda a,\quad \frac{\partial u}{\partial y}=\frac{1}{2}\frac{x^{1/2}}{y^{1/2}}=\lambda b \] \(\Rightarrow ax=by\) and \(ax+by=m \Rightarrow x=\tfrac{m}{2a},\; y=\tfrac{m}{2b}.\)
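A spot-check of the closed form with concrete, illustrative values a = 2, b = 3, m = 12:

```python
import math

# Closed form from Example 3: x* = m/(2a), y* = m/(2b)
a, b, m = 2.0, 3.0, 12.0
x, y = m/(2*a), m/(2*b)              # x* = 3, y* = 2

lam = 0.5*math.sqrt(y/x)/a           # from du/dx = lambda*a
assert abs(0.5*math.sqrt(x/y) - lam*b) < 1e-12   # du/dy = lambda*b holds too
assert abs(a*x + b*y - m) < 1e-12                # budget is exactly spent
print(x, y)   # 3.0 2.0
```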

FAQs

What is a Lagrange multiplier?

A scalar (or set of scalars) that balances the gradient of the objective against constraint gradients at an optimum.

When should I use Lagrange multipliers?

Use them for smooth optimization with equality constraints when interior stationary points might exist.

Can the calculator handle multiple constraints?

Yes. It forms \(\nabla f=\sum_i \lambda_i \nabla g_i\) with all \(g_i(\mathbf{x})=0\) and solves the resulting system.
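For instance, a two-constraint system in three variables can be assembled and solved with SymPy (illustrative f, g1, g2):

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)
f = x + y + z                         # illustrative objective
g1 = x**2 + y**2 + z**2 - 1           # unit sphere
g2 = z                                # plane z = 0

# grad f = l1*grad g1 + l2*grad g2, plus both feasibility conditions
eqs = [sp.diff(f, v) - l1*sp.diff(g1, v) - l2*sp.diff(g2, v)
       for v in (x, y, z)] + [g1, g2]
sols = sp.solve(eqs, [x, y, z, l1, l2], dict=True)
print(sols)   # two candidates: (x, y, z) = ±(1/sqrt(2), 1/sqrt(2), 0)
```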

Does it support inequalities?

Use the KKT option: add multipliers \(\mu_j\ge0\) and complementary slackness \(\mu_j h_j=0\).

How do I check max vs. min?

Enable classification to inspect the bordered Hessian or evaluate \(f\) around the feasible manifold.

What if gradients are dependent?

The tool flags rank deficiency; constraint qualification may fail, so results require extra care.

Do I need initial guesses?

Symbolic solves may not; numeric solvers benefit from reasonable starting points for faster convergence.

Can it return saddle points?

Yes. Stationary feasible points can be maxima, minima, or saddles; classification distinguishes them.

Does scaling a constraint change the solution?

No. Solutions for \(\mathbf{x}\) are invariant; only multipliers rescale accordingly.
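This invariance is easy to confirm symbolically (a SymPy sketch with an illustrative f and g):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x*y                                   # illustrative objective
g = x**2 + y**2 - 1                       # unit-circle constraint

def stationary_points(con):
    """Solve grad f = lambda*grad con together with con = 0."""
    eqs = [sp.diff(f, v) - lam*sp.diff(con, v) for v in (x, y)] + [con]
    return sp.solve(eqs, [x, y, lam], dict=True)

sols_g  = stationary_points(g)
sols_3g = stationary_points(3*g)          # same feasible set, scaled constraint

pts_g  = {(s[x], s[y]) for s in sols_g}
pts_3g = {(s[x], s[y]) for s in sols_3g}
assert pts_g == pts_3g                                           # same (x, y)
assert {s[lam] for s in sols_3g} == {s[lam]/3 for s in sols_g}   # lambda rescales
print("solutions invariant; multipliers rescaled by 1/3")
```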
