Mastering Quadratic Programming: From Theory to Practice

Oct 20, 2024

Quadratic Programming (QP) is a powerful optimization technique that plays a crucial role in various fields, from finance to machine learning. In this comprehensive guide, we'll explore what QP is, why it's important, and how to solve QP problems using different methods.

What is Quadratic Programming?

Quadratic Programming is a type of mathematical optimization problem where we aim to minimize (or maximize) a quadratic objective function subject to linear constraints. The standard form of a QP problem is:

$$\min_x \frac{1}{2} x^T H x + c^T x$$

subject to: $Ax + b = 0$ and $Cx + d \leq 0$

Where:

  • $x$ is the vector of variables we're optimizing
  • $H$ is a symmetric $n \times n$ matrix
  • $c$ is an $n$-dimensional vector
  • $A$ and $C$ are matrices representing the equality and inequality constraints
  • $b$ and $d$ are vectors
💡

A QP is convex if the matrix $H$ is positive semidefinite (i.e., $H \geq 0$). Convex QPs are particularly important because any local minimum is also a global minimum.
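
As a quick check, positive semidefiniteness of $H$ can be verified numerically from its eigenvalues. A minimal NumPy sketch, using an illustrative matrix that is not taken from the text:

import numpy as np

# Check whether a symmetric matrix H is positive semidefinite,
# i.e. whether the corresponding QP is convex.
H = np.array([[4.0, 1.0],
              [1.0, 2.0]])                       # illustrative symmetric matrix

eigenvalues = np.linalg.eigvalsh(H)              # eigvalsh is intended for symmetric matrices
is_convex = bool(np.all(eigenvalues >= -1e-10))  # small tolerance for round-off
print("Eigenvalues:", eigenvalues, "-> convex QP:", is_convex)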

Why is Quadratic Programming Important?

QP has numerous applications across various domains:

  1. Finance: Portfolio optimization, risk management
  2. Machine Learning: Support Vector Machines (SVMs), least squares regression
  3. Control Theory: Model Predictive Control (MPC)
  4. Economics: Resource allocation, production planning
  5. Engineering: Structural design optimization

Solving Quadratic Programming Problems

Let's explore how to solve QP problems, starting with the simplest case and moving to more complex scenarios.

1. Unconstrained Quadratic Programming

The unconstrained QP problem has the form:

$$\min_x f(x) = \frac{1}{2} x^T H x + c^T x$$

Solution:

  1. Find the gradient: $\nabla f(x) = Hx + c$
  2. Set the gradient to zero: $Hx + c = 0$
  3. Solve for $x$: $x^* = -H^{-1}c$

This solution is a global minimum if $H$ is positive definite.
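
In code, this amounts to solving the linear system $Hx = -c$ rather than forming $H^{-1}$ explicitly. A minimal sketch with illustrative values for $H$ and $c$:

import numpy as np

# Unconstrained QP: minimize 1/2 x^T H x + c^T x
H = np.array([[4.0, 1.0],
              [1.0, 2.0]])                 # positive definite (illustrative values)
c = np.array([1.0, -1.0])

x_star = np.linalg.solve(H, -c)            # solve H x = -c instead of inverting H
print("Minimizer:", x_star)
print("Gradient at the minimizer (should be ~0):", H @ x_star + c)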

2. Constrained Quadratic Programming

For constrained QP problems, we need more sophisticated methods. But first, let's understand the optimality conditions.

KKT Conditions

The Karush-Kuhn-Tucker (KKT) conditions provide necessary conditions for optimality:

  1. Stationarity: $Hx^* + c + A^T\lambda^* + C^T\mu^* = 0$
  2. Primal Feasibility: $Ax^* + b = 0$ and $Cx^* + d \leq 0$
  3. Dual Feasibility: $\mu^* \geq 0$
  4. Complementary Slackness: $\mu_i^*(C_i x^* + d_i) = 0$ for all $i$

Here, $\lambda^*$ and $\mu^*$ are the Lagrange multipliers for the equality and inequality constraints, respectively.
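
For example, when only equality constraints are present (so $\mu^* = 0$), stationarity and primal feasibility combine into a single linear system, often called the KKT system:

$$
\begin{bmatrix} H & A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} x^* \\ \lambda^* \end{bmatrix}
=
\begin{bmatrix} -c \\ -b \end{bmatrix}
$$

Solving this one linear system yields both the optimizer and the multipliers; the active-set methods below exploit exactly this structure for each guess of the active set.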

Solution Methods

Two main approaches are used to solve constrained QPs:

a. Active-Set Methods

Active-set methods work by guessing which inequality constraints are active (i.e., hold with equality) at the optimum.

Steps:

  1. Start with an initial guess of the active set.
  2. Solve the equality-constrained QP (original equalities + active inequalities).
  3. Check KKT conditions:
    • If all multipliers for the active inequalities are non-negative, you're done.
    • If not, update the active set (drop a constraint with a negative multiplier, or add a newly violated one) and repeat; the example table and sketch below illustrate this.

| Iteration | Active Set | Action |
| --- | --- | --- |
| 1 | {1, 2} | Solve the QP with constraints 1 and 2 treated as equalities |
| 2 | {1, 2, 3} | The step violates constraint 3, so add it to the active set |
| 3 | {1, 3} | μ₂ < 0, so remove constraint 2 from the active set |
| 4 | {1, 3} | All KKT conditions satisfied; optimal solution found |
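
Below is a minimal sketch of the core subproblem rather than a full solver: it holds a guessed working set of inequality constraints as equalities, solves the corresponding KKT system, and reports the multipliers. The problem data ($H$, $c$, $C$, $d$) are illustrative assumptions.

import numpy as np

# Illustrative QP: minimize 1/2 x^T H x + c^T x  subject to  C x + d <= 0
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([-2.0, -5.0])
C = np.array([[1.0, 0.0],
              [0.0, 1.0]])
d = np.array([-0.8, -2.0])

def solve_working_set(active):
    """Solve the QP with the constraints in `active` held as equalities."""
    Ca, da = C[active], d[active]
    k = len(active)
    KKT = np.block([[H, Ca.T],
                    [Ca, np.zeros((k, k))]])
    rhs = np.concatenate([-c, -da])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:len(c)], sol[len(c):]       # primal point x, multipliers mu

x, mu = solve_working_set([0, 1])           # guess: constraints 1 and 2 are active
print("x:", x, "mu:", mu)
# If some mu_i < 0, drop that constraint from the working set and re-solve;
# if the step violates an inactive constraint, add that constraint instead.
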
b. Interior-Point Methods

Interior-point methods approach the solution from the interior of the feasible region.

Key idea: Replace inequality constraints with a barrier function.

Steps:

  1. Transform the problem: $\min_x f(x) - t \sum_{i=1}^m \log(-g_i(x))$, where the $g_i(x) \leq 0$ are the inequality constraints and $t > 0$ is the barrier parameter.
  2. Solve this problem for decreasing values of $t$.
  3. As $t \to 0$, the solution approaches the true optimum.
[Figure: Interior-point method visualization]

Interior-point strategies replace the non-smooth complementarity conditions with a smooth nonlinear equality.
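
To make this concrete, here is a minimal log-barrier sketch for a tiny QP with the bound constraints $x \geq 0$ (i.e., $g_i(x) = -x_i \leq 0$). The problem data are illustrative, and each barrier subproblem is handed to SciPy's general-purpose minimizer instead of the Newton-based steps that production interior-point solvers use:

import numpy as np
from scipy.optimize import minimize

# Illustrative QP: minimize 1/2 x^T H x + c^T x  subject to  x >= 0
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])
c = np.array([1.0, -5.0])

def barrier_objective(x, t):
    # Quadratic objective plus the log-barrier for the constraints -x_i <= 0
    if np.any(x <= 0):
        return np.inf                       # outside the interior of the feasible region
    return 0.5 * x @ H @ x + c @ x - t * np.sum(np.log(x))

x = np.array([1.0, 1.0])                    # strictly feasible starting point
for t in [1.0, 0.1, 0.01, 0.001]:           # decreasing barrier parameter
    result = minimize(barrier_objective, x, args=(t,), method="Nelder-Mead")
    x = result.x                            # warm-start the next subproblem

print("Approximate solution:", x)           # approaches (0, 2.5) as t -> 0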

Comparison of Methods

| Aspect | Active-Set Methods | Interior-Point Methods |
| --- | --- | --- |
| Efficiency for small problems | Generally more efficient | Can be less efficient due to overhead |
| Efficiency for large problems | Can struggle with many constraints | Highly efficient, especially with many constraints |
| Warm-starting | Easy to warm-start | More difficult to warm-start |
| Iterative behavior | Can jump to the solution in a finite number of steps | Approaches the solution smoothly from the interior |

Practical Example: Portfolio Optimization

Let's consider a practical example of using QP for portfolio optimization.

Problem: Allocate investments among nn assets to maximize expected return while minimizing risk.

Formulation:

  • $x_i$: Proportion of wealth invested in asset $i$
  • $\mu_i$: Expected return of asset $i$
  • $\Sigma$: Covariance matrix of asset returns
  • $\lambda$: Risk-aversion parameter that trades expected return against risk

Objective: $\max_x \left(\mu^T x - \lambda x^T \Sigma x\right)$

subject to: $\sum_{i=1}^n x_i = 1$ and $x_i \geq 0$ for all $i$

Here's how you might implement this in Python using cvxpy:

import cvxpy as cp
import numpy as np

# Problem data
n = 10  # Number of assets
mu = np.random.randn(n)  # Expected returns
Sigma = np.random.randn(n, n)
Sigma = Sigma.T @ Sigma  # Ensure Sigma is positive semidefinite
lambda_reg = 1.0  # Risk aversion parameter

# Define and solve the CVXPY problem
x = cp.Variable(n)
prob = cp.Problem(cp.Maximize(mu.T @ x - lambda_reg * cp.quad_form(x, Sigma)),
                  [cp.sum(x) == 1, x >= 0])
prob.solve()

# Print result
print("Optimal portfolio allocation:", x.value)
print("Expected return:", mu.T @ x.value)
print("Portfolio risk:", cp.quad_form(x, Sigma).value)

Conclusion

Quadratic Programming is a powerful tool in the optimization toolbox. Its ability to handle quadratic objectives with linear constraints makes it applicable to a wide range of real-world problems. Whether you're optimizing financial portfolios, training machine learning models, or designing control systems, understanding QP can be immensely beneficial.

As we've seen, there are different approaches to solving QPs, each with its strengths. The choice between active-set and interior-point methods often depends on the specific problem characteristics and scale.

🚀

Ready to dive deeper? Try implementing your own QP solver or explore advanced topics like Sequential Quadratic Programming (SQP) for solving general nonlinear programming problems!

Remember, while QP is a powerful technique, it's just one tool in the broader field of mathematical optimization. As you continue your journey in optimization, you'll encounter many other fascinating methods and problem types!

Authors

Federico Sarrocco
