Convex Optimization Problems

Lagrange Multiplier Method

Lagrangian Function

The Lagrangian provides a formulation of constrained optimization (for $C^1$ problems), but solving it directly in high dimensions is computationally expensive, so one usually turns to the dual problem instead.
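As a reference sketch (the notation $f$, $g_i$, $h_j$ is assumed here, not taken from the post): for a problem $\min f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, the Lagrangian is

\[\mathcal{L}(x, \lambda, \nu) = f(x) + \sum_i \lambda_i g_i(x) + \sum_j \nu_j h_j(x), \qquad \lambda_i \ge 0.\]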

Slater’s Condition

Slater’s condition guarantees strong duality: the optimal value of the primal problem equals that of the dual, so solving the primal can be reduced to solving the dual (sufficiency). It also guarantees the existence of a saddle point of the Lagrangian.
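Concretely, with the notation assumed above, Slater’s condition requires a strictly feasible point:

\[\exists\, \tilde{x} \in \operatorname{relint}(X): \quad g_i(\tilde{x}) < 0 \ \ \forall i, \qquad h_j(\tilde{x}) = 0 \ \ \forall j.\]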

KKT Conditions

The KKT conditions guarantee that a saddle point is exactly the solution we are looking for (sufficiency).
For a convex problem, the KKT conditions are both necessary and sufficient for optimality.

\[\inf_{x \in X} \sup_{\lambda} \mathcal{L}(x,\lambda) = \sup_{\lambda} \inf_{x \in X} \mathcal{L}(x,\lambda)\]
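For reference, with the notation assumed above, the KKT conditions at a point $x^*$ with multipliers $\lambda^*$ read:

\[\begin{aligned}
&\nabla_x \mathcal{L}(x^*, \lambda^*) = 0 && \text{(stationarity)} \\
&g_i(x^*) \le 0 && \text{(primal feasibility)} \\
&\lambda_i^* \ge 0 && \text{(dual feasibility)} \\
&\lambda_i^* \, g_i(x^*) = 0 && \text{(complementary slackness)}
\end{aligned}\]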

PS: When the Lagrangian is convex in $x$, a local minimum is also a global minimum.

Dual Problem

The key to the dual problem is to express the primal variables in terms of the dual variables, then substitute this relationship back into the Lagrangian to obtain the dual function. The relationship comes from the stationarity condition:

\[\frac{\partial \mathcal{L}}{\partial x} = 0.\]
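As an illustrative sketch (the problem and its numbers are assumed here, not taken from the post): for $\min x^2$ subject to $x \ge 1$, stationarity $\partial\mathcal{L}/\partial x = 0$ gives $x = \lambda/2$; substituting back yields the dual function $g(\lambda) = \lambda - \lambda^2/4$, whose maximum over $\lambda \ge 0$ matches the primal optimum:

```python
# Assumed toy problem: minimize x^2 subject to x >= 1, i.e. constraint 1 - x <= 0.
# Lagrangian: L(x, lam) = x**2 + lam * (1 - x), with lam >= 0.
# Stationarity dL/dx = 2x - lam = 0 gives x = lam / 2.

def dual(lam):
    x = lam / 2.0                      # primal variable expressed via the dual variable
    return x**2 + lam * (1 - x)        # g(lam) = lam - lam**2 / 4 after substitution

# Maximize the concave dual over lam >= 0 with a coarse grid search.
lams = [i / 1000.0 for i in range(4001)]   # grid over [0, 4]
best_lam = max(lams, key=dual)

print(best_lam, dual(best_lam))  # → 2.0 1.0, matching the primal optimum f(1) = 1
```

Here strong duality holds (Slater’s condition is satisfied by, e.g., $\tilde{x} = 2$), so the dual maximum equals the primal minimum with zero duality gap.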