Optimization problems with constraints or Lagrange functions

Some time ago I wrote about the problem of differentiation with constraints, and now I want to highlight one of its applications: constrained optimization.

Imagine a function of two variables f(x, y) subject to a constraint given by another function, g(x, y)=0. That is, we have a function f of two variables whose variables are, in turn, related through the equation g(x, y)=0.

Now suppose that f and g have continuous partial derivatives near a point P_{0} = (x_{0}, y_{0}) lying on a smooth curve C of equation g(x, y)=0. Suppose also that, restricted to the points of C, the function f has a local maximum or minimum at P_{0}, that P_{0} is not an endpoint of the curve C, and (and this is important) that \triangledown g(P_{0}) \neq 0.

What do we mean by all this? Simply that the function f has a maximum or minimum at some point lying on the curve defined by g. No more, no less.

The remaining conditions state that this point is not an endpoint of the curve defined by g (so the extremum is attained in the interior of the curve, not at a boundary) and, in addition, that the gradient of g at that point is nonzero.

Thanks to this we can introduce the Lagrange function, which extends the Lagrange multipliers seen earlier:

L(x,y,\lambda)=f(x,y)+\lambda g(x,y)

That is, the candidates for points on the curve g(x, y)=0 at which f(x, y) is a maximum or minimum are those that satisfy, from the equation above:

\frac{\partial L}{\partial x}=f_{x}(x,y)+\lambda g_{x}(x,y)=0
\frac{\partial L}{\partial y}=f_{y}(x,y)+\lambda g_{y}(x,y)=0

Which means that:

\triangledown f is parallel to \triangledown g

And, in addition, that:

\frac{\partial L}{\partial \lambda}=g(x,y)=0

That is just the constraint equation; and, of course, the gradient of g at the candidate point must be nonzero, as required above.

At the end of all this we are left with a system of more or less complicated equations (and here algebra comes to our aid) that we may or may not be able to solve explicitly, but whose solution is guaranteed to exist (even if we cannot compute it).
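As a rough sketch (my own example, not from the post), here is how that system can be set up and solved with SymPy for the illustrative choice f(x, y) = xy constrained to the unit circle g(x, y) = x^2 + y^2 - 1 = 0:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

f = x * y            # function to optimize (illustrative choice)
g = x**2 + y**2 - 1  # constraint g(x, y) = 0 (illustrative choice)

L = f + lam * g      # Lagrange function L(x, y, lambda)

# Stationarity system: dL/dx = 0, dL/dy = 0, dL/dlambda = g = 0
equations = [sp.diff(L, v) for v in (x, y, lam)]
candidates = sp.solve(equations, (x, y, lam), dict=True)

for sol in candidates:
    print(sol, '-> f =', f.subs(sol))
```

For this particular choice the solver returns four candidate points on the circle, two where f attains its maximum value and two where it attains its minimum.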

This solves the problem of optimizing the function f subject to the constraint g via the Lagrange function.

In the case of more than two dimensions, the only difference is that we will have more partial derivatives and, most likely, more constraints, which leads to the problem with more than one constraint.

In this case we would optimize the function f(x, y, z) subject to the constraints g(x, y, z)=0 and h(x, y, z)=0, since the Lagrange function is "the same", only longer:

L(x,y,z,\lambda,\mu)=f(x,y,z)+\lambda g(x,y,z)+\mu h(x,y,z)

And we would have to find the triples (x, y, z) satisfying g=0 and h=0 that optimize f. This means, at the very least, a system of 5 equations in 5 unknowns: x, y, z, \lambda , \mu . It simply requires more computation.
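Again as a sketch, with SymPy and illustrative functions of my own choosing (minimizing f(x, y, z) = x^2 + y^2 + z^2 subject to g = x + y + z - 1 = 0 and h = x - y = 0), the two-constraint system looks like this:

```python
import sympy as sp

x, y, z, lam, mu = sp.symbols('x y z lambda mu', real=True)

f = x**2 + y**2 + z**2   # function to optimize (illustrative choice)
g = x + y + z - 1        # first constraint  (illustrative choice)
h = x - y                # second constraint (illustrative choice)

L = f + lam * g + mu * h  # Lagrange function with two multipliers

# 5 equations, 5 unknowns: x, y, z, lambda, mu
equations = [sp.diff(L, v) for v in (x, y, z, lam, mu)]
candidates = sp.solve(equations, (x, y, z, lam, mu), dict=True)

for sol in candidates:
    print(sol, '-> f =', f.subs(sol))
```

Here the solver finds the single candidate (1/3, 1/3, 1/3), the point of the intersection of the two constraint surfaces closest to the origin.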
