
Constrained Optimization: The Method of Lagrange Multipliers

Equality Constraints: Method of Lagrange Multipliers

A company manufactures new phones that are projected to take the market by storm. The two main input components of the new phone are the circuit board and the LCD Touchscreen that make the phone faster, smarter, and easier to use. The number of phones to be produced, P, is estimated to be P = 250 a^(1/4) b^(1/2), where a and b are the number of circuit board production hours and the number of LCD Touchscreen production hours available, respectively. Such a formula is known to economists as a Cobb-Douglas function. Our laborers are paid by the type of work they do: the circuit board labor cost is $5 an hour and the LCD Touchscreen labor cost is $8 an hour. What is the maximum number of phones that can be made if the company has $175,000 allocated for these components in the short run?

Problems such as this can be modeled using constrained optimization. We begin our discussion with optimization with equality constraints, then we move to optimization with inequality constraints in the next section.

Lagrange multipliers[1] can be used to solve nonlinear optimization problems (NLPs) in which all the constraints are equations. Consider the NLP given by

    Maximize (or Minimize) z = f(x_1, x_2, ..., x_n)
    subject to g_i(x_1, x_2, ..., x_n) = b_i,  i = 1, 2, ..., m,    (4.2)

where m < n.

We can build an equality constrained model for our phone problem: maximize the number of phones made using all available production hours.

Lagrange Multipliers: Introduction and Basic Theory

In order to solve NLPs in the form of (4.2), we associate a Lagrange multiplier λ_i with the ith constraint and form the Lagrangian function

    L(x, λ) = f(x) + Σ_{i=1}^{m} λ_i (g_i(x) - b_i).    (4.3)

The computational procedure for Lagrange multipliers requires that all the partials of the Lagrangian function (4.3) equal zero. The partials all equaling zero at x* form the necessary conditions that x* is a solution to the NLP; i.e., these conditions are required for x = (x_1, x_2, ..., x_n) to be a solution to (4.2). We have

Proposition. Lagrange Multiplier Necessary Conditions.

For x* to be a solution of the Lagrangian function, all partials must satisfy

    ∂L/∂x_j = 0,  j = 1, ..., n,   and   ∂L/∂λ_i = 0,  i = 1, ..., m,

at x*.

The points x we will consider as candidates for optimality are regular points.

Definition. Regular Point.

x is a regular point iff the set {∇g_i(x), i = 1, ..., m} is linearly independent.

The main theorem for Lagrange's technique is

Theorem. Lagrange Multipliers.

Let x* be a point satisfying the Lagrange multiplier necessary conditions. Then

a. If f is concave and each g_i is linear, then x* is an optimal solution of the maximization problem (4.2).

b. If f is convex and each g_i is linear, then x* is an optimal solution of the minimization problem (4.2).

Previously, we used the Hessian matrix to determine if a function was convex, concave, or neither. Also note that the theorem limits constraints to linear functions.

What if the constraints are nonlinear? We can use the bordered Hessian in the sufficient conditions. For a bivariate Lagrangian function with one constraint,

    L(x, y, λ) = f(x, y) + λ(g(x, y) - b),

define the bordered Hessian matrix as

    BdH = | 0     g_x   g_y  |
          | g_x   L_xx  L_xy |
          | g_y   L_yx  L_yy |

The determinant of this bordered Hessian matrix is

    det(BdH) = 2 g_x g_y L_xy - g_x^2 L_yy - g_y^2 L_xx.

The sufficient condition for a maximum in the bivariate case with one constraint is that the determinant of the bordered Hessian is positive when evaluated at the critical point.

The sufficient condition for a minimum in the bivariate case with one constraint is that the determinant of the bordered Hessian is negative when evaluated at the critical point.
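
As a check on these formulas, here is a minimal Maple sketch, with f and g left unassigned, that builds the bordered Hessian symbolically and expands its determinant.

> L := f(x,y) + lambda*(g(x,y) - b):        # bivariate Lagrangian, one constraint
> gx := diff(g(x,y), x):  gy := diff(g(x,y), y):
> BdH := Matrix([[0,  gx,           gy],
>                [gx, diff(L,x,x),  diff(L,x,y)],
>                [gy, diff(L,y,x),  diff(L,y,y)]]):
> simplify(LinearAlgebra:-Determinant(BdH));
      # expands to 2*gx*gy*L_xy - gx^2*L_yy - gy^2*L_xx (in D-operator notation)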

If x is a regular point and g_i(x) = b_i for each i (all constraints are satisfied at x), then

    {y : ∇g_i(x) · y = 0, i = 1, ..., m}

defines a plane tangent to the feasible region at x. (The '·' is the usual dot product.)

Lemma.

If x is regular and ∇g_i(x) · y = 0 for i = 1, ..., m, then ∇f(x) · y = 0.

Note that the Lagrange multiplier conditions are exactly the same for a minimization problem as for a maximization problem; this is the reason these conditions alone are not sufficient. A given solution can be either a maximum or a minimum. In order to determine whether the point found is a maximum, a minimum, or a saddle point, we will use the Hessian.

The value of the Lagrange multiplier itself has an important modeling interpretation. The multiplier λ is the "shadow price" of a scarce resource. Thus, λ_i is the shadow price of the ith constraint. If the right-hand side of the ith constraint is increased by a small amount, say Δ, then the optimal objective value will change by approximately λ_i · Δ. We will illustrate shadow prices both graphically and computationally.

Graphical Interpretation of Lagrange Multipliers

We can best understand the method of Lagrange multipliers by studying its geometric interpretation. This geometric interpretation involves the gradients of both the function and the constraints. Initially, consider only one constraint, g(x) = b; then the Lagrangian function simplifies to

    L(x, λ) = f(x) + λ(g(x) - b).

The solution is the point x where the gradient vector ∇g(x) is perpendicular to the contours of the surface. The gradient vector ∇f always points in the direction in which f increases fastest. At either a maximum or a minimum, this direction must be perpendicular to the contour lines of f's surface S. Thus, since both ∇f and ∇g point along the same perpendicular line, ∇f = λ∇g. Further, g's curve must be tangent to f's contours at optimal points. See Figure 4.14.

The geometrical arguments are similar for the case of multiple constraints.


FIGURE 4.14: Lagrange Multipliers Geometrically: One Equality Constraint

Let’s preview a graphical solution to an example.

Generate a contour plot of z = f(x) with Maple, and overlay the single constraint onto the contour plot. See Figure 4.15. What information can we obtain from this graphical representation? First, we note that the unconstrained optimum does not lie on the constraint. We estimate the unconstrained maximum as (x*, y*) = (2.3, 1.3). The optimal constrained solution lies at the point where the constraint is tangent to a contour of z = f(x). This point is approximately (1.8, 1.0) on the graph. We see clearly that the constraint does not pass through the unconstrained maximum; thus, it can be modified/adjusted (if feasible) until the line passes through the unconstrained solution. At that point, we would no longer add (or subtract) any more constrained resources (see Figure 4.16). Valuable insights about the problem come from plotting the information, when possible.


FIGURE 4.15: Contour Plot of / with Linear Constraint g

Interpreting the value of A as a “shadow price” leads us to consider what happens when the amount of a resource governed by a constraint is changed. Figure 4.16 shows this concept graphically.


FIGURE 4.16: Resource Constraint g is Increased by Constant a

Let’s now turn to the calculations.

Lagrange Multipliers Computations with Maple

The set of equations in (4.2), pg. 163, gives a system of n + m equations in the n + m unknowns {aq, Ay}. Generally speaking, this system presents a very difficult problem to solve without a computer except for simple problems. Also, since the Lagrange Multipliers are necessary conditions only, not sufficient, we may find solutions (aq, Ay) that are not optimal for our specific NLP. We need to be able to classify the points found in the solution. Commonly used methods for determining optimality include

a. the definiteness of the Hessian matrix

b. the definiteness of the bordered Hessian via det(BdH)

We'll illustrate these, when feasible, in the following examples with Maple.

Example 4.19. Lagrange Multipliers with Maple.

First, define f and g, and then display a plot.
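
The worksheet's original objective is not reproduced here, so the sketches in this example use a hypothetical stand-in: a concave quadratic with its unconstrained maximum at (2.3, 1.3), the estimate read off Figure 4.15, together with the constraint 3x + y = 6 used later in this example.

> restart;
> f := (x, y) -> 11 - 2*(x - 2.3)^2 - (y - 1.3)^2:   # hypothetical stand-in objective
> g := (x, y) -> 3*x + y:  b := 6:                   # constraint g(x,y) = b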

For the plot, we'll embed the contour plot into a 3D plot to make a better visual representation of f. To embed the plot, use the transform command from the plottools package.

Then T will take a point (x,y) on a 2D contour plot and embed it in a 3D plot on the plane z = —500.

Now, put the graphs together.
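
A sketch of the assembled display, again using the stand-in f:

> C2D := plots:-contourplot(f(x, y), x = 0 .. 4, y = -1 .. 4, contours = 15):
> T := plottools:-transform((x, y) -> [x, y, -500]):   # push the 2D plot to z = -500
> plots:-display(plot3d(f(x, y), x = 0 .. 4, y = -1 .. 4), T(C2D));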

Rotate the 3D figure and inspect it from many different perspectives. Being able to rotate a 3D image is an incredibly useful feature.

Back to the computations.

The Lagrangian function is L(x, y, λ) = f(x, y) + λ(g(x, y) - b).

Calculate the system of necessary conditions.

Solve the system NecessCond to find potential optimal points.
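
With the stand-in objective, the computation runs as follows; only the pattern, not the numbers, matches the text's example.

> L := f(x, y) + lambda*(g(x, y) - b):
> NecessCond := {diff(L, x) = 0, diff(L, y) = 0, diff(L, lambda) = 0}:
> solve(NecessCond, {x, y, lambda});
      # {lambda = -0.80, x = 1.70, y = 0.90} for the stand-in f;
      # the sign of lambda flips if the Lagrangian uses b - g(x,y) instead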

Explain why these two values are equal!

It will be no surprise that Maple has several commands for finding solutions to a constrained optimization problem; choose which command to use depending on the form of the output needed. Table 4.5 shows the most commonly used commands.

TABLE 4.5: Maple Commands for Constrained Optimization

Package: Student[MultivariateCalculus]
Command: LagrangeMultipliers
Basic Form: LagrangeMultipliers(ObjectiveFcn, [constraints], [vars], options)
Options: return a more detailed report and/or plots

Package: Optimization
Command: NLPSolve
Basic Form: NLPSolve(ObjectiveFcn, [constraints], options)
Options: set solving method, starting point, detailed output, etc.

Package: Optimization
Command: Maximize / Minimize
Basic Form: Maximize(ObjectiveFcn, [constraints], options)
            Minimize(ObjectiveFcn, [constraints], options)
Options: recommended: set a starting point

Remember, in "real-world problems," a critical element of the solution using the Lagrange multiplier method is the value and interpretation of the multiplier λ.

Using LagrangeMultipliers
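
A sketch of the call, continuing the stand-in f, g, and b defined earlier; output = detailed reports the optimizer and the multiplier, while output = plot draws the tangency picture.

> with(Student[MultivariateCalculus]):
> LagrangeMultipliers(f(x, y), [g(x, y) - b], [x, y], output = detailed);
> LagrangeMultipliers(f(x, y), [g(x, y) - b], [x, y], output = plot);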

The plot output can be very useful.

Using NLPSolve

Constraints must now be written as equations (or inequalities).
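
A sketch with the stand-in problem; the constraint is passed as an equation, and the maximize option is set since NLPSolve minimizes by default.

> Optimization:-NLPSolve(f(x, y), {g(x, y) = b}, maximize);
      # returns the optimal value and point, but no multiplier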

We've lost the value of λ since NLPSolve uses a different method. Trying to use the Lagrangian function as the objective function doesn't capture the correct value.

Using Maximize
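
A sketch of the corresponding Maximize call for the stand-in problem; a starting point is recommended for nonlinear problems.

> Optimization:-Maximize(f(x, y), {g(x, y) = b}, initialpoint = {x = 1, y = 1});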

Still no λ.

And we did not capture the correct value of λ again.

If we need complete information, it's best to use either direct computation of the method or the LagrangeMultipliers function to solve these problems. Combine the information we've found. The solution is f(x*, y*) = 10.4457 with λ = 0.7609.

We have a solution, but we need to know whether this solution represents a maximum or a minimum.

We can use either the Hessian or the bordered Hessian to justify that we have found the correct solution to our constrained optimization problem.
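
For the stand-in f, both checks are immediate; the bordered Hessian uses g_x = 3 and g_y = 1 from the linear constraint.

> H := Student[VectorCalculus]:-Hessian(f(x, y), [x, y]);
      # Matrix([[-4, 0], [0, -2]]): negative definite
> BdH := Matrix([[0, 3, 1], [3, -4, 0], [1, 0, -2]]):
> LinearAlgebra:-Determinant(BdH);
      # 22 > 0, so the critical point is a maximum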

The Hessian is negative definite for all values of x, so the regular point, also called the stationary point, x* is a maximum.

The bordered Hessian gives the same result.

Since the determinant is positive, we have found the maximum at the critical point.

Either method works in this example to determine that we have found the maximum for our constrained optimization problem.

Now, let's interpret the shadow price λ = 0.76. If the right-hand side of the constraint is increased by a small amount Δ, then the objective function will increase by approximately λ · Δ = 0.76 · Δ. Since this is a maximization problem, we would add to the restricting resource if possible because it would improve the value of the objective function.

From a graph, it can be seen that the incremental change must be small or the objective function could begin to decrease. Look back at Figure 4.16. If we increase the right side of g(x) = b by one unit so the constraint becomes 3x + y = 7, the solution at the new point (x**, y**) should yield a new maximum functional value f(x**, y**) approximately equal to the old maximum plus λ times the change Δ; that is, f(x**, y**) ≈ f(x*, y*) + λ · Δ, here 10.4457 + 0.7609 = 11.2066. In actuality, changing the constraint yields a new solution of 11.04347826. (Verify this!) The actual increase is about 0.5978.

Example 4.20. Lagrange Multipliers with Multiple Constraints.

Minimize w = x^2 + y^2 + 3z subject to x + y = 3 and x + 3y + 2z = 7.

First, we directly solve the problem using Maple.
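
A sketch of the direct call using the Optimization package:

> restart;
> w := x^2 + y^2 + 3*z:
> Optimization:-Minimize(w, {x + y = 3, x + 3*y + 2*z = 7});
      # [4.875, [x = 0.75, y = 2.25, z = -0.25]]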

Now, define the Lagrangian function L and find its gradient.

Build the system of equations and solve it to find the potential solution.
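
A sketch of the multiplier computation; λ and μ are attached to the two constraints in the (g - b) form used above.

> L := w + lambda*(x + y - 3) + mu*(x + 3*y + 2*z - 7):
> sys := {seq(diff(L, v) = 0, v in [x, y, z, lambda, mu])}:
> solve(sys, {x, y, z, lambda, mu});
      # {lambda = 0, mu = -3/2, x = 3/4, y = 9/4, z = -1/4}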

Check the solution! (We'll use the “long-form,” that is “full name,” of the commands so as to not load the packages.)
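
One way to run the check with full package names, as a sketch:

> H := Student[VectorCalculus]:-Hessian(w, [x, y, z]);
      # Matrix([[2, 0, 0], [0, 2, 0], [0, 0, 0]])
> LinearAlgebra:-IsDefinite(H, query = 'positive_semidefinite');
      # true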

This Hessian is always positive semi-definite. The function is convex, and so our critical point is a minimum.

Now, we'll solve the problem using Maple's LagrangeMultipliers function.
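
A sketch of the call; the constraints are given as expressions equal to zero.

> Student[MultivariateCalculus]:-LagrangeMultipliers(w,
>       [x + y - 3, x + 3*y + 2*z - 7], [x, y, z], output = detailed);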

The same Hessian shows the answer is our desired minimum.

Interpret the shadow prices we found, λ = 0 and μ = -3/2. If we only had the funds to increase one of the two constraint resources, which one should we choose? Since the shadow price of the first constraint is 0, we expect no improvement in the objective function's value; we would not spend extra funds to increase the first resource. Since the shadow price of the second constraint is -1.5, we expect the objective function's value to change by -1.5 · Δ (improving the objective since we are minimizing!) if we increase that resource by Δ; we would spend extra funds to increase the second resource as long as the cost of increasing resource 2 was less than 1.5 · Δ. Why does the cost need to be less than 1.5 · Δ?

Applications using Lagrange Multipliers

Many applied constrained optimization problems can be solved with Lagrange multipliers. We’ll start by revisiting the opening problem of this section (from page 162).

Example 4.21. The Cobb-Douglas Function.

A company manufactures new phones that are projected to take the market by storm. The two main input components of the new phone are the circuit board and the LCD Touchscreen that make the phone faster, smarter, and easier to use. The number of phones to be produced, P, is estimated to be P = 250 a^(1/4) b^(1/2), where a and b are the number of circuit board production hours and the number of LCD Touchscreen production hours available, respectively. This type of formula is known to economists as a Cobb-Douglas production function.[2] Laborers are paid by the type of work they do: the circuit board labor cost is $5 an hour and the LCD Touchscreen labor cost is $8 an hour. What is the maximum number of phones that can be made if the company has $175,000 allocated for these components in the short run?

Let’s use Maple.
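
A sketch of the computation; the multiplier is attached to the budget constraint 5a + 8b = 175000 in the (g - b) form, so it comes out negative here, and its magnitude (phones gained per extra budget dollar) is the shadow price.

> restart;
> P := 250 * a^(1/4) * b^(1/2):           # Cobb-Douglas production function
> L := P + lambda*(5*a + 8*b - 175000):   # multiplier on the budget constraint
> sys := {diff(L, a) = 0, diff(L, b) = 0, diff(L, lambda) = 0}:
> sol := fsolve(sys, {a = 1000 .. 50000, b = 1000 .. 50000, lambda = -5 .. 0});
      # {a = 11666.67, b = 14583.33, lambda = -1.34}
> eval(P, sol);
      # 313765.5 phones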

Of course, LagrangeMultipliers gives the same results.

We find we can make ≈ 313,765 phones using 11,666.67 production hours for circuit boards and 14,583.33 production hours for LCD touchscreens.

A plot of the Cobb-Douglas function centered at (11,667, 14,583) showing the constraint in Figure 4.17 finishes our analysis.

"Originated in Cobb & Douglas, “A Theory of Production,” Am Econ Review. 18, 1928, pg. 139-165.


FIGURE 4.17: Cobb-Douglas Phone Production Function

Example 4.22. Oil Transfer Company Storage Facilities.

The management of a small oil transfer company desires a minimum-cost policy taking into account the restrictions on available tank storage space. A formula that describes system costs in terms of the quantities x_1, x_2, x_3 of the three products handled has been derived from historical data records.

The tank space constraint is given by

    t_1 x_1 + t_2 x_2 + t_3 x_3 ≤ T,

where t_n is the space required for the nth item (in 1,000s bbl) and T is the available tank space (in 1,000s bbl). The parameter values shown in Table 4.6 are based on the company's data collected over several years.

TABLE 4.6: Oil Storage Data Parameter Estimates

Item n    ($1,000s)    (k bbl/hr)    h_n ($1,000s)    t_n (1,000s bbl)
1         9.6          3             0.47             1.4
2         4.27         5             0.26             2.62
3         6.42         4             0.61             1.71

The storage tanks have only 22 (1,000s bbl) of space available. Find the optimal solution as a minimum-cost policy.

We’ll first solve the unconstrained problem. This solution will provide an upper bound for the constrained solution and help us gain insight into the dynamics of the problem.

Define the parameters, then define the functions.

Build the system of equations {df /dxi = 0} and solve it. (Remember to load Student[MultivariateCalculus] for Gradient.)
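
The cost formula itself is not shown above. The solutions quoted below are reproduced exactly by a classic EOQ-style cost, so this sketch assumes f = Σ_n (c_n d_n / x_n + h_n x_n / 2), where c_n and d_n stand for the first two parameter columns of Table 4.6; treat that form, and the names c_n and d_n, as assumptions.

> restart;
> c := [9.6, 4.27, 6.42]:   d := [3, 5, 4]:           # assumed: Table 4.6, columns 2-3
> h := [0.47, 0.26, 0.61]:  t := [1.4, 2.62, 1.71]:
> T := 22:
> f := add(c[n]*d[n]/x[n] + h[n]*x[n]/2, n = 1 .. 3):  # assumed EOQ-style cost
> g := add(t[n]*x[n], n = 1 .. 3):
> solve({seq(diff(f, x[n]) = 0, n = 1 .. 3)}, {x[1], x[2], x[3]});
      # the positive root is x = (11.07, 12.82, 9.18)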

The only useful solution is where x_1, x_2, x_3 > 0, so we choose the first row.

This solution x* = (11.07, 12.82, 9.18) provides an unconstrained upper bound since it does not satisfy the constraint g(x) = 22.

Now we solve the constrained optimization problem knowing that the solution will be less than the unconstrained value we just found.

Using solve as we did above gives a plethora of complex and negative values that we don't want. We'll use fsolve with a starting value of the unconstrained solution and λ = 1.
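
Under the same assumed cost, a sketch of the constrained solve:

> Lg := f + lambda*(g - T):
> sys := {seq(diff(Lg, v) = 0, v in [x[1], x[2], x[3], lambda])}:
> fsolve(sys, {x[1] = 11.07, x[2] = 12.82, x[3] = 9.18, lambda = 1});
      # {lambda = 0.74, x[1] = 4.761, x[2] = 3.213, x[3] = 4.045}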

Do we have the minimum?

The Hessian matrix H (remember to load Student[VectorCalculus]) is
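
Under the assumed cost, the Hessian is diagonal; as a sketch:

> H := Student[VectorCalculus]:-Hessian(f, [x[1], x[2], x[3]]);
      # diagonal matrix with entries 2*c[n]*d[n]/x[n]^3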

Since the Hessian is positive for all positive x, the matrix is positive definite. We have found a minimum for the constrained problem at x* = (4.761, 3.213, 4.045).

Should we recommend the company add storage space? We know from the unconstrained solution that, if possible, we would add storage space to decrease costs. We have found the shadow price λ = 0.74, which suggests that any small increase Δ in the RHS of the constraint causes the objective function to decrease by approximately 0.74 · Δ. The cost of the extra storage tank would have to be less than the savings incurred by adding the tank.

Exercises

  • 1. Solve the following constrained problems using the Lagrangian approach.
  • (a) Minimize z = x^2 + y^2 subject to x + 2y = 4.
  • (b) Maximize z = (x - 3)^2 + (y - 2)^2 subject to x + 2y = 4.
  • (c) Maximize z = x^2 + 4xy + y^2 subject to x^2 + y^2 = 1.
  • (d) Maximize z = x^2 + 4xy + y^2 subject to x^2 + y^2 = 4 and x + 2y = 4.
  • 2. Find and classify the extrema for f(x, y, z) = x^2 + y^2 + z^2 subject to x^2 + 2y^2 - z^2 = 1.
  • 3. Two manufacturing processes, f_1 and f_2, both use a resource with b units available. Maximize f_1(x_1) + f_2(x_2) subject to x_1 + x_2 = b.

If f_1(x_1) = 50 - (x_1 - 2)^2 and f_2(x_2) = 50 - (x_2 - 2)^2, analyze the manufacturing processes to

  • (a) determine the amounts x_1 and x_2 to use to maximize f_1 + f_2,
  • (b) determine the amount b of the resource to use.
  • 4. Maximize z = 2x^2 - y^2 + xy + 8x + 3y subject to 3x + y = 10 and x^2 + y^2 = 16.

  • 5. Use the Method of Lagrange Multipliers to maximize f(x, y, z) = xyz subject to 2x + 3y + 4z = 36. Determine how much f(x, y, z) would change if one more unit were added to the constraint.

  • [1] For a nice history of Lagrange's technique, see Bussotti, "On the Genesis of the Lagrange Multipliers," Journal of Optimization Theory and Applications, Vol. 117, No. 3, pp. 453-459, June 2003.
 