Numerical Search Techniques with Maple

When calculus methods aren’t feasible, we turn to numerical approximation techniques. The basic approach of most numerical methods in optimization is to produce a sequence of improved approximations to the optimal solution according to a specific scheme. We will examine both elimination methods (Golden Section and Fibonacci) and an interpolation method (Newton’s Method).

In numerical methods of optimization, a procedure is used to obtain values of the objective function at various combinations of the decision variables, and conclusions are then drawn regarding the optimal solution. The elimination methods can be used to find an optimal solution even for discontinuous functions. One important assumption must hold in order to use these elimination methods: the function must be unimodal. A unimodal function ("uni-modal," from "one mode") is one that has only one maximum (peak) or one minimum (valley), but not both. We state this mathematically as follows.

Definition. Unimodal Function.

A function f is unimodal on the interval [a, b] iff

  • 1. f has a maximum (minimum) at x* in (a, b), and
  • 2. f is strictly increasing (decreasing) on [a, x*], i.e., x < x* implies f(x) < f(x*) (f(x) > f(x*) for a minimum), and
  • 3. f is strictly decreasing (increasing) on [x*, b], i.e., x > x* implies f(x) < f(x*) (f(x) > f(x*) for a minimum).
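For instance, f(x) = -|x - 1| is unimodal on [0, 3]: it increases strictly on [0, 1] to its single maximum at x* = 1 and decreases strictly on [1, 3], even though it is not differentiable at the corner x = 1.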

Examples of unimodal and non-unimodal functions are shown in Figure 3.3. Unimodal functions may or may not be differentiable or even continuous.


FIGURE 3.3: Examples of Unimodal and Non-Unimodal Functions

Thus, a unimodal function can be nondifferentiable (have corners) or even discontinuous. If a function is known to be unimodal in a given closed interval, then the optimum (maximum or minimum) can be found.

In this section, we will learn many techniques for numerical searches. For the “elimination methods,” we accept an interval answer. If a single value is required, we usually evaluate the function at each end point of the final interval and the midpoint of the final interval, then take the optimum of those three values to approximate our single value.

Golden Section Search

A technique that searches for a value by determining narrower and narrower intervals containing the target value is called a bracketing method. A Golden Section search is a type of bracketing method that uses the golden ratio[1]. The technique was developed in 1953 by Jack Kiefer.

To better understand the golden ratio, consider a line segment that is divided into two parts as shown in Figure 3.4.


FIGURE 3.4: The Golden Ratio

If the ratio of the length of the whole line to the length of the larger part is the same as the ratio of the length of the larger part to the length of the smaller part, we say the line is divided in the golden ratio. Symbolically, taking the length of the original segment as 1 and the length of the larger part as r, this can be written as 1/r = r/(1 - r).

Algebraic manipulation of the relationship above yields r² + r - 1 = 0. The quadratic’s positive root r = (√5 - 1)/2 satisfies the ratio requirements for the line segment above. The golden ratio’s numerical value is φ ≈ 0.6180339887. (The traditional symbol for the golden ratio is φ.) This well-known ratio is the limiting value of the ratio of consecutive terms of the Fibonacci sequence, which we will see in the next method. Another bracketing method, the Fibonacci search, is often used in lieu of the Golden Section method.

In order to use the Golden Section search procedure, we must ensure that certain assumptions hold. These key assumptions are:

  • 1. The function must be unimodal over a specified interval,
  • 2. the function must have an optimal solution over a known interval of uncertainty, and
  • 3. we will accept an interval solution since the exact optimal value cannot be found by bracketing, only approximated.

Only an interval solution, that is, an interval containing the exact optimal value, known as the final interval of uncertainty, can be found using this technique. The length of the final interval is controlled by the user and can be made arbitrarily small by selecting a tolerance value. The unimodality assumption guarantees that this final interval, whose length is less than the chosen tolerance, actually contains the optimum.

Finding the Maximum of a Function over an Interval with a Golden Section Search

Suppose we know that f is unimodal on an interval I. Break I into two subintervals I1 and I2; then f’s maximum must be in one or the other. Check a test point in each subinterval: since f is unimodal and has exactly one optimum, the subinterval with the higher test value contains the maximum. We can reduce the number of times we have to evaluate the function by having the subintervals overlap and using their endpoints as test points. We can further reduce the number of function evaluations by cleverly choosing the test points so that the next iteration reuses one of the current test points. These ideas lead to the Golden Section search developed by Prof. Jack Kiefer in 1953.[2]

A Golden Section search for a maximum is iterative, requiring evaluations of f(x) at test points x1 and x2, where

x1 = a + r(b - a) and x2 = b - r(b - a).

The test points lie inside the original interval I0 = [a, b] and are used to determine a new, smaller search interval. Choosing r carefully lets us reuse the function evaluations in the next iteration: either x1 or x2 becomes an endpoint of the new interval, and the other test point serves as a test point in the new, reduced interval. For a Golden Section search, r is chosen to be the golden ratio 0.618. If f(x1) > f(x2), the new interval is [x2, b], and if f(x1) < f(x2), the new interval is [a, x1]. Continue to iterate in this manner until the final interval length is less than the chosen tolerance. The final interval contains, or brackets, the optimum solution, and its length determines the accuracy of the approximate optimum. The maximum number of iterations N required to achieve a final interval of length less than the tolerance t is

N = ⌈ln(t/(b - a)) / ln(0.618)⌉.
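For instance, to reduce the interval [0, 3] used in Example 3.5 below to a final interval of length at most t = 0.1 requires N = ⌈ln(0.1/3)/ln(0.618)⌉ = ⌈7.07⌉ = 8 iterations.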

When we are required to provide a point solution, instead of a small interval containing the solution, evaluate the function at the end points and the midpoint of the final interval. Select the value of x that yields the largest f(x).

How can the procedure be changed to search for a minimum point?

Figure 3.5 shows the progression of subintervals selected by a Golden Section search.

Golden Section Search Interval Progression

FIGURE 3.5: Golden Section Search Interval Progression

The Golden Section search is written in algorithmic form below.

Golden Section Search Algorithm

Find the maximum of the unimodal function f on the interval [a, b].

INPUTS: Function f; endpoints a, b; tolerance t

OUTPUTS: Optimal point x* and maximum value f(x*)

Step 1. Initialize: Set r = 0.618, a_0 = a, b_0 = b, counter k = 1, and iteration limit N = ⌈ln(t/(b - a))/ln(0.618)⌉.

Step 2. Calculate the test points x1 = a_{k-1} + r(b_{k-1} - a_{k-1}) and x2 = b_{k-1} - r(b_{k-1} - a_{k-1}).

Step 3. Compute f(x1) and f(x2).

Step 4. If f(x1) > f(x2), then I_k = [x2, b]; that is, set a = x2. If f(x1) < f(x2), then I_k = [a, x1]; that is, set b = x1.

Step 5. If k = N or b_k - a_k < t, then STOP.

Return estimates x* = midpoint(I_k) and f(x*).

Otherwise, set k = k + 1 and return to Step 2.
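The text’s PSMv2 procedure GoldenSectionSearch, used in the examples below, packages these steps. As a rough guide to how such a procedure might be organized, here is a minimal Maple sketch of the algorithm above; the procedure name, argument order, and returned list are our own illustrative choices (for simplicity it reports the midpoint estimate rather than the best of the three candidate points described earlier).

    GoldenSectionMaxSketch := proc(f, a, b, tol)
      local r, lo, hi, x1, x2, f1, f2;
      r := evalf((sqrt(5) - 1)/2);      # golden ratio, about 0.618
      lo := evalf(a);  hi := evalf(b);
      x1 := lo + r*(hi - lo);           # upper test point
      x2 := hi - r*(hi - lo);           # lower test point
      f1 := evalf(f(x1));  f2 := evalf(f(x2));
      while hi - lo > tol do
        if f1 > f2 then                 # maximum lies in [x2, hi]
          lo := x2;
          x2 := x1;  f2 := f1;          # reuse the old upper test point
          x1 := lo + r*(hi - lo);
          f1 := evalf(f(x1));
        else                            # maximum lies in [lo, x1]
          hi := x1;
          x1 := x2;  f1 := f2;          # reuse the old lower test point
          x2 := hi - r*(hi - lo);
          f2 := evalf(f(x2));
        end if;
      end do;
      # return the final interval, its midpoint, and f at the midpoint
      return [lo, hi, (lo + hi)/2, evalf(f((lo + hi)/2))];
    end proc:

For instance, with the hypothetical unimodal test function f(x) = -(x - 2)² + 1, the call GoldenSectionMaxSketch(x -> -(x - 2)^2 + 1, 0, 5, 0.05); returns a bracketing interval of length at most 0.05 around x* = 2, along with its midpoint and the function value there.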

Although a Golden Section search can be used with any unimodal function to find the maximum (or minimum) over a specified interval, its main advantage comes when normal calculus procedures fail. Consider the following example.

Example 3.5. Maximizing a Nondifferentiable Function.

Maximize

over the interval [0,3].

Absolute value functions are not differentiable because they have corner points; graph f to see this. Thus, taking the first derivative and setting it equal to zero is not an option. We’ll use a Golden Section search in Maple to solve this problem. Our Maple procedure GoldenSectionSearch is in the text’s PSMv2 package. Use with(PSMv2) to make the procedure available.

The midpoint of the final interval is 0.907318 and f(midpoint) = -2.629270, so we estimate that the maximum of the function is approximately -2.629, attained at x ≈ 0.9 (to within 0.1).

Example 3.6. Maximizing a Transcendental Function.

Maximize the function

over the interval [0, 20]. (Remember that exp(-x) = e^(-x).)

The Golden Section search gives the maximum of the function as 1.204, attained at x = 2.513.

Fibonacci Search

A Fibonacci search uses ratios of consecutive Fibonacci numbers (F_1 = F_2 = 1 and F_n = F_{n-1} + F_{n-2}) to generate a sequence of test points, based on the ratio F_{n-1}/F_n.

Since lim_{n→∞} F_{n-1}/F_n equals the golden ratio, a Fibonacci search is a Golden Section search "in the limit." However, a Fibonacci search converges faster than a Golden Section search.

In order to use the Fibonacci search procedure, we must ensure that the key assumptions hold:

  • 1. The function must be unimodal over a specified interval,
  • 2. The function must have an optimal solution over a known interval of uncertainty, and
  • 3. We will accept an interval solution since the exact optimal value cannot be found by bracketing, only approximated.

These are the same assumptions as required for a Golden Section search—the two searches are closely related bracketing methods.

As before, only an interval solution (the final interval of uncertainty) can be found using this or any bracketing technique. The length of the final interval, the tolerance, is controlled by the user and can be made arbitrarily small, limited only by the precision of the computations and the computing time available. The final interval is guaranteed to be shorter than this tolerance within a specific number of iterations.

Replace the golden ratio r in the test point generating formulas of the Golden Section search with the ratio of consecutive Fibonacci numbers to obtain

x1 = b - (F_{n-1}/F_n)(b - a) and x2 = a + (F_{n-1}/F_n)(b - a).

These are the Fibonacci search test point generators. (Here x1 denotes the lower test point and x2 the upper one.) The test points lie inside the original interval [a, b], since a < x1 < x2 < b, and they determine the new search interval in the same fashion as in a Golden Section search.

Equality of the test points means the calculation’s precision has been exceeded or a calculation error has occurred.

The iterations continue until the final interval length is less than our imposed tolerance. The final interval must contain the optimum solution, and its size determines the accuracy of the approximate optimum solution, and vice versa. The number of iterations required to achieve this accepted interval length is determined by the smallest Fibonacci number F_n satisfying the inequality

(b - a)/t < F_n.
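For instance, to reduce the interval [0, 3] of Example 3.7 below to a final interval of length at most t = 0.1, we need F_n > 3/0.1 = 30; with the convention F_1 = F_2 = 1, the smallest such Fibonacci number is F_9 = 34, so n = 9.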

When we require a point solution instead of an interval solution, evaluate the function f(x) at the end points and the midpoint of the final interval. For maximization problems, select the endpoint or midpoint that yields the largest f(x) value.

The Fibonacci search algorithm follows.

Fibonacci Search Algorithm

Find the maximum of the unimodal function f on the interval [a, b].

INPUTS: Function f; endpoints a, b; tolerance t

OUTPUTS: Optimal point x* and maximum value f(x*)

Step 1. Initialize: Set a_0 = a, b_0 = b, counter k = 1.

Step 2. Calculate the number of iterations n such that F_n is the first Fibonacci number satisfying (b - a)/t < F_n.

Step 3. Calculate the test points x1 = b - (F_{n-1}/F_n)(b - a) and x2 = a + (F_{n-1}/F_n)(b - a).

Step 4. Compute f(x1) and f(x2).

Step 5. If f(x1) < f(x2), then the new interval is [x1, b]:

Set x1 = x2.

Compute the new x2 (for the new interval).

If f(x1) > f(x2), then the new interval is [a, x2]:

Set x2 = x1.

Compute the new x1 (for the new interval).

Step 6. If n = 2 or b_n - a_n < t, then STOP.

Return estimates x* = midpoint(I_n) and f(x*). Otherwise, set n = n - 1 and return to Step 4.
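Here is a minimal Maple sketch of a Fibonacci-style search. It is a simplified illustration, not the text’s FibonacciSearch procedure: for clarity it recomputes both function values at each step instead of reusing one, and it stops the ratios at F[3]/F[4] = 2/3 (requiring F[n] ≥ 2(b - a)/tol) so that the degenerate ratio 1/2 never arises; the final interval still has length at most tol.

    FibonacciSearchMaxSketch := proc(f, a, b, tol)
      local F, n, k, lo, hi, r, x1, x2;
      # Fibonacci numbers in a table: F[1] = F[2] = 1, F[n] = F[n-1] + F[n-2].
      F := table([1 = 1, 2 = 1]);
      n := 2;
      # Require F[n] >= 2*(b - a)/tol so that the final interval length,
      # (b - a)*F[3]/F[n] = 2*(b - a)/F[n], is at most tol.
      while F[n] < 2*(b - a)/tol do
        n := n + 1;
        F[n] := F[n-1] + F[n-2];
      end do;
      lo := evalf(a);  hi := evalf(b);
      for k from n by -1 to 4 do
        r := evalf(F[k-1]/F[k]);        # ratio of consecutive Fibonacci numbers
        x1 := hi - r*(hi - lo);         # lower test point
        x2 := lo + r*(hi - lo);         # upper test point
        if f(x1) < f(x2) then           # maximum lies in [x1, hi]
          lo := x1;
        else                            # maximum lies in [lo, x2]
          hi := x2;
        end if;
      end do;
      # return the final interval, its midpoint, and f at the midpoint
      return [lo, hi, (lo + hi)/2, evalf(f((lo + hi)/2))];
    end proc:

A call such as FibonacciSearchMaxSketch(x -> -(x - 2)^2 + 1, 0, 5, 0.05); brackets the maximizer x* = 2 of this hypothetical test function in the same way as the Golden Section sketch.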

Although a Fibonacci search can be used with any unimodal function to find the maximum (or minimum) over a specified interval, its main advantage comes when normal calculus procedures fail, such as when f is not differentiable or continuous. We redo the first example from the Golden Section search.

Example 3.7. Maximizing a Nondifferentiable Function (reprise).

Maximize

over the interval [0,3].

We’ll use the FibonacciSearch program from the text’s PSMv2 library package.

How does this answer compare with the optimum found by the Golden Section search?

Now redo the second example from earlier.

Example 3.8. Maximizing a Transcendental Function (reprise).

Maximize the function

over the interval [0,20].

The Fibonacci search gives the maximum as f(2.5) = 1.204. How does this answer compare with the optimum found by the Golden Section search?

Interpolation with Derivatives: Newton’s Method

Finding the Critical Points (Roots of the Derivative) of a Function

Newton’s Method can be adapted to solve nonlinear optimization problems. For a twice-differentiable function of one variable, the adaptation is straightforward: Newton’s Method is applied to the derivative, rather than to the original function we wish to optimize, since the function’s critical points occur at roots of the derivative. Replace f with f' in the iterating formula for Newton’s Method to obtain

x_{k+1} = x_k - f'(x_k)/f''(x_k).

Given a specified tolerance ε > 0, the iterations of Newton’s Method can be terminated when |x_{k+1} - x_k| < ε or when |f'(x_k)| < ε.

In order to use the modified Newton’s Method to find critical points, the function’s first and second derivatives must exist throughout the neighborhood of interest. Also note that whenever the second derivative at x_k is zero, the next point x_{k+1} cannot be computed. (Explain why!) The method only approximates the location of critical points; it does not know whether it is finding a maximum or a minimum of the original function. Use the sign of the second derivative to determine whether the critical point yields a maximum, a minimum, or neither (a likely inflection point).
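As a concrete illustration of the modified iteration, here is a minimal Maple sketch. The procedure name, the hard cap of 50 iterations, and the near-zero test on the second derivative are our own illustrative assumptions, not part of the text’s method; the function f is assumed to be given as a Maple operator (arrow expression).

    NewtonCriticalSketch := proc(f, x0, tol)
      local x, xnew, k, fp, fpp;
      fp  := D(f);                       # first derivative (f given as an operator)
      fpp := D(fp);                      # second derivative
      x := evalf(x0);
      for k to 50 do                     # cap the number of iterations
        if abs(fpp(x)) < 1e-12 then
          error "second derivative is (numerically) zero at x = %1", x;
        end if;
        xnew := evalf(x - fp(x)/fpp(x)); # Newton step applied to f', not f
        if abs(xnew - x) < tol or abs(fp(xnew)) < tol then
          return xnew;                   # stopping rule from the text
        end if;
        x := xnew;
      end do;
      return x;                          # tolerance not met within 50 steps
    end proc: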

Example 3.9. A Simple Application.

Maximize the polynomial p(x) = 5x - x² over the interval [0, 7].

This simple problem is easy to solve using elementary calculus.

  • 1. Differentiate: p'(x) = 5 - 2x.
  • 2. The only critical point, the root of p', is x* = 5/2.
  • 3. Differentiate again: p''(x) = -2 < 0.
  • 4. The Second Derivative Test indicates that p has a maximum of 25/4 at x* = 5/2.

Let’s use Maple to implement the modified Newton’s method for finding the maximum of p.

Start with x0 = 1.0.
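Using the illustrative sketch given earlier (our own procedure, not the text’s worksheet commands), the call

    NewtonCriticalSketch(x -> 5*x - x^2, 1.0, 0.01);

returns 2.5 after a single step, since p'(2.5) = 0 exactly.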

Starting at any other value also yields x* = 2.5. Since this simple quadratic function has a linear derivative, the linear approximation of the derivative is exact regardless of the starting point, and the answer appears by the second iteration. See Figure 3.6. Algebraically simplifying the method’s iterating function for p confirms this: Newton(x) = x - (5 - 2x)/(-2) = 5/2 = 2.5.


FIGURE 3.6: Plot of p(x) = 5x - x² with its Linear Approximation at x = 4

The slope of the linear approximation of the function at the point is precisely the slope of the function at that point, so the linear approximation is tangent to the function at the point as shown in the figure.

Example 3.10. A Third Degree Polynomial.

Find a maximal point, if possible, for p(x) = -2x³ + 10x + 5 on the interval [0, 3] to a tolerance of ε = 0.01.

We’ll use the same process in Maple, this time with the starting value x0 = 1.0.

Start again with x0 = 1.0.

The desired tolerance has not been achieved—we’re not finished.

We have a result! Checking the second derivative shows that p(1.29) = 13.6 is our maximum (to within 0.01).
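Using the illustrative NewtonCriticalSketch procedure given earlier (not the text’s worksheet), the call NewtonCriticalSketch(x -> -2*x^3 + 10*x + 5, 1.0, 0.01); returns approximately 1.291, and p(1.291) ≈ 13.6, in agreement with the result above.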

Maple’s Student[Calculus1] package contains NewtonsMethod for finding the roots of a function. To use NewtonsMethod to find critical points, all we have to do is replace the function with its first derivative. We illustrate below.
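A minimal illustration along these lines, using the cubic from Example 3.10 (the exact commands in the text’s worksheet may differ):

    with(Student[Calculus1]):
    p := x -> -2*x^3 + 10*x + 5:
    # NewtonsMethod finds a root of its first argument; apply it to p'(x),
    # whose root is the critical point of p.
    NewtonsMethod(D(p)(x), x = 1.0, output = value);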

Try NewtonsMethod with output = animation.

Exercises

Compare the results of using the Golden Section, Fibonacci, and Newton’s methods in the following exercises.

  • 1. Maximize f(x) = -x² + x on the closed interval [-2, 1], using a tolerance for the final interval of 0.6. Hint: Start Newton’s method at x = -0.5.
  • 2. Maximize f(x) = -x² - 3x on the closed interval [-3, 1], using a tolerance for the final interval of 0.6. Hint: Start Newton’s method at x = 1.
  • 3. Minimize f(x) = x² + 2x on the closed interval [-3, 1], using a tolerance for the final interval of 0.5. Hint: Start Newton’s method at x = -3.
  • 4. Minimize f(x) = -x + e^x over the interval [-1, 3] using a tolerance of 0.1. Hint: Start Newton’s method at x = -1.
  • 5. List at least two assumptions required by both the Golden Section and Fibonacci search methods.
  • 6. Consider minimizing f(x) = -x + e^x over the interval [-1, 3]. Assume the search method yielded a final interval of [-0.80, 0.25] to within the tolerance ε. Report a single best value of x to minimize f(x) over the interval. Explain your reasoning.
  • 7. Modify the Golden Section algorithm (pg. 109) to find a minimum value of a unimodal function. Write a Maple procedure to implement your change.
  • 8. Modify the Fibonacci search algorithm (pg. 113) to find a minimum value of a unimodal function. Write a Maple procedure to implement your change.

Projects

Project 3.1. Write a program in Maple that uses a secant method search. Apply your program to Exercises 1 through 4 above.

Project 3.2. Write a program in Maple that uses a Regula-Falsi search method. Apply your program to Exercises 1 through 4 above.

Project 3.3. Carefully analyze the rate of convergence of the different search methods presented.

References and Further Reading

[BSS2013] Mokhtar S. Bazaraa, Hanif D. Sherali, and Chitharanjan M. Shetty, Nonlinear Programming: Theory and Algorithms, Wiley, 2013.

[B2016] William C. Bauldry, Introduction to Computation, 2016.

[F1992] William P. Fox, “Teaching nonlinear programming with Minitab,” COED Journal, Vol. II (1992), no. 1, 80-84.

[F1993] William P. Fox, “Using microcomputers in undergraduate nonlinear optimization,” Collegiate Microcomputer, Vol. XI (1993), no. 3, 214-218.

[FA2000] William P. Fox and Jeffrey Appleget, “Some fun with Newton’s method,” COED Journal, Vol. X (2000), no. 4, 38-43.

[FGMW1987] William P. Fox, Frank Giordano, S. Maddox, and Maurice D. Weir, Mathematical Modeling with Minitab, Brooks/Cole, 1987.

[FR2000] William P. Fox and William Richardson, “Mathematical modeling with least squares using Maple,” Maple Application Center, Nonlinear Mathematics, 2000.

[FGW1997] William P. Fox, Frank Giordano, and Maurice Weir, A First Course in Mathematical Modeling, 2nd Ed., Brooks/Cole, 1997.

[FW2001] William P. Fox and Margie Witherspoon, “Single variable optimization when calculus fails: Golden section search methods in nonlinear optimization using Maple,” COED Journal, Vol. XI (2001), no. 2, 50-56.

[GFH2014] Frank Giordano, William P. Fox, and Steven Horton, A First Course in Mathematical Modeling, 5th ed., Nelson Education, 2014.

[M2013] Mark M. Meerschaert, Mathematical Modeling, Academic Press, 2013.

[PRS1976] D.T. Phillips, A. Ravindran, and J. Solberg, Operations Research, Wiley, 1976.

[PFTV2007] William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling, et al., Numerical Recipes, Cambridge U. Press, 2007.

[R1979] S.S. Rao, Optimization: Theory and Applications, Wiley Eastern Ltd., 1979.

[WVG2003] W.L. Winston, M. Venkataramanan, and J.B. Goldberg, Introduction to Mathematical Programming, Duxbury, 2003.


  • [1] See http://mathworld.wolfram.com/GoldenRatio.html.
  • [2] See J. Kiefer, “Sequential minimax search for a maximum,” Proc Am Math Soc, 4 (3), 1953, 502-506.
 