Constrained Optimization in Engineering Design

Chapter 6: Constrained Optimization, Part I

We now begin our discussion of gradient-based constrained optimization. Recall that we looked at gradient-based unconstrained optimization and learned about the necessary and sufficient conditions for an unconstrained optimum, various search directions, line searches, and quasi-Newton methods. We will build on that foundation as we extend the theory to problems with constraints.

At an unconstrained local optimum, there is no direction in which we can move to improve the objective function. We can state this necessary condition mathematically as ∇f(x*) = 0. At a constrained local optimum, there is no feasible direction in which we can move to improve the objective. That is, there may be directions from the current point that will improve the objective, but these directions point into infeasible space.
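
To make "no feasible improving direction" precise to first order, write the inequality constraints as g_i(x) ≥ 0 (one common convention; sign conventions vary between textbooks). At a constrained local optimum x* there can be no direction d with

    ∇f(x*)^T d < 0                                    (d improves the objective)
    ∇g_i(x*)^T d ≥ 0   for every active constraint    (d stays feasible to first order)

since any such d would let us move a short distance along it, improve the objective, and remain feasible.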

The necessary conditions for a constrained local optimum are called the Karush-Kuhn-Tucker (KKT) conditions, and they play a very important role in constrained optimization theory and algorithm development.
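
For reference (using a common textbook sign convention; others differ), consider minimizing f(x) subject to inequality constraints g_i(x) ≥ 0 and equality constraints h_j(x) = 0. The KKT conditions at a candidate optimum x*, with multipliers λ_i and μ_j, are:

    ∇f(x*) − Σ_i λ_i ∇g_i(x*) − Σ_j μ_j ∇h_j(x*) = 0    (stationarity)
    g_i(x*) ≥ 0,   h_j(x*) = 0                           (feasibility)
    λ_i ≥ 0                                              (nonnegative multipliers)
    λ_i g_i(x*) = 0                                      (complementary slackness)

The λ_i on the inequalities must be nonnegative, while the μ_j on the equalities are unrestricted in sign. Complementary slackness says a multiplier can be nonzero only when its constraint is active.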


Chapter 7: Constrained Optimization, Part II

In the previous chapter we examined the necessary and sufficient conditions for a constrained optimum. We did not, however, discuss any algorithms for constrained optimization. The purpose of this chapter is to review three popular techniques for constrained optimization:

  1. Sequential Quadratic Programming (SQP)
  2. Generalized Reduced Gradient (GRG)
  3. Interior Point Methods (IP)

SQP works by directly solving the KKT equations, i.e., by searching for a point at which they are satisfied. SQP is a very efficient algorithm in terms of the number of function calls needed to reach the optimum. It converges by simultaneously improving the objective and reducing the infeasibility of the constraints. Only the final, optimal design is guaranteed to be feasible; intermediate designs may be infeasible.
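
As a concrete sketch of the idea (the notation here is assumed for illustration): at each iterate x_k, SQP solves a quadratic programming (QP) subproblem that combines a quadratic model of the Lagrangian with linearized constraints,

    minimize over p:   ∇f(x_k)^T p + (1/2) p^T B_k p
    subject to:        g_i(x_k) + ∇g_i(x_k)^T p ≥ 0

where B_k is an approximation to the Hessian of the Lagrangian, often maintained with BFGS-style updates. The QP solution p becomes the search direction, and the QP multipliers estimate the KKT multipliers. Because the constraints are only satisfied in linearized form at each step, intermediate iterates need not be feasible.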

The GRG algorithm works by computing search directions which improve the objective and satisfy the constraints, and then conducting line searches in a very similar fashion to the algorithms we studied previously. GRG requires more function evaluations than SQP, but it has the desirable property that it stays feasible once a feasible point is found. If the optimization process is halted before the optimum is reached, the designer is guaranteed to have in hand a better design than the starting design. GRG also appears to be more robust (able to solve a wider variety of problems) than SQP, so for engineering problems it is often the algorithm tried first.
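
A sketch of the central computation (again with notation assumed for illustration): inequalities are first converted to equalities h(x) = 0 by adding slack variables. The variables are then partitioned into independent variables z and dependent variables y, and y is adjusted to keep h(y, z) = 0 as z changes. Differentiating the constraints gives the sensitivity of y to z and, from it, the reduced gradient:

    dy/dz = −(∂h/∂y)^(-1) (∂h/∂z)        (from requiring dh = 0)
    df/dz = ∂f/∂z + (dy/dz)^T (∂f/∂y)    (the reduced gradient)

GRG searches along directions built from df/dz and re-solves the constraints for y at each trial point; this is why its iterates stay feasible but cost extra function evaluations.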

Interior point methods are best suited for very large-scale problems with many degrees of freedom (design variables). Interior point methods are also the simplest of the three to implement in code. We will use interior point methods to investigate the algorithmic details of constrained optimization; the basic barrier idea is sketched below.

  1. A. Wächter and L. T. Biegler, "On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming," Mathematical Programming, 106(1), pp. 25-57, 2006.
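
The classical way to see the idea, in its basic log-barrier form (modern primal-dual codes such as the one in the reference above are considerably more sophisticated), is to fold inequality constraints g_i(x) ≥ 0 into the objective:

    minimize over x:   f(x) − μ Σ_i ln g_i(x),    with barrier parameter μ > 0

Because ln g_i blows up as a constraint approaches zero, minimizers of the barrier objective stay strictly inside the feasible region; solving a sequence of these problems while driving μ toward zero pushes the iterates to the constrained optimum. The example below applies exactly this scheme.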

Interior Point Example

A refinery must produce 100 gallons of gasoline and 160 gallons of diesel to meet customer demand. The refinery would like to minimize the cost of crude, and two crude options exist. The less expensive crude costs $80 per barrel, while the more expensive crude costs $95 per barrel. Each barrel of the less expensive crude produces 10 gallons of gasoline and 20 gallons of diesel. Each barrel of the more expensive crude produces 15 gallons of gasoline and 15 gallons of diesel. Find the number of barrels of each crude that will minimize the refinery's cost while satisfying the customer demands.
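
Letting x1 be barrels of the less expensive crude and x2 barrels of the more expensive crude, the problem is the linear program

    minimize    80 x1 + 95 x2             (crude cost, $)
    subject to  10 x1 + 15 x2 ≥ 100       (gasoline demand, gal)
                20 x1 + 15 x2 ≥ 160       (diesel demand, gal)
                x1 ≥ 0,  x2 ≥ 0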
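
Below is a minimal log-barrier sketch in Python (assuming NumPy is available; the function name barrier_newton and the μ schedule are choices made for this illustration, not a prescribed method). It minimizes the barrier objective with damped Newton steps and shrinks μ geometrically:

    import numpy as np

    # Refinery LP in "A x >= b" form (variable bounds included as rows):
    #   minimize c^T x   subject to   A x >= b
    c = np.array([80.0, 95.0])              # cost per barrel of each crude
    A = np.array([[10.0, 15.0],             # gallons of gasoline per barrel
                  [20.0, 15.0],             # gallons of diesel per barrel
                  [ 1.0,  0.0],             # x1 >= 0
                  [ 0.0,  1.0]])            # x2 >= 0
    b = np.array([100.0, 160.0, 0.0, 0.0])  # demands and variable bounds

    def barrier_newton(x, mu, tol=1e-8, max_iter=100):
        """Minimize c.x - mu*sum(log(A x - b)) with damped Newton steps."""
        for _ in range(max_iter):
            s = A @ x - b                   # constraint slacks, must stay > 0
            grad = c - mu * (A.T @ (1.0 / s))
            if np.linalg.norm(grad) < tol:
                break
            hess = mu * (A.T * (1.0 / s**2)) @ A   # A^T diag(1/s^2) A
            step = np.linalg.solve(hess, -grad)
            # Backtrack until the step keeps every slack strictly positive
            # and decreases the barrier objective.
            phi = c @ x - mu * np.sum(np.log(s))
            t = 1.0
            while t > 1e-12:
                x_new = x + t * step
                s_new = A @ x_new - b
                if np.all(s_new > 0) and \
                   c @ x_new - mu * np.sum(np.log(s_new)) < phi:
                    x = x_new
                    break
                t *= 0.5
            else:
                break                       # no further progress at this mu
        return x

    x = np.array([20.0, 20.0])              # strictly feasible starting point
    mu = 100.0                              # initial barrier parameter (illustrative)
    while mu > 1e-9:
        x = barrier_newton(x, mu)
        mu *= 0.1                           # drive mu toward zero

    print(f"cheap crude:     {x[0]:.3f} barrels")    # approx 6.000
    print(f"expensive crude: {x[1]:.3f} barrels")    # approx 2.667
    print(f"total cost:      ${c @ x:.2f}")          # approx $733.33

Running the sketch gives approximately x1 = 6 barrels and x2 = 2.667 barrels, for a cost of about $733.33. Both demand constraints are active at the solution, which you can confirm by solving the two demand equations as equalities. Note how the iterates approach this point from strictly inside the feasible region, which is the behavior that gives interior point methods their name.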