5.6. Advanced Topics

Before closing out this chapter, we include two advanced topics that are relevant to design optimization. The first topic is reliability-based design optimization (RBDO), which incorporates variations in physical parameters and design variables into design optimization, leading to designs with a lower probability of failure. Second, we introduce a case of design optimization that takes product cost, including manufacturing cost, as the objective function. This case represents a more realistic design optimization problem that leads to less expensive designs.

5.6.1. Reliability-Based Design Optimization

In Chapter 5 of Chang (2013a), we discussed reliability analysis. We introduced the concept of failure modes associated with certain critical product performance. We incorporated the variability (or uncertainty) of physical parameters or a manufacturing process that affects the performance (hence, failure modes) of the product to estimate a failure probability. We used a beam example for illustration, in which the failure probability of a stress failure mode predicts the percentage of the incidents when the maximum stress of the beam exceeds its material yield strength. The result offered by the reliability analysis is far more precise and effective than that of the safety factor approach.
In this subsection, we move one step further and discuss design optimization that incorporates failure modes (and failure probabilities) into the optimization problem formulation. We first briefly review the basics of reliability analysis, in particular the most probable point (MPP) search for failure probability estimates. We then formulate the mathematical equations for a standard RBDO problem and its solution technique, and we present a sample case, a tracked-vehicle roadarm, to illustrate RBDO. Readers are encouraged to review Chapter 5 of Chang (2013a) to refresh the concepts and numerical computations involved in reliability analysis before reading further.

5.6.1.1. Failure probability

The probability of failure Pf of a product with a failure mode g(X) ≤ 0 is defined as

P_f = P(g(\mathbf{X}) \le 0) = \int_{g(\mathbf{x}) \le 0} f_{\mathbf{X}}(\mathbf{x}) \, d\mathbf{x}    (5.35)

where fX(x) is the joint probability density function (PDF), X is a vector of random variables, and the function g(x) = 0 is called the limit state function. Note that realization of X = [X1,X2,…,Xn]T is denoted as x = [x1,x2,…,xn]T, which is a point in the n-dimensional space.
The probability integration in Eq. 5.35 is visualized for a two-dimensional case in Figure 5.30a, which shows the joint PDF fX(x) and its contour projected onto the x1-x2 plane. All the points on the projected contours have the same values of fX(x) or the same probability density. The limit state function g(x) = 0 is also shown. The failure probability Pf is the volume underneath the surface of the joint PDF fX(x) in the failure region g(x) ≤ 0. To show the integration more clearly, the contours of the joint PDF fX(x) and the limit state function g(x) = 0 are plotted on the x1-x2 plane, as shown in Figure 5.30b.
The direct evaluation of the probability integration in Eq. 5.35 is very difficult, if not impossible, and a number of approximate methods have been developed. Monte Carlo simulation is simple and easy to implement; however, this brute-force approach requires tens of thousands of analyses or more, which is impractical for engineering problems in which each analysis requires significant computational effort. One of the widely employed methods that alleviates the computational burden to some extent, while still offering a sufficiently accurate estimate of the failure probability, is the first-order reliability method (FORM). This method provides an acceptable estimate of the failure probability Pf defined in Eq. 5.35 through two important steps. The first step simplifies the joint probability density function fX(x) by transforming the given joint density function, which may be multidimensional, into a standard normal distribution of independent random variables of the same dimension. The second step approximates the limit state function g(x) = 0 by a Taylor series expansion, keeping only the terms up to first order. Note that if both the linear and quadratic terms are included, the method is called the second-order reliability method (SORM).
FIGURE 5.30 Probability integration illustrated using a two-dimensional example. (a) Isometric view. (b) Projected view on the x1-x2 plane.
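To make the brute-force estimate concrete, the short sketch below approximates the probability integration of Eq. 5.35 by Monte Carlo sampling. The limit state g(X) = X1 − X2 (a capacity minus a demand) and all distribution parameters are hypothetical, chosen so that the estimate can be checked against the exact result; they are not taken from the text.

```python
# A minimal Monte Carlo sketch of Eq. 5.35 (hypothetical limit state and data).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000                                 # brute force: many samples are needed
x1 = rng.normal(30.0, 3.0, n)                 # capacity X1 ~ N(30, 3)
x2 = rng.normal(20.0, 4.0, n)                 # demand   X2 ~ N(20, 4)
g = x1 - x2                                   # limit state; failure when g <= 0
pf_mc = np.mean(g <= 0.0)                     # Monte Carlo estimate of Pf

# For this linear g with normal inputs the exact value is Phi(-(30-20)/5) = Phi(-2)
pf_exact = norm.cdf(-2.0)
print(pf_mc, pf_exact)                        # both close to 0.0228
```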
The space that contains the given set of random variables X = [X1,X2,…,Xn]T is called the X-space. These random variables are transformed to a standard normal space (U-space), where the transformed random variables U = [U1,U2,…,Un]T follow the standard normal distribution (i.e., with mean value μ = 0 and standard deviation σ = 1). Such a transformation is carried out based on the condition that the cumulative density functions (CDFs) of the random variables remain the same before and after transformation; that is,

F_{X_i}(x_i) = \Phi(u_i)    (5.36)

Here, Φ(·) is the CDF of the standard normal distribution. The transformation can be written as

u_i = \Phi^{-1}\left(F_{X_i}(x_i)\right)    (5.37a)

Hence, the transformed random variable Ui can be written as

U_i = \Phi^{-1}\left(F_{X_i}(X_i)\right)    (5.37b)

If the random variables Xi are independent and normally distributed, that is, FXi(xi) = Φ((xi − μi)/σi), the transformation, as illustrated in Figure 5.31, can be obtained as

u_i = \Phi^{-1}\left(\Phi\left(\frac{x_i - \mu_i}{\sigma_i}\right)\right) = \frac{x_i - \mu_i}{\sigma_i}    (5.38)

Hence,

x_i = \mu_i + \sigma_i u_i    (5.39)

Note that, as shown in Figure 5.31c, the projected contours of the transformed PDF on the u1-u2 plane are circles centered at the origin.
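The following sketch evaluates the transformation of Eqs. 5.36-5.39 numerically, assuming one normal and one lognormal random variable; the distribution parameters and the points x at which the transformation is evaluated are purely illustrative.

```python
# A minimal sketch of the X-to-U transformation (Eqs. 5.36-5.39); values are illustrative.
import numpy as np
from scipy.stats import norm, lognorm

# Normal variable: Eq. 5.38 reduces to u = (x - mu)/sigma
mu, sigma = 30.0e6, 0.75e6
x = 31.0e6
u = norm.ppf(norm.cdf(x, mu, sigma))              # Eq. 5.37a
assert np.isclose(u, (x - mu) / sigma)            # agrees with Eq. 5.38

# Non-normal (lognormal) variable: Eq. 5.37a with its own CDF
mean, std = 1.77e5, 0.885e4
s = np.sqrt(np.log(1.0 + (std / mean) ** 2))      # std deviation of the underlying normal
dist = lognorm(s=s, scale=mean * np.exp(-0.5 * s ** 2))
x_ln = 1.9e5
u_ln = norm.ppf(dist.cdf(x_ln))                   # u = Phi^-1(F_X(x))
x_back = dist.ppf(norm.cdf(u_ln))                 # mapping back recovers x_ln
print(u, u_ln, x_back)
```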
The limit state function is also transformed into the U-space, as

g(\mathbf{x}) = g_u(\mathbf{u}) = 0    (5.40)

FIGURE 5.31 Transformation of random variables from X to U, illustrated using a two-dimensional example. (a) PDF and projected contours in X-space. (b) Standard normal distribution in U-space. (c) Projected view on the u1-u2 plane.
The transformed limit state function separates the U-space into safe and failure regions, as illustrated in Figure 5.31b and c. After the transformation, the probability integration of Eq. 5.35 becomes

P_f = P(g_u(\mathbf{u}) \le 0) = \int_{g_u(\mathbf{u}) \le 0} \phi(\mathbf{u}) \, d\mathbf{u}    (5.41)

Because all of the random variables U are independent, the joint PDF is the product of the individual PDFs of standard normal distribution:

\phi(\mathbf{u}) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} u_i^2}    (5.42)

Therefore, the probability integration becomes

P_f = \int_{g_u(\mathbf{u}) \le 0} \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} u_i^2} \, du_1 \, du_2 \cdots du_n    (5.43)

Although Eq. 5.43 is easier to calculate than Eq. 5.35, its evaluation is still difficult because the limit state function gu(u) is, in general, a nonlinear function of the variables u.
If the nonlinearity of the limit state function is not too severe, the integration of Eq. 5.43 can be approximated by

P_f \approx \int_{-\infty}^{-\beta} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2} u^2} \, du = \Phi(-\beta)    (5.44)

where Φ is the standard normal distribution function of a single dimension, and β is the shortest distance between the origin and the limit state function gu(u) = 0. The point at this shortest distance is depicted as u∗ in Figure 5.32 and is called the β-point, the design point, or the most probable point. The β value is called the reliability index.
FIGURE 5.32 Approximation of failure probability integration, illustrated using a two-dimensional example. (a) PDF and projected contours in the U-space. (b) Projected view on the u1-u2 plane.
As illustrated in Figure 5.32a, the line that connects the origin and the MPP must be perpendicular to the tangent line Lu(u1,u2) = 0 at the MPP. Because the multidimensional joint PDF ϕ(u) is rotationally symmetric about the origin, its projection onto the plane normal to the tangent line is simply the one-dimensional standard normal probability density function. The probability integration of Eq. 5.35 can then be approximated by Φ(−β), as stated in Eq. 5.44.
As discussed above, the key step in calculating the failure probability using FORM or SORM is to locate the MPP in the U-space. Many numerical approaches have been developed for the MPP search; they fall into two major categories: the reliability index approach (RIA) and the performance measure approach (PMA). The reliability index approach employs a forward reliability analysis algorithm that computes the failure probability for a prescribed performance level in the limit state function. The performance measure approach employs an inverse reliability analysis algorithm that computes the response level for a prescribed failure probability. We briefly state both approaches; for more details, readers are referred to Chang (2013a).
The problem for MPP search using RIA can be formulated as follows:

Minimize:    \|\mathbf{u}\|
Subject to:  g_u(\mathbf{u}) = 0    (5.45)

in which the MPP is identified by searching for a point on the limit state function gu(u) = 0 whose distance from the origin of the U-space is minimum. Again, this distance β is called the reliability index, hence the name of the approach. Note that in Eq. 5.45 the performance level of the limit state function is prescribed; in this case, the MPP can only be searched for the prescribed performance level. Once the MPP is found, the reliability index β can be used to approximate the failure probability as Pf = Φ(−β).
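A minimal sketch of the RIA search of Eq. 5.45 is shown below, assuming independent normal random variables so that Eq. 5.39 provides the X-to-U mapping. The limit state g(x) = x1 − x2 and its parameters are hypothetical; because this g is linear in normal variables, FORM is exact for it and the result can be verified by hand.

```python
# A minimal RIA/FORM sketch (Eq. 5.45) for a hypothetical linear limit state.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

mu = np.array([30.0, 20.0])          # means of X1 (capacity) and X2 (demand)
sigma = np.array([3.0, 4.0])         # standard deviations

def g_u(u):                          # limit state mapped to U-space via Eq. 5.39
    x = mu + sigma * u
    return x[0] - x[1]               # failure when g <= 0

# Eq. 5.45: minimize the distance to the origin subject to g_u(u) = 0
# (0.5*u.u has the same minimizer as ||u|| and is smoother to optimize)
res = minimize(lambda u: 0.5 * u @ u, x0=np.zeros(2), method="SLSQP",
               constraints=[{"type": "eq", "fun": g_u}])
u_mpp = res.x                        # most probable point
beta = np.linalg.norm(u_mpp)         # reliability index
pf = norm.cdf(-beta)                 # Eq. 5.44: Pf ~ Phi(-beta)
print(u_mpp, beta, pf)               # beta = 2.0, Pf ~ 0.0228 for these numbers
```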
The performance measure approach, on the other hand, starts from a target reliability index β (or a target failure probability Pf, with β = −Φ⁻¹(Pf)). We then search for the MPP by bringing the limit state function gu(u) toward gu(u) = 0, at which the target performance level is achieved. The concept of PMA is illustrated in Figure 5.33 using a two-dimensional example. The required reliability index β is shown as a circle of radius β centered at the origin of the u1-u2 plane. Depending on the u value entered for the MPP search, the limit state function gu(u) is usually nonzero: if u is in the safe region, gu(u) > 0; if u is in the failure region, gu(u) < 0. The MPP search becomes finding the u on this circle that brings the limit state function gu(u) to gu(u) = 0. Therefore, the problem for the MPP search using PMA can be formulated as follows:
FIGURE 5.33 MPP search using the performance measure approach.

Minimize:    |g_u(\mathbf{u})|
Subject to:  \|\mathbf{u}\| = \beta    (5.46)

in which the reliability index β = −Φ⁻¹(Pf), and Pf is the required failure probability. Also, in Eq. 5.46, the limit state function gu(u) can be rewritten as

g_u(\mathbf{u}) = g_t'(\mathbf{u}) - g_t    (5.47)

where gt′(u) is the performance measure corresponding to the failure mode, and gt is the target performance level.
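The sketch below mirrors the PMA formulation of Eq. 5.46 for the same hypothetical limit state used in the RIA sketch; the target failure probability is also hypothetical. Because gu is positive everywhere on the β-circle in this example, the minimizer of |gu(u)| is the point on the circle closest to the limit state, and the positive value of gu there indicates that the target reliability is met.

```python
# A minimal PMA sketch (Eq. 5.46) for the same hypothetical limit state as above.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

mu, sigma = np.array([30.0, 20.0]), np.array([3.0, 4.0])

def g_u(u):                                    # limit state in U-space (Eq. 5.39)
    x = mu + sigma * u
    return x[0] - x[1]

pf_target = 0.05                               # required failure probability
beta_t = -norm.ppf(pf_target)                  # beta = -Phi^-1(Pf) ~ 1.645

# Eq. 5.46: minimize |g_u(u)| subject to ||u|| = beta_t
res = minimize(lambda u: abs(g_u(u)), x0=np.array([0.0, beta_t]), method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda u: np.linalg.norm(u) - beta_t}])
u_mpp = res.x
print(u_mpp, g_u(u_mpp))   # g_u > 0 on the whole circle: the target reliability is met
```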
Next, we formulate the standard reliability-based design optimization problems, in which we assume the RIA for the MPP search.

5.6.1.2. RBDO problem formulation

The classical design optimization problem based on deterministic analysis is typically formulated as a nonlinear constrained optimization problem, as discussed in Chapter 3. Similarly, the RBDO problem can be formulated as a nonlinear constrained optimization problem in which reliability measures are included as constraint functions. Probabilistic constraints in RBDO ensure a more evenly distributed failure probability among the components of a product. In general, the RBDO model contains two types of design variables: distributional design variables θ and conventional deterministic design variables b.
Let θ = [θ1,θ2,...,θn1]T and b = [b1,b2,...,bn2]T be the distributional and deterministic design variable vectors of dimensions n1 and n2, respectively. The RBDO problem can be formulated as follows:

Minimize:    f(\boldsymbol{\theta}, \mathbf{b})    (5.48a)

Subject to:  P_{f_i} = P(g_i(\boldsymbol{\theta}, \mathbf{b}) \le 0) - P_i^u \le 0, \quad i = 1, \ldots, m    (5.48b)

\theta_j^l \le \theta_j \le \theta_j^u, \quad j = 1, \ldots, n_1    (5.48c)

b_k^l \le b_k \le b_k^u, \quad k = 1, \ldots, n_2    (5.48d)

where f(θ, b) is the objective function, P(•) denotes the probability of the event (•), and Piu is the required upper bound of the failure probability Pfi for the ith constraint. In Eqs 5.48c and 5.48d, θjl and θju are the lower and upper bounds of the jth distributional design variable, and bkl and bku are the lower and upper bounds of the kth deterministic design variable, respectively.
The reliability constraints defined in Eq. 5.48b are assumed to be independent; thus, no correlation exists among them. As mentioned, it is almost impossible to calculate the failure probability Pfi in Eq. 5.48b by multidimensional integration for general design applications. Consequently, the FORM or other more efficient reliability analysis methods are employed. The computational flow of RBDO using the FORM is illustrated in Figure 5.34. Note that at each design iteration, the FORM must be carried out several times, once for each failure function. As formulated in Eq. 5.45, each FORM is itself equivalent to a deterministic optimization, which is computationally very demanding; this is the reason why RBDO is mostly limited to academic problems. Furthermore, the first-order derivatives of the failure probability with respect to both the distributional and the deterministic design variables must be computed to support gradient-based RBDO.
FIGURE 5.34 Computation flow for gradient-based RBDO.
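The double loop in Figure 5.34 can be sketched compactly when the distributional design variables are the means of independent normal random variables. Everything in the sketch below is hypothetical (objective, limit state, bounds, and required failure probability); the inner loop is the RIA of Eq. 5.45, and the outer loop solves Eq. 5.48 with finite-difference gradients, which a practical implementation would replace with the sensitivities of Eqs. 5.49-5.56.

```python
# A minimal double-loop RBDO sketch (Eq. 5.48, Figure 5.34); all data are hypothetical.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

sigma = np.array([3.0, 4.0])             # fixed standard deviations
pf_upper = 0.001                         # required upper bound on Pf (Eq. 5.48b)
beta_req = -norm.ppf(pf_upper)           # under FORM, Pf <= Pu  <=>  beta >= -Phi^-1(Pu)

def g_x(x):                              # single failure mode; failure when g <= 0
    return x[0] - x[1]

def form_beta(theta):
    """Inner loop: FORM/RIA (Eq. 5.45) at design theta (the means); returns beta."""
    g_u = lambda u: g_x(theta + sigma * u)
    res = minimize(lambda u: 0.5 * u @ u, x0=np.zeros(2), method="SLSQP",
                   constraints=[{"type": "eq", "fun": g_u}], options={"ftol": 1e-12})
    return np.linalg.norm(res.x)

f = lambda theta: theta[0] - 0.5 * theta[1]        # hypothetical cost objective

# Outer loop: minimize f subject to the reliability constraint and side bounds
res = minimize(f, x0=np.array([30.0, 20.0]), method="SLSQP",
               bounds=[(25.0, 40.0), (15.0, 25.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda th: form_beta(th) - beta_req}],
               options={"eps": 1e-4})
theta_opt = res.x
print(theta_opt, norm.cdf(-form_beta(theta_opt)))  # optimum near (30.45, 15.0), Pf ~ 0.001
```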
5.6.1.2.1. Reliability-based design sensitivity analysis
The sensitivity of the failure probability includes two parts: the sensitivity of the failure probability with respect to the distributional parameters θ of random variables (e.g., mean value, standard deviation), and the sensitivity of the failure probability with respect to the deterministic design variables b.
The derivative of the estimated failure probability Pf obtained using the FORM with respect to a design variable η, which can be either θj or bk, is

\frac{\partial P_f}{\partial \eta} = \frac{\partial \Phi(-\beta)}{\partial \eta} = \frac{\partial \Phi(-\beta)}{\partial \beta} \frac{\partial \beta}{\partial \eta} = -\phi(\beta) \frac{\partial \beta}{\partial \eta}    (5.49)

where φ is the standard normal probability density function. Therefore, to compute the sensitivity of the failure probability Pf, ∂β/∂η must be computed as

\frac{\partial \beta}{\partial \eta} = \frac{\partial (\mathbf{U}^T \mathbf{U})^{1/2}}{\partial \eta} = \frac{1}{\beta} \mathbf{U}^T \frac{\partial \mathbf{U}}{\partial \eta}    (5.50)

where U is the MPP found in the U-space.
The sensitivity of the reliability index with respect to a distributional design variable θj can be obtained by substituting η = θj and U = T(X, θ) in Eq. 5.50 as

\frac{\partial \beta}{\partial \theta_j} = \frac{1}{\beta} \mathbf{U}^T \left( \frac{\partial T(\mathbf{X}, \boldsymbol{\theta})}{\partial \theta_j} + \frac{\partial T(\mathbf{X}, \boldsymbol{\theta})}{\partial \mathbf{X}} \frac{\partial \mathbf{X}}{\partial \theta_j} \right) = \frac{1}{\beta} \mathbf{U}^T \frac{\partial T(\mathbf{X}, \boldsymbol{\theta})}{\partial \theta_j}    (5.51)

For the normally distributed random variables, where T can be explicitly written as a transformation function of θ, Eq. 5.51 can be calculated analytically. For the non-normally distributed random variables, the transformation T cannot be obtained explicitly. In such a case, the finite difference method can be used to approximate the derivative of T with respect to θj.
As discussed, the reliability index β is the distance between the origin and the MPP in the U-space. The MPP vector U on the failure surface can be written as

\mathbf{U} = -\beta \frac{\nabla g(\mathbf{U}, \mathbf{b})}{|\nabla g(\mathbf{U}, \mathbf{b})|}    (5.52)

in which ∇g(U,b) is the gradient of the failure function at the MPP:

\nabla g(\mathbf{U}, \mathbf{b}) = \frac{\partial g(\mathbf{U}, \mathbf{b})}{\partial \mathbf{U}}    (5.53)

From Eq. 5.52, the MPP vector U is also a function of b because the failure function g depends on the deterministic design variables b. Substituting Eq. 5.52 into Eq. 5.50 yields

\frac{\partial \beta}{\partial \mathbf{b}} = -\frac{1}{|\nabla g(\mathbf{U}, \mathbf{b})|} \frac{\partial g^T(\mathbf{U}, \mathbf{b})}{\partial \mathbf{U}} \frac{\partial \mathbf{U}}{\partial \mathbf{b}}    (5.54)

By taking the derivative of g(U, b) = 0 with respect to b,

\frac{\partial g^T(\mathbf{U}, \mathbf{b})}{\partial \mathbf{U}} \frac{\partial \mathbf{U}}{\partial \mathbf{b}} + \frac{\partial g(\mathbf{U}, \mathbf{b})}{\partial \mathbf{b}} = 0    (5.55)

Substituting Eq. 5.55 into Eq. 5.54 yields

\frac{\partial \beta}{\partial \mathbf{b}} = \frac{1}{|\nabla g(\mathbf{U}, \mathbf{b})|} \frac{\partial g^T(\mathbf{U}, \mathbf{b})}{\partial \mathbf{b}}    (5.56)

Note that evaluation of Eq. 5.56 needs only the first-order derivative of the failure function with respect to deterministic design variables.
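The sketch below evaluates Eq. 5.56 for a hypothetical limit state with one deterministic design variable b, g(x; b) = b·x1 − x2, with independent normal X1 and X2. The analytic sensitivity from Eq. 5.56 is compared against a finite difference of two FORM solutions; the agreement illustrates that only ∂g/∂b at the MPP is needed.

```python
# A minimal sketch of the design sensitivity in Eq. 5.56 (hypothetical limit state).
import numpy as np
from scipy.optimize import minimize

mu, sigma = np.array([30.0, 20.0]), np.array([3.0, 4.0])

def g_u(u, b):                         # limit state g(x; b) = b*x1 - x2 in U-space
    x = mu + sigma * u
    return b * x[0] - x[1]

def form(b):
    """RIA MPP search (Eq. 5.45); returns the reliability index and the MPP."""
    res = minimize(lambda u: 0.5 * u @ u, x0=np.zeros(2), method="SLSQP",
                   constraints=[{"type": "eq", "fun": lambda u: g_u(u, b)}],
                   options={"ftol": 1e-12})
    return np.linalg.norm(res.x), res.x

b0 = 1.0
beta0, u_mpp = form(b0)
x_mpp = mu + sigma * u_mpp

# Eq. 5.56: dbeta/db = (1/|grad_u g|) * (dg/db), both evaluated at the MPP
grad_u_g = np.array([b0 * sigma[0], -sigma[1]])    # gradient of g_u with respect to u
dg_db = x_mpp[0]                                   # dg/db = x1 at the MPP
dbeta_db = dg_db / np.linalg.norm(grad_u_g)

# Finite-difference check: re-run FORM at perturbed designs
h = 1.0e-4
dbeta_fd = (form(b0 + h)[0] - form(b0 - h)[0]) / (2.0 * h)
print(dbeta_db, dbeta_fd)              # both approximately 5.28 for these numbers
```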
To avoid the prohibitively expensive computation required by the large number of reliability analyses in a batch-mode RBDO, a mixed design approach (Yu et al. 1997) is employed that combines deterministic design optimization in batch mode with reliability-based design in an integrated mode, as discussed in Chapter 3. The mixed design approach starts with a deterministic design optimization, in which the performance measures employed as failure modes are defined as constraint functions. After an optimal design is obtained, a reliability analysis is performed to ascertain whether the deterministic optimal design is reliable. If the failure probability of the deterministic optimal design is found to be unacceptable, a reliability-based design approach that employs a set of interactive design steps, such as trade-off analysis and what-if study, is used to obtain a near-optimal design that is reliable at an affordable computational cost. A tracked-vehicle roadarm is employed to illustrate the approach next.

5.6.1.3. RBDO for a tracked-vehicle roadarm

A roadarm of the military tracked-vehicle shown in Figure 3.26a is employed to illustrate the mixed design approach for the RBDO. A deterministic design optimization is presented first. Then, reliability analysis using the FORM is discussed. The reliability-based design obtained using the interactive design process follows.
5.6.1.3.1. Deterministic design optimization
A 17-body dynamic simulation model (Chang 2013a) is created to drive the tracked vehicle on a proving ground at a constant speed of 20 miles per hour. A 20-s dynamic simulation is performed with a maximum integration time step of 0.05 s using the dynamic analysis and design system DADS (Haug and Smith 1990). The joint reaction forces applied at the wheel end of the roadarm, as well as the accelerations, angular velocities, and angular accelerations of the roadarm, are obtained from the dynamic simulation. Four beam elements (STIF4) and 310 20-node isoparametric finite elements (STIF95) of ANSYS are used for the roadarm finite element model shown in Figure 5.35. The roadarm is made of S4340 steel, and the length between the centers of the two holes is 20 in.
The fatigue life fringe plot is shown in Figure 5.36. At the initial design, the structural volume is 486.7 in.³. The crack initiation lives at 24 critical points (with node IDs shown in Figure 5.36) are defined as the constraints, with a lower bound of 9.63 × 10⁶ blocks (20 s per block). Note that this lower bound is equivalent to a 20-year service life, assuming the tracked vehicle is operated 8 h per day, 5 days per week. Definitions of the objective function and five critical constraint functions are listed in Table 5.5.
FIGURE 5.35 Roadarm finite element model.
FIGURE 5.36 Fringe plots of crack initiation life.
FIGURE 5.37 Design variable definition.

Table 5.5

Objective and Critical Constraint Functions

Function | Description | Lower Bound | Current Design | Status
--- | --- | --- | --- | ---
Objective | Volume | – | 487.678 in.³ | –
Constraint 1 | Node 1216 | 9.63 × 10⁶ (20 years) | 9.631 × 10⁶ bks | Active
Constraint 2 | Node 926 | 9.63 × 10⁶ (20 years) | 8.309 × 10⁷ bks | Inactive
Constraint 3 | Node 1544 | 9.63 × 10⁶ (20 years) | 8.926 × 10⁷ bks | Inactive
Constraint 4 | Node 1519 | 9.63 × 10⁶ (20 years) | 1.447 × 10⁸ bks | Inactive
Constraint 5 | Node 1433 | 9.63 × 10⁶ (20 years) | 2.762 × 10⁸ bks | Inactive

For shape design parameterization, eight design variables are defined to characterize the geometric shapes of the four intersections, as shown in Figure 5.37. The profile of the cross-sectional shape is composed of four straight lines and four cubic curves. Side expansions (in the x1-direction) of the cross-sectional shapes are defined using design variables b1, b3, b5, and b7 for intersections 1 to 4, respectively. Vertical expansions (in the x3-direction) of the cross-sectional shapes are defined using the remaining four design variables.
A deterministic optimal design is obtained in six design iterations using the modified feasible direction method in the design optimization tool (DOT). As shown in Table 5.6, at the deterministic optimal design all fatigue lives are greater than the lower bound and the objective function is reduced by 10.5%. The geometric shapes of the roadarm at the initial and deterministic optimal designs are shown in Figure 5.38.
5.6.1.3.2. Probabilistic fatigue life predictions
The random variables and their statistical values for the crack initiation life predictions are listed in Table 5.7, including material and tolerance random variables. The eight tolerance random variables b1 to b8 are defined corresponding to the eight shape design variables defined in Figure 5.37.
FIGURE 5.38 Geometric shape of the roadarm in front and top views. (a) Initial design. (b) Deterministic optimal design.

Table 5.6

Objective and Critical Constraint Function Values at Initial and Deterministic Optimal Designs

Function | Description | Initial Design | Deterministic Optimal Design | Changes
--- | --- | --- | --- | ---
Objective | Volume | 487.678 in.³ | 436.722 in.³ | 10.5%
Constraint 1 | Node 1216 | 9.631 × 10⁶ bks | 7.704 × 10⁷ bks | 699.9%
Constraint 2 | Node 926 | 8.309 × 10⁷ bks | 9.631 × 10⁶ bks | 88.4%
Constraint 3 | Node 1544 | 8.926 × 10⁷ bks | 9.678 × 10⁶ bks | 89.2%
Constraint 4 | Node 1519 | 1.447 × 10⁸ bks | 4.698 × 10⁷ bks | 67.5%
Constraint 5 | Node 1433 | 2.762 × 10⁸ bks | 4.815 × 10⁸ bks | 74.3%

FORM is used to calculate the reliability of the crack initiation life at five critical points. The results shown in Table 5.8 indicate that the failure probability at nodes 926 and 1544 is greater than 3%. Because the failure probability of the roadarm at the deterministic optimal design is too high, a reliability-based design must be conducted to reduce the failure probability (to obtain a feasible design).
5.6.1.3.3. Reliability-based design
For the reliability-based design, the mean values of the eight shape parameters shown in Figure 5.37 are chosen as the design variables. The objective function is still the structural volume. The constraint functions are the failure probabilities of the fatigue life at the five critical points, with an upper bound of 1% (i.e., the required reliability of a fatigue life longer than 20 years is 99%). Table 5.8 shows that the starting design (the deterministic optimum) is infeasible because the second and third constraints are violated. The reliability-based design sensitivity analysis (DSA) method discussed above is used to calculate the sensitivity coefficients of the fatigue failure probability with respect to the design variables.

Table 5.7

Definition of Random Variables

Random Variables | Mean Value | Standard Deviation | Distribution
--- | --- | --- | ---
Young's modulus E (psi) | 30.0 × 10⁶ | 0.75 × 10⁶ | LogNormal
Fatigue strength coefficient σf | 1.77 × 10⁵ | 0.885 × 10⁴ | LogNormal
Fatigue ductility coefficient εf | 0.41 | 0.0205 | LogNormal
Fatigue strength exponent b | -0.073 | 0.00365 | Normal
Fatigue ductility exponent c | -0.6 | 0.003 | Normal
Tolerance b1 (in.) | 2.889 | 0.032450 | Normal
Tolerance b2 (in.) | 1.583 | 0.019675 | Normal
Tolerance b3 (in.) | 2.911 | 0.031703 | Normal
Tolerance b4 (in.) | 1.637 | 0.019675 | Normal
Tolerance b5 (in.) | 2.870 | 0.031703 | Normal
Tolerance b6 (in.) | 2.420 | 0.026352 | Normal
Tolerance b7 (in.) | 2.801 | 0.032496 | Normal
Tolerance b8 (in.) | 4.700 | 0.050568 | Normal

Table 5.8

Objective and Failure Function Values at Deterministic Optimal and Improved Designs

Function | Description | Deterministic Optimal Design | Improved Design (2 RBDOs) | Changes
--- | --- | --- | --- | ---
Objective | Volume | 436.722 in.³ | 447.691 in.³ | 2.5%
Constraint 1 | Node 1216 | 0.476% | 0.532% | –
Constraint 2 | Node 926 | 3.24% | 0.992% | –
Constraint 3 | Node 1544 | 3.21% | 0.998% | –
Constraint 4 | Node 1519 | 0.83% | 0.721% | –
Constraint 5 | Node 1433 | 0.023% | 0.018% | –

Because the current design is infeasible, a constraint correction algorithm is selected for the trade-off analysis. Using the sensitivity coefficients, a quadratic programming (QP) subproblem is solved to search for a direction in which the reliability increases most rapidly. A what-if study is then performed along the search direction suggested by the trade-off study with a chosen step size.
Through two design iterations, a feasible design is achieved, as shown in Tables 5.8 and 5.9. The two design iterations took 10 FORMs and two reliability-based DSAs. At the improved design, the failure probabilities at the five critical points are less than 1%, with a 2.5% increase in volume. The total volume saving relative to the initial design is still 8%, that is, from 487 in.³ to 447 in.³. The design variable values at the initial, deterministic optimal, and improved (after two interactive RBDO iterations) designs are listed in Table 5.9.

Table 5.9

Design Variable Values at Initial, Deterministic Optimal, and Improved Designs

Design Variables | Initial Design | Deterministic Optimal Design | Improved Design
--- | --- | --- | ---
b1 (in.) | 3.250 | 2.889 | 2.902
b2 (in.) | 1.968 | 1.583 | 1.593
b3 (in.) | 3.170 | 2.911 | 2.925
b4 (in.) | 1.968 | 1.637 | 1.687
b5 (in.) | 3.170 | 2.870 | 2.904
b6 (in.) | 2.635 | 2.420 | 2.442
b7 (in.) | 3.170 | 2.801 | 2.881
b8 (in.) | 5.057 | 4.700 | 4.700

5.6.2. Design Optimization for Structural Performance and Manufacturing Cost

In the mechanical and aerospace industries, engineers often confront the challenge of designing components that can sustain structural loads and meet functional requirements (e.g., automotive suspension and engine components). It is imperative that these components use the minimum material, to reduce cost and to increase the efficiency of the mechanical system (e.g., fuel consumption). The geometry of these components is usually complicated because of strength and efficiency requirements, which often increases manufacturing time and cost. Some of these structural components are shown in Figure 5.39.
Although structural optimization has been widely used for many decades to solve such problems, the primary focus has been on the functionality aspects of the design. During the course of an optimization, the geometric complexity of the component may increase, making manufacturing difficult or uneconomical. Due to the increasing geometric complexity, conventional structural optimization problems that define mass as the objective function may not yield components with minimum cost from the manufacturing perspective. This especially holds true for machined components because machining cost is often the dominant constituent of product cost for such components.
FIGURE 5.39 Mechanical components involving time-consuming and precision machining operations. (a) Upright in an automotive suspension. (b) Engine block.
In this subsection, we present a case study of structural shape optimization that incorporates machining and material costs as the objective function subject to structural performance constraints. Structural shape optimization reduces material, but it may be accompanied by increases in the geometric complexity due to changes in the design boundary, which ultimately increases manufacturing cost. In this case study, we present a design process that incorporates manufacturing costs into structural shape optimization and produces components that are cost effective and satisfy specified structural performance requirements.

5.6.2.1. Design problem definition and optimization process

As discussed in Chapter 3, a typical single-objective optimization problem is defined as follows:

Minimize:    f(\mathbf{b})    (5.57a)

Subject to:  g_i(\mathbf{b}) \le 0, \quad i = 1, \ldots, m    (5.57b)

h_j(\mathbf{b}) = 0, \quad j = 1, \ldots, p    (5.57c)

b_k^l \le b_k \le b_k^u, \quad k = 1, \ldots, n    (5.57d)

where f(b) is the objective function, b is the vector of design variables, gi(b) is the ith inequality constraint, and hj(b) is the jth equality constraint. The objective function f(b) is the product cost of the component, to be discussed shortly.
It is assumed that the designer has all of the required data, such as the initial shape of the component, boundary and loading conditions, material properties, and machining sequences. A solid model of the component is created using solid features in CAD software. Dimensions of the solid features also serve as design variables for the optimization problem. A virtual machining (VM) model is created by defining appropriate machining operations and sequences based on the given solid model. The machining parameters specific to each machining sequence are also specified in the VM model. Similarly, an FEA model is constructed using given boundary and loading conditions, as well as initial geometric information.
The design velocity field is then computed for sensitivity analysis and finite element mesh updates. FEA and VM are conducted to evaluate structural and machining performance measures, respectively. Machining time obtained from the VM model is important for the calculation of machining costs. Data obtained from FEA and VM models are used to evaluate objective and constraint functions. Design sensitivity analysis is conducted to compute the gradients of the objective function and constraints with respect to changes in the design variables. The gradients and values of the objective and constraint functions are passed to an optimization algorithm, which determines the design changes for the next design iteration. FEA and VM models are then updated using these changes, and the process is iterated as shown in Figure 5.40, until an optimal design is achieved.
FIGURE 5.40 The design optimization process.

5.6.2.2. Manufacturing cost model

The objective function f(b) in Eq. 5.57a is defined as follows (Edke and Chang 2006):

f(\mathbf{b}) = C_{mat} \, \gamma \, V(\mathbf{b}) + C_{mc} \, \tau(\mathbf{b}) + C_t    (5.58)

where the three terms on the right-hand side represent material cost, machining cost, and tooling cost, respectively. In Eq. 5.58, Cmat is the material cost rate ($/lb); γ is the specific weight of the material; V(b) is the volume of the component that depends on design; Cmc is the machining cost rate ($/min); τ(b) is the time required to machine one component, which also depends on design; and Ct is the tooling cost ($).
The machining cost rate accounts for the cost of actual machining operations, machine shop overheads, operator wages and overheads, indirect costs such as cost of electricity, and machine depreciation (Dieter 1991). Cmc is given as

C_{mc} = \frac{1}{60} \left[ \frac{M(100 + OH_M)}{100} + \frac{W(100 + OH_{OP})}{100} \right]    (5.59)

In this equation, M is the cost of machine operation per hour ($/h); OHM is the machine overhead rate (%); W is the hourly wage for the operator ($/h); and OHOP is the operator overhead rate (%). The unit time τ(b) in Eq. 5.58 is the sum of the machining time tmc and the idle time ti, τ = tmc + ti, in which

t_{mc} = \frac{L}{v_f} + \frac{L'}{v_r}; \qquad t_i = t_{set} + t_{ch} + t_{hand} + t_{down} + t_{ins}    (5.60)

In Eq. 5.60, L is the total tool travel while cutting (m); L′ is the tool travel during rapid traverse (m); vf is the feed rate (m/min); vr is the rapid traverse velocity (m/min); tset is the job setup time per part (min); tch is the time for tool change (min); thand is the work-piece handling time (min); tdown is the machine down time (min); and tins is the time for in-process inspection (min). The cost of tooling, which includes cost of cutting tools and cost of jigs/fixtures, is given as

C_t = \frac{t_{mc}}{T_{tl}} \left( \frac{K_i}{n_i} + \frac{K_h}{n_h} \right) + C_f    (5.61)

where Ttl is the total tool life (min); Ki is the cost of the insert ($); ni is the number of cutting edges; Kh is the tool holder cost ($); nh is the number of cutting edges in the tool-holder; and Cf is the cost of jigs/fixtures ($). Machining time tmc is obtained from VM. Similarly, cost models for other machining processes can be developed.
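The cost model of Eqs. 5.58-5.61 can be written down directly; the sketch below does so with placeholder rates, times, and tool data that are purely hypothetical (they are not the values used in the case study later in this section).

```python
# A minimal sketch of the product cost model (Eqs. 5.58-5.61); all numbers are placeholders.
def machining_cost_rate(M, OH_M, W, OH_OP):
    """Eq. 5.59: machining cost rate Cmc in $/min."""
    return (M * (100.0 + OH_M) / 100.0 + W * (100.0 + OH_OP) / 100.0) / 60.0

def unit_time(L, vf, L_rapid, vr, t_set, t_ch, t_hand, t_down, t_ins):
    """Eq. 5.60: machining time tmc and unit time tau = tmc + ti, in minutes."""
    t_mc = L / vf + L_rapid / vr
    t_i = t_set + t_ch + t_hand + t_down + t_ins
    return t_mc, t_mc + t_i

def tooling_cost(t_mc, T_tl, K_i, n_i, K_h, n_h, C_f):
    """Eq. 5.61: tooling cost Ct in $."""
    return (t_mc / T_tl) * (K_i / n_i + K_h / n_h) + C_f

def product_cost(C_mat, gamma, V, C_mc, tau, C_t):
    """Eq. 5.58: material cost + machining cost + tooling cost."""
    return C_mat * gamma * V + C_mc * tau + C_t

# Hypothetical usage (consistent units assumed: m, m/min, min, $/lb, lb/in^3, in^3)
C_mc = machining_cost_rate(M=45.0, OH_M=120.0, W=25.0, OH_OP=80.0)
t_mc, tau = unit_time(L=12.0, vf=0.25, L_rapid=5.0, vr=5.0,
                      t_set=2.0, t_ch=1.0, t_hand=1.5, t_down=0.5, t_ins=1.0)
C_t = tooling_cost(t_mc, T_tl=60.0, K_i=12.0, n_i=4, K_h=80.0, n_h=400, C_f=150.0)
print(product_cost(C_mat=1.2, gamma=0.283, V=450.0, C_mc=C_mc, tau=tau, C_t=C_t))
```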

5.6.2.3. Virtual manufacturing

Virtual manufacturing is a simulation-based method that helps engineers define, simulate, and visualize the manufacturing process in a computer environment. By using virtual manufacturing, the manufacturing process can be defined and verified early in the design process, and the manufacturing time can be estimated. Material cost and manufacturing time constitute a significant portion of the product cost. Virtual machining operations, such as milling, turning, and drilling, allow designers to conduct machining process planning, generate machining tool paths, visualize and simulate machining operations, and estimate machining time. Moreover, the generated tool path can be converted into computer numerical control (CNC) codes (M-codes and G-codes) (Lee 1999; Chang 2013b) and loaded into CNC machines, such as HAAS mills (www.haascnc.com), to machine functional parts as well as dies or molds for production.
Geometrically complex parts are commonly found in the automotive and aerospace industries, where molds and dies are manufactured. The time taken to manufacture molds or dies of different sizes and complexities ranges from 1200 to 3800 h (Sarma and Dutta 1997). Considerable time (49–72%) is spent in contour surface milling for the design surfaces (the surfaces to be machined) of molds and dies (Sarma and Dutta 1997). Design changes in structural geometry due to functional considerations may affect the manufacturing cost significantly. Among different types of machining processes, contour surface milling is most applicable for mold and die machining. Existing commercial CAM tools, such as Pro/MFG (www.ptc.com), support contour surface milling using various methods, including isoparametric (Kai et al. 1995), constant curvature (Lo and Hsaio 1998), and constant scallop height (Sarma and Dutta 1997).
Typical milling operations that make molds or dies include pocket milling and surface contour milling. The quality and accuracy of the machined surfaces are largely determined by surface contour milling. In this section, surface contour milling will be briefly discussed to illustrate the connection between manufacturing time and structural geometric shape determined by performance requirements.
As discussed in Chapter 4, parametric surfaces are employed to parameterize the design boundary. The same surfaces will be assumed for machining. In general, the cutter contact (CC) curves are first generated on the design surface, as shown in Figure 5.41. The CC curves are created, for example, by uniformly splitting parameter u or w:

C_i(w) = S(u_i, w), \quad u_i \in [u_{min}, u_{max}]    (5.62)

where Ci(w) is the ith CC curve (an isoparametric curve of constant u), and S(u, w) is the parametric design surface. The CC points are discrete points generated along the CC curve; they form a piecewise linear approximation of the CC curve for the CNC controller to trace. The deviation between the CC curve and the polyline formed by the CC points must be less than a prescribed chord tolerance, as illustrated in Figure 5.42. CL (cutter location) points are then obtained by offsetting from the CC points, considering the workcell and cutter shape, as illustrated in Figure 5.43.
FIGURE 5.41 Design surface and CC curves.
FIGURE 5.42 CC points and CC curve.
Machining time tmc can be estimated from the length of the CL polyline and a prescribed feed rate. As can be seen in Figure 5.44, the scallops remaining on the design surface are not uniform. A uniform scallop (constant scallop height) is more desirable for the follow-on grinding operations; this can be reasonably achieved by moving the cutter over the design surface with uniform physical spacing, as illustrated in Figure 5.45.
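The sketch below estimates machining time from isoparametric CC curves in the spirit of Eq. 5.62 and Figures 5.41-5.43. The design surface, tool radius, and feed rate are hypothetical; uniform sampling along w stands in for the chord-deviation check, and a ball-end cutter offset along the surface normal gives the CL points. The design variable b (the surface height) is included so the same routine can be reused for the sensitivity calculation in the next subsection.

```python
# A minimal sketch: machining time from CC/CL points on a hypothetical parametric surface.
import numpy as np

def surface(u, w, b):
    """Hypothetical design surface S(u, w; b) over [0,1]x[0,1], in inches."""
    return np.array([4.0 * u, 4.0 * w, b * np.sin(np.pi * u) * np.cos(np.pi * w)])

def unit_normal(u, w, b, h=1.0e-5):
    """Unit surface normal from finite-difference tangent vectors."""
    su = (surface(u + h, w, b) - surface(u - h, w, b)) / (2.0 * h)
    sw = (surface(u, w + h, b) - surface(u, w - h, b)) / (2.0 * h)
    n = np.cross(su, sw)
    return n / np.linalg.norm(n)

def machining_time(b, n_cc=20, n_pts=100, tool_radius=0.25, feed_rate=10.0):
    """Sum the CL polyline lengths over all CC curves and divide by the feed rate."""
    total = 0.0
    for ui in np.linspace(0.0, 1.0, n_cc):          # CC curves C_i(w) = S(u_i, w), Eq. 5.62
        ws = np.linspace(0.0, 1.0, n_pts)           # CC points along each curve
        cl = np.array([surface(ui, wj, b) + tool_radius * unit_normal(ui, wj, b)
                       for wj in ws])               # ball-end cutter CL points
        total += np.sum(np.linalg.norm(np.diff(cl, axis=0), axis=1))
    return total / feed_rate                        # minutes, for feed rate in in./min

print(machining_time(b=0.5))
```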

5.6.2.4. Design sensitivity analysis

As discussed in Chapter 4, design sensitivity analysis calculates the gradients of the objective and constraint functions with respect to the design variables. Shape sensitivity analysis for structural performance measures has been developed for many years (Choi and Kim 2006) and was briefly discussed in Chapter 4. The sensitivity of machining time due to changes in the design surface can be calculated for specific machining sequences. For example, the sensitivity of machining time for contour surface milling using the isoparametric method can be obtained as follows. First, a new parametric surface is generated for a small change δbk in the kth design variable. The CC curves on the perturbed parametric surface can be created by
FIGURE 5.43 Offset CC for CL points: (a) CL and CC lines (3-axis mill) and (b) cutter offset and scallop.
FIGURE 5.44 Uneven scallops.
FIGURE 5.45 Uniformly spaced toolpath.

C_i(w; \mathbf{b} + \delta b_k) = S(u_i, w; \mathbf{b} + \delta b_k), \quad u_i \in [u_{min}, u_{max}]    (5.63)

where ui is the u-parametric coordinate of the ith CC curve, which is kept the same before and after the design perturbation. Following the procedures discussed above, the CC points and CL points are calculated next, considering the chord tolerance, workcell, and cutter shape. The new machining time tmc(b + δbk) can then be calculated by dividing the length of the polyline formed by the new CL points by the same prescribed feed rate. The sensitivity of the machining time can then be approximated by

\frac{\partial t_{mc}}{\partial b_k} \approx \frac{t_{mc}(\mathbf{b} + \delta b_k) - t_{mc}(\mathbf{b})}{\delta b_k}    (5.64)

However, the overall finite difference method is probably more general and more straightforward to implement in CAM, especially when considering other machining sequences, such as constant scallop height surface contour milling.
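Under the assumption that some routine machining_time(b), such as the hypothetical CC/CL sketch in the previous subsection, regenerates the tool path and returns the machining time for a perturbed design, Eq. 5.64 reduces to a one-line computation per design variable, as sketched below.

```python
# A minimal sketch of Eq. 5.64: finite-difference sensitivity of machining time
# with respect to the k-th design variable.
def dtmc_dbk(machining_time, b_k, delta=1.0e-3):
    """machining_time(b_k) is assumed to rebuild the CC/CL points for the
    perturbed surface (Eq. 5.63) and return the machining time (Eq. 5.64)."""
    return (machining_time(b_k + delta) - machining_time(b_k)) / delta

# Example with the hypothetical surface sketch above:
# print(dtmc_dbk(machining_time, 0.5))
```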

5.6.2.5. Software implementation

The design optimization process is implemented using commercial CAD/CAM/FEA and design optimization tools, as illustrated in Figure 5.46. The shaded boxes show the commercial tools, while the plain boxes show software modules that need to be developed.
As a sample implementation (Edke and Chang 2006), SolidWorks and Pro/ENGINEER were selected for CAD modeling, ANSYS (www.ansys.com) and Pro/MECHANICA (www.ptc.com) for finite element modeling, Pro/MFG for virtual machining, and DOT for optimization. MATLAB and C/C++ are used to construct application programs for data transfer and mathematical computations.
Note that after a change is made in the model dimensions, Pro/MFG updates the tool path for an NC (numerical control) sequence only after that particular NC sequence is run; this is when the tool path computations for the sequence are performed. Although most of the automation in Pro/ENGINEER can be achieved using Pro/TOOLKIT (e.g., for CAD model updates), Pro/TOOLKIT does not provide any function to perform the tool path computations. Tool path generation in Pro/MFG has to be done interactively by making a series of Pro/ENGINEER menu and dialog box selections. To overcome this problem, the mapkeys feature in Pro/ENGINEER is used.
Mapkeys are similar to the macros used in many application packages. A mapkey is a keyboard macro that maps frequently used command sequences to a certain set of keyboard keys. The recording of a mapkey and its value is shown in Figure 5.47 as an example. Once a mapkey is recorded, it is saved in a configuration file, with each macro beginning on a new line. The value of the mapkey (the string of commands) can be copied into a Pro/TOOLKIT application as a command string. Using a Pro/TOOLKIT function, the commands are loaded into a stack and executed sequentially after control returns to Pro/ENGINEER from the Pro/TOOLKIT application. Using these mapkeys, a Pro/TOOLKIT application is constructed to run the machining sequences and to extract the total machining time.
FIGURE 5.46 Optimization flow with required software modules.

5.6.2.6. Aircraft torque tube example

The torque tube shown in Figure 5.48 is a structural component located inside the wings of an aircraft. Loads are applied to the three brackets, and the bottom face of the tube is bolted to the wing flap. The tube is made of AL2024-T351 with a yield strength of 43 ksi. Seven rectangular holes are created between fins to reduce the weight of the torque tube, as shown in Figure 5.49.
FIGURE 5.47 Recording a mapkey in Pro/ENGINEER.
FIGURE 5.48 Airplane torque tube.
FIGURE 5.49 Torque tube with holes.
5.6.2.6.1. Problem definition
The objective of the torque tube optimization problem is to minimize the product cost subject to limits on structural performance measures. The objective function, which is similar to Eq. 5.58, is defined as:

Minimize:    f(\mathbf{b}) = C_{mat} \, \gamma \, \{V_0 - [5V_1(b_1, b_2) + V_2(b_3, b_4) + V_3(b_5, b_6)]\} + \frac{1}{60}(t_{mc} + t_i) \left[ \frac{M(100 + OH_M)}{100} + \frac{W(100 + OH_{OP})}{100} \right]    (5.65a)

Subject to:  \sigma_1^{max}(\mathbf{b}), \sigma_2^{max}(\mathbf{b}), \ldots, \sigma_{12}^{max}(\mathbf{b}) \le 21.5\ \text{ksi}    (5.65b)

b_j^l \le b_j \le b_j^u, \quad j = 1, \ldots, 6    (5.65c)

Note that tooling cost is ignored in this problem for simplicity. The tube volume is computed by subtracting the volume of the holes from the total volume. Because the five holes (1, 2, 5, 6, and 7) are grouped together, they share only two design variables. The maximum principal stresses at 12 locations are defined as constraint functions (see Figure 5.51 for some of the high-stress locations). The limit on the maximum stress is 21.5 ksi. It is apparent that changes in the hole sizes vary the weight of the tube, may affect its structural integrity, and influence its machining time. The design variable bounds are listed in Table 5.10.
5.6.2.6.2. Design parameterization
Parameterization of the torque tube holes is shown in Figure 5.50. The hole depth and half of the hole length are selected as design variables. When the length of a hole is changed, the hole either expands or contracts symmetrically. Also, the position of each hole is maintained so that it always remains centered between the adjacent fins. From initial tests, it was observed that the maximum stress occurs near the middle bracket. Hence, except for the two holes adjacent to the middle bracket, the design variables of all other holes are grouped together, implying that the depths and lengths of holes 1, 2, 5, 6, and 7 are changed by the same amounts, respectively. This reduces the number of design variables from 14 (2 design variables per hole times 7 holes) to 6.
FIGURE 5.50 Parameterization of torque tube holes.

Table 5.10

Upper and Lower Bounds of Design Variables for the Torque Tube

Design Variable | Lower Limit (in.) | Upper Limit (in.)
--- | --- | ---
b1 (length of holes 1, 2, 5, 6, and 7) | 0.90 | 2.30
b2 (depth of holes 1, 2, 5, 6, and 7) | 1.50 | 1.95
b3 (length of hole 3) | 1.10 | 2.50
b4 (depth of hole 3) | 1.50 | 1.95
b5 (length of hole 4) | 0.60 | 1.40
b6 (depth of hole 4) | 1.50 | 1.95

5.6.2.6.3. Finite element analysis
The finite element model constructed in Pro/MECHANICA consists of 11,502 elements. A p-convergence study is conducted at the outset to fix the polynomial level of the element shape function for the analysis. The criterion specified is 3.5% strain energy convergence. The polynomial order is fixed at 7. The FEA solves 2,242,623 equations. Because the model is constrained by fixing displacements at finite element nodes at some locations, concentrated high stresses at such nodes are neglected.
The highest maximum principal stress is located at one of the holes and the inner edge of the middle bracket, as shown in Figure 5.51. The maximum stress magnitude is 20.70 ksi, which is close to the constraint limit (21.5 ksi). Note that a safety factor of 2 is used.
The most critical information required for performing design optimization in this example is the design velocity field. Once the velocity field is obtained, the remaining optimization process becomes routine. For the torque tube, the design boundaries (hole surfaces) are plane surfaces, as shown in Figure 5.52. This simplifies the velocity field calculations, as the prescribed displacement itself is now the boundary velocity. Thus, only domain velocity field calculation is required.
FIGURE 5.51 Finite element analysis results.
FIGURE 5.52 Velocity field computation for length design variables.
As shown in Figure 5.52, the fins and bottom surfaces are fixed. For the length design variables, a prescribed displacement of 0.1 in. is applied on the two end surfaces of the hole along the longitudinal direction. Roller boundary conditions are applied to the bottom surface of the holes. FEA is conducted to calculate the displacement of the finite element nodes, which is the design velocity field of the length design variable. To calculate the design velocity of the depth design variable, a prescribed displacement of 0.1 in. is applied to the bottom surface of the holes, and the two longitudinal end surfaces are fixed.

Table 5.11

Machining Parameters and Machining Time

Parameter name | Value
--- | ---
Tool | 0.5 in. end mill
Feed rate | 10 in./min
Spindle speed | 1250 rpm
Step depth | 0.2 in.
Machining time (initial design) | 41.26 min

5.6.2.6.4. Virtual machining
The VM model defined in Pro/MFG consists of an assembly of the torque tube without holes and the torque tube with holes. The machining parameters are summarized in Table 5.11. A customized pocket milling sequence (Figure 5.53) is defined in Pro/MFG to simulate the machining process. The time required to machine all the holes is 41.26 min for the initial values of design variables.
5.6.2.6.5. Design optimization
The sequential quadratic programming (SQP) algorithm is used for conducting design optimization. The convergence criterion is 1% of the objective function. The algorithm converges in four iterations. There is a 2.4% decrease in the cost. The weight of the torque tube reduces by 6.1%. Machining time decreases by 10.6%. The optimization history is shown in Figure 5.54. The values of the design variables for initial and final design are summarized in Table 5.12.
The design process presented successfully incorporates machining cost into a structural shape optimization problem. In addition to ensuring the manufacturability of the optimized components, the design process delivers components with a minimum cost and the required performance. The trade-off between structural performance and machining cost is critical, as revealed in this torque tube example.
FIGURE 5.53 Virtual machining in Pro/MFG.
FIGURE 5.54 Optimization results for the torque tube: objective (cost $) function history.

Table 5.12

Optimization Results for the Torque Tube

Design Variable | Initial Design (in.) | Final Design (in.) | % Change
--- | --- | --- | ---
b1 (length of holes 1, 2, 5, 6, and 7) | 1.60 | 1.23 | 23.1
b2 (depth of holes 1, 2, 5, 6, and 7) | 1.65 | 1.56 | 5.45
b3 (length of hole 3) | 1.80 | 1.43 | 20.5
b4 (depth of hole 3) | 1.65 | 1.72 | 4.24
b5 (length of hole 4) | 1.00 | 0.96 | 4.00
b6 (depth of hole 4) | 1.65 | 1.55 | 6.06
