10 HOBBIES OPTIMIZER AND ITS APPLICATIONS

10.0 SUMMARY

After simulating the project and obtaining the initial results, the Higher Order Basis Based Integral Equation Solver (HOBBIES) Optimizer can be used to automatically adjust the designated model parameters, such as model-element coordinates, object lengths, and similar quantities, to improve the desired characteristics, such as gain or sidelobe performance. Setting up the HOBBIES Optimizer involves choosing the optimization methods, the iteration numbers, the symbols, and the criteria. There are three optimization algorithms for the user to choose from: Powell's method, the Simplex (Nelder–Mead) method, and the Particle swarm optimization method. The iteration number limits the maximum number of simulation calls for each specific algorithm. Symbols are defined by the user to change the model parameters, and the user needs to define the upper and lower boundaries for each symbol to be optimized. Criteria define the optimization goals. Each criterion can be assigned a specific weight to differentiate its priority within the whole list of criteria to be optimized.

The optimization process is controlled by a fitness function, which provides a fitness value based on the difference between the calculated result and the desired performance. The Optimizer attempts to find the minimum value of this function. Optimization is an iterative process in which the Optimizer calculates the fitness function and modifies the symbol values, and this cycle repeats until the objectives are satisfied or the specified number of iterations is reached. The Optimizer uses algorithms that move the performance closer to the goals (i.e., decrease the error) after each iteration.

10.1 FLOWCHART OF THE HOBBIES OPTIMIZER

10.1.1 Flowchart for Setting Up the Optimizer

To help the user implement the required steps to set up the HOBBIES Optimizer, Figure 10.1 illustrates the flowchart of a typical HOBBIES optimization setup.

images

Figure 10.1. Flowchart of the HOBBIES Optimizer.

Starting with a HOBBIES project containing symbols, the user must specify the optimization method, the symbols to be optimized (as well as their upper and lower limits), and the objective criteria to be satisfied before executing the Optimizer. Each step of the procedure shown in Figure 10.1 will be illustrated later in more detail.

10.1.2 Fitness Function

The fitness function of the optimization (i.e., the difference between the simulated result and the given criteria) is calculated using the formula:

FF(x) = Σi=1…n wi αi fi(x),   fi(x) = Σk=1…num |Gk(x) − G0|

where FF is the total fitness value and fi is the fitness value for the ith criterion. The total fitness value is the sum of the fitness values for each criterion, each of which is normalized and weighted. x is the vector of variables to be optimized. There are n criteria in total, and each criterion is associated with a normalization factor αi and a weight wi. The weight differentiates the priority of the criteria, and the normalization factor balances the values of criteria with different scales (e.g., the gain and/or the impedance). For the ith criterion, the optimizer stores the results that fall within the range specified by that criterion and computes the error between each stored value and the objective value. For example, if the criterion is to optimize the radiation pattern within a certain angular range, the intermediate results within the angles of interest will be stored by the optimizer. Among the stored results, those that already satisfy the given criterion are excluded from the error computation. In this way, the optimization effort is focused on the results that do not satisfy the criterion. Gk(x) stands for the kth stored result that does not satisfy the criterion, num is the number of results being stored and included in the cost function, and G0 is the target value of the ith criterion.

When the fitness function is minimized, the computed pattern becomes closer to the objective pattern. If the fitness function reaches zero, the goal is achieved. The goal of the optimizer is therefore to use optimization algorithms to decrease the total fitness function value.
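
To make the bookkeeping above concrete, the following is a minimal Python sketch of a weighted, normalized fitness evaluation. It is illustrative only: the absolute-error form of fi(x) and all names (Criterion, evaluate_fitness, simulate) are assumptions, not the HOBBIES-internal interface.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Criterion:
    weight: float                                       # w_i: priority of the criterion
    norm: float                                         # alpha_i: normalization factor
    target: float                                       # G_0: objective value
    satisfied: Callable[[float], bool]                  # relation test, e.g. g >= target
    collect: Callable[[Sequence[float]], List[float]]   # results within the criterion's range

def evaluate_fitness(x: Sequence[float],
                     criteria: List[Criterion],
                     simulate: Callable[[Sequence[float]], Sequence[float]]) -> float:
    """Total fitness FF(x): weighted, normalized sum of per-criterion errors."""
    results = simulate(x)                               # one kernel call per iteration
    total = 0.0
    for c in criteria:
        # Keep only the stored results that do NOT satisfy the criterion,
        # so the optimization effort is focused on them.
        violations = [g for g in c.collect(results) if not c.satisfied(g)]
        f_i = sum(abs(g - c.target) for g in violations)  # error against G_0
        total += c.weight * c.norm * f_i
    return total
```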

10.1.3 Flowchart of HOBBIES Optimization

The HOBBIES Optimizer follows the procedure shown in Figure 10.2. Before optimization, the user needs to mesh the model once to establish the meshing style, so that during the optimization the Optimizer can follow the same style to mesh the model. If the model is built for the first time, it is recommended that the user simulate the model once, so that the initial result can serve as a reference when setting up the criteria. During optimization, the Optimizer generates new symbol values according to the optimization algorithm that the user selects, and the model is updated with those new symbol values. The HOBBIES kernel is then called to simulate the updated model and generate new results. The new results are sent back to the HOBBIES Optimizer and compared with the previous values to predict the next symbol values. This procedure is repeated until the user is satisfied with the results or the iteration number reaches the preset limit.

images

Figure 10.2. Flowchart of HOBBIES optimization.

10.2 OPTIMIZATION ALGORITHMS USED IN THE OPTIMIZER

Three optimization algorithms have been implemented in the Optimizer: the Simplex algorithm, Powell's method, and the Particle swarm optimization method. The Simplex algorithm and Powell's method are local optimization algorithms, while the Particle swarm method is a global search method. An introduction to these three algorithms is given next; more details can be obtained from the references cited in those sections.

10.2.1 Powell's Method

This algorithm is a state-of-the-art, derivative-free optimization method based on BOBYQA (Bound Optimization by Quadratic Approximation), proposed by M. J. D. Powell in 2009. It is a derivative-free algorithm based on the trust-region paradigm [1] and is an extension of the NEWUOA (NEW Unconstrained Optimization Algorithm) algorithm [2], which deals with unconstrained optimization problems. In design procedures, derivative-free optimization methods are often employed because an analytical expression for the derivative of the objective function does not usually exist. In addition, the computation of a discrete estimate of the derivative may be a time-consuming process. BOBYQA is employed for this reason.

Modern optimization algorithms often use a local model of the objective function F. Powell's method creates a quadratic model Q to fit F, and the minimizer of Q is taken as an estimate of the minimizer of F. The model Q is updated with the value of F at the previous optimum of Q, and iterations are performed until a stopping criterion is reached. The quadratic model Q is created using m sample points of F, where m is the number of interpolation conditions imposed on the quadratic approximation; a standard choice is m = (2n + 1) samples for a problem of dimension n. A point x0 is the starting point provided by the user. Interpolation points are selected inside a neighborhood of x0. This neighborhood is called the trust region, and the optimal point is computed inside it. The trust-region radius decreases during the optimization process when the optimum of Q stops decreasing the value of F. The iterations stop either when the iteration number reaches a user-defined value or when Q is considered close enough to F.

Powell's algorithm can be summarized in the following steps:

  • Create an initial quadratic model Q of the objective function F.
  • For each iteration:
    • Compute the minimum of Q inside a trust-region.
    • Improve the model using the latest optimum.
    • Stop if the iteration number reaches a user-defined value.
    • Stop if the distance between Q and F is small enough (perfect match of the model Q and the objective function F).
    • Decrease the trust-region radius if the values computed for F stop decreasing.
  • Evaluate and return the final solution point.

These steps are executed until the objective criteria are reached or the number of iterations is satisfied.
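
The steps above correspond to off-the-shelf BOBYQA implementations. As a hedged illustration (using the third-party Py-BOBYQA package, not the HOBBIES-internal solver), the same knobs discussed later in Section 10.3.2 appear here: npt maps to the NPT option and rhoend to the final trust-region radius. The toy objective is purely for demonstration.

```python
# pip install Py-BOBYQA  -- illustrative only, not the HOBBIES solver
import numpy as np
import pybobyqa

def objective(x):
    # Toy stand-in for the fitness function FF(x)
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

x0 = np.array([0.0, 0.0])            # starting point provided by the user
lower = np.array([-2.0, -2.0])       # symbol lower bounds
upper = np.array([2.0, 2.0])         # symbol upper bounds
n = len(x0)

soln = pybobyqa.solve(objective, x0,
                      bounds=(lower, upper),
                      npt=2 * n + 1,   # number of interpolation conditions
                      rhoend=1e-3,     # final trust-region radius
                      maxfun=100)      # cap on objective evaluations
print(soln.x, soln.f)
```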

10.2.2 Simplex (Nelder–Mead) Method

The Simplex (Nelder–Mead) [3] or downhill simplex method is a commonly used nonlinear optimization technique for minimizing an objective function in an n-dimensional space. It is based on evaluating the objective function at trial points. The algorithm first generates (n + 1) trial points in the n-dimensional space that serve as the initial search pattern. Next, it generates a new trial point and tries to replace the current worst point with the new point if it is better. The algorithm compares the function values at the trial points and updates them iteratively so that the function values at those points get smaller and smaller, until a desired result is obtained or the responses cannot be improved further. The basic procedure for the algorithm is as follows, and a flowchart of the algorithm is shown in Figure 10.4.

  1. Evaluate the objective function value at the n + 1 trial points, and sort the values: f(x1) ≤ f(x2) ≤ ··· ≤ f(xn+1).
  2. Calculate the centroid xo of all the points except for xn+1.
  3. Reflection: compute the reflected point xr = xo + α(xo − xn+1).

    If the reflected point is between the second worst and the best {i.e., f(x1) ≤ f(xr) ≤ f(xn)}, then replace the worst point xn+1 with the reflected point xr, and return to step 1 and repeat the procedure.

  4. Expansion: If the reflected point is better than the point x1, it becomes the new best point; then compute the expanded point xe = xo + γ(xo − xn+1). If the expanded point is better than the reflected point, f(xe) ≤ f(xr), then replace the worst point xn+1 with the expanded point xe, and go to step 1. Otherwise, replace the worst point xn+1 with the reflected point xr, and go to step 1.
  5. Contraction: If the reflected point is not better than the second worst, f(xr) ≥ f(xn), then compute the contracted point xc = xo + ρ(xn+1 − xo). If the contracted point is better than the worst point, f(xc) < f(xn+1), then replace the worst point xn+1 with the contracted point xc, and go to step 1.
  6. Reduction: If the contracted point is worse than the worst point, f(xc) > f(xn+1), then the simplex reduces in size. That is, replace each point with xi = x1 + σ(xi − x1) for all i ∈ {1,…, n + 1} and return to step 1. Here α, γ, ρ, and σ are the reflection, expansion, contraction, and shrink coefficients, respectively. Their default values are α = 1, γ = 2, ρ = 0.5, and σ = 0.5.

A simple two-dimensional example is shown in Figure 10.3, where B, NB, W and CEN stand for the best point, next best point, worst point, and centroid point, respectively. R, C, and E indicate the reflection point, contraction point, and expansion point, respectively. NB' and W' are reduction points of NB and W.

images

Figure 10.3. A two-dimensional case for the Simplex (Nelder–Mead) method with different search trials: (1) Initial 2-D simplex, (2) Reflection trial, (3) Contraction trial, (4) Expansion trial, (5) Reduction trial.

The algorithm begins with the reflection step, attempting to find a point R with a lower functional value on the opposite side of the worst point W. If the reflection step successfully finds a new minimum, the expansion step is used to search further along the direction from the centroid point CEN to the reflection point R to find possibly a lower functional value. However, if the reflection step fails and finds a bad functional value, the contraction step may be able to find a point C with a good functional value to replace the worst point W.

The initial search size is important and depends on the nature of the problem. If the initial search size is too small, it results in a limited local search; when there are local minima, the algorithm is highly likely to be trapped in one of them instead of finding the global minimum. The original algorithm was intended for unbounded problems. However, to solve practical problems that usually have boundary limitations, the algorithm was modified to include boundary constraints. If a trial point is outside the boundaries, a boundary reflection of that point is taken. If the reflected point is still outside the boundaries, another reflection is applied and the new reflected point is taken instead. This process is repeated until the reflected point satisfies the boundary constraints. A flowchart of the method is presented in Figure 10.4, and a code sketch of the basic procedure is given below.
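
The following is a compact Python sketch of steps 1–6 with the default coefficients. It is a minimal illustration of the unconstrained procedure; the boundary-reflection handling described above and the exact convergence test used by HOBBIES are not reproduced here.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, max_iter=200, tol=1e-8,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f over R^n following steps 1-6 (default coefficients)."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    # Initial simplex: x0 plus n points offset along each coordinate axis
    simplex = [x0] + [x0 + step * np.eye(n)[i] for i in range(n)]
    for _ in range(max_iter):
        simplex.sort(key=f)                      # step 1: sort by function value
        if np.std([f(p) for p in simplex]) < tol:
            break                                # simple convergence test
        best, worst = simplex[0], simplex[-1]
        xo = np.mean(simplex[:-1], axis=0)       # step 2: centroid without worst
        xr = xo + alpha * (xo - worst)           # step 3: reflection
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
            continue
        if f(xr) < f(best):                      # step 4: expansion
            xe = xo + gamma * (xo - worst)
            simplex[-1] = xe if f(xe) < f(xr) else xr
            continue
        xc = xo + rho * (worst - xo)             # step 5: contraction
        if f(xc) < f(worst):
            simplex[-1] = xc
            continue
        # Step 6: reduction (shrink all points toward the best one)
        simplex = [best + sigma * (p - best) for p in simplex]
    return min(simplex, key=f)

# Usage: minimize a 2-D quadratic bowl with minimum at (1, -2)
print(nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0]))
```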

10.2.3 Particle Swarm Optimization

Particle swarm optimization (PSO) [4,5] is a method for performing numerical optimization without an explicit knowledge of the gradient of the problem to be optimized. PSO optimizes a problem by maintaining a population of candidate solutions called particles and moves these particles around in the search-space according to simple formulas. The movements of the particles are guided by the best found positions in the search-space, which are continually updated as better positions are found by the particles. PSO is a global optimization algorithm as it initially spreads the particles around the whole search-space.

Assume the swarm has S particles and each has a position xi in the search-space and a velocity vi. Both xi and vi are n-dimensional variables. Set pi to be the best known position of particle i, and set g to be the best known position of the entire swarm. The algorithm first spreads the particles around the whole search-space and sets the initial best position.

The basic procedure for the algorithm is as follows.

Initialization:

  • Initialize the particles' positions so that they are uniformly distributed over the search-space. Take each initial position xi as the particle's best known position pi, and take the best of the xi as the swarm's best known position g.

Iteration:

  • Update each particle's velocity as vi ← ω vi + cp rp × (pi − xi) + cg rg × (g − xi), where the × operator indicates element-by-element multiplication. rp and rg are random vectors uniformly distributed between 0 and 1.
  • Update each particle's position by adding the velocity: xixi + vi.
  • Update the particle's best known position pi and the swarm's best known position g.

images

Figure 10.4. Flowchart of the Simplex (Nelder–Mead) method.

Convergence:

  • The iteration is stopped if a termination criterion is met (e.g., number of iterations reached, or adequate objective function value reached).
  • Now g is the best found solution.

The parameters ω, cp, and cg control the behavior and efficiency of the PSO algorithm. They have been the topic of much research in the literature. The way of choosing the parameters applied in the HOBBIES Optimizer is taken from [4].

It should be noted that this algorithm does not initialize the positions of the particles based on the initial values of the symbols; instead, it places them in a uniform distribution over the search-space. Therefore, good initial symbol values have little effect on the final optimization results, and it is recommended that this algorithm be used for a coarse search to obtain a good starting point for the local optimization algorithms. A minimal sketch of the procedure follows.
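
The sketch below implements the initialization, iteration, and convergence steps above in Python. The inertia weight ω = 0.7 and the simple clipping of positions to the boundaries are assumptions for illustration; HOBBIES takes its parameter choices from [4] and limits the step with the Maximum Changing Rate option instead.

```python
import numpy as np

def pso(f, lower, upper, particles=5, omega=0.7, cp=2.0, cg=2.0,
        max_iter=100, seed=0):
    """Minimize f inside [lower, upper] using the PSO updates described above."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    x = rng.uniform(lower, upper, size=(particles, n))  # uniform initial spread
    v = np.zeros((particles, n))
    p = x.copy()                                        # best known positions p_i
    pf = np.array([f(xi) for xi in x])
    g = p[np.argmin(pf)].copy()                         # swarm's best position g
    for _ in range(max_iter):
        rp = rng.uniform(size=(particles, n))           # random vectors in [0, 1]
        rg = rng.uniform(size=(particles, n))
        # v <- omega*v + cp*rp*(p - x) + cg*rg*(g - x), element by element
        v = omega * v + cp * rp * (p - x) + cg * rg * (g - x)
        x = np.clip(x + v, lower, upper)                # keep particles in bounds
        fx = np.array([f(xi) for xi in x])
        better = fx < pf                                # update particle bests
        p[better], pf[better] = x[better], fx[better]
        g = p[np.argmin(pf)].copy()                     # update swarm best
    return g, float(f(g))

# Usage: 2-D sphere function, global minimum at the origin
best, val = pso(lambda z: float(np.sum(z ** 2)), [-5, -5], [5, 5])
print(best, val)
```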

10.3 SETTING UP THE HOBBIES OPTIMIZER

Figure 10.5 shows the Optimizer window, obtained by selecting HOBBIES → Optimizer (after at least one symbol is defined through HOBBIES → Symbols), which has three tabs. The Status tab gives information such as the optimization methods and the iteration number for each method. It also displays the progress of the optimization (i.e., the current iteration error) and the current iteration number in the lower part of the window. The cost history figure is displayed in the right portion of the window. The Symbols and Criteria tabs will be discussed later. The setting up of the Optimizer is discussed next.

10.3.1 Creating a HOBBIES Project with Symbols

For details about how to create a HOBBIES project with symbols, please refer to Section 4.1.7.

images

Figure 10.5. Optimizer window.

10.3.2 Selecting Optimization Methods

In the Status window, the user needs to select two basic parameters. One is the optimization method, chosen from the drop-down menu list as shown in Figure 10.6. The other is the value for Maximum Iteration. The user may either use the default settings under the Options button or click the button to change the parameters.

images

Figure 10.6. Three available optimization methods in the HOBBIES Optimizer.

Selecting optimization methods:

There are three optimization algorithms available in the HOBBIES Optimizer: Powell's method [1], the Simplex (Nelder–Mead) method [3], and the Particle swarm optimization method [4,5]. Details of the three algorithms were provided in Section 10.2.

There are two ways to utilize the optimization methods. The user can either choose a single algorithm for the optimization (any one of the three), or choose two optimization methods in succession to form a combined optimization. In the combined case, the first optimization method is used for a coarse optimization and the second method is used for fine-tuning. If there is no need for fine-tuning, the Maximum Iteration for method 2 should be set to “0”, and only the first method will be used. In both cases, the user can select the preferred optimization algorithm.

Specifying the value for “Maximum Iteration”:

The Maximum Iteration field specifies the maximum number of simulation calls for each optimization method. This value should be a positive integer. Symbols and expressions are not allowed.

“Options” button:

The Options button is used for setting parameters for the optimization algorithms. By clicking the Options button, the relevant option window will appear. Figure 10.7, Figure 10.8, and Figure 10.9 show the option windows for Powell's method, the Simplex (Nelder–Mead) method, and the Particle swarm optimization method, respectively.

images

Figure 10.7. Option window for Powell's method.

  • Powell's method:
    • NPT is the number of interpolation conditions; in other words, it is the number of initial iterations of the optimization. As mentioned in Section 10.2.1, Powell's method first creates a quadratic model to approximate the objective function. A standard quadratic interpolation of a function requires (2n + 1) samples for a problem of dimension n (n is the number of symbols for optimization). The option window gives two choices for NPT: (2n + 1) and 0.5 × (n + 2) × (n + 1). Either choice is acceptable, but it is recommended to choose (2n + 1) when n is large; otherwise the evaluation of the initial iterations will be expensive and the memory requirement for the algorithm will be large. For example, with n = 20 symbols, the two choices require 41 and 231 initial evaluations, respectively.
    • Final value of trust region radius is the required accuracy for the optimized variables.

    The default values for these two parameters are (2n + 1) and 10−3 (i.e., 1e−3), respectively. The user can change the parameters appropriately.

  • Simplex (Nelder–Mead) method:
    • Convergence Tolerance is the convergence accuracy for the function value. If, within (n + 1) iterations, the average difference between the mean iteration value and the current iteration value is less than the Convergence Tolerance, the iteration is considered to have converged.
    • Reflection Coefficient
    • Extension Coefficient
    • Contraction Coefficient

    The descriptions of the Reflection Coefficient, Extension Coefficient, and Contraction Coefficient can be found in Section 10.2.2; their default values are 1.0, 2.0, and 0.5, respectively.

    images

    Figure 10.8. Option window for the Simplex method.

    images

    Figure 10.9. Option window for the Particle swarm optimization method.

  • Particle swarm optimization method:
    • Particle Number is the number of particles applied over the whole search-space. Setting this parameter depends on the number of symbols for optimization and the maximum iteration number set for the algorithm. These quantities are all correlated, and the correlation depends on the optimization problem itself, so it must be considered on a case-by-case basis. Details and examples are shown later in Section 10.4.1.
    • Acceleration Coefficient C1 is cp (mentioned in Section 10.2.3).
    • Acceleration Coefficient C2 is cg (mentioned in Section 10.2.3).
    • Maximum Changing Rate limits the maximum distance of a particle's movement. With the default value of 0.2, the maximum distance that a particle can move in one iteration is 0.2 multiplied by the difference between the upper and lower boundaries of the symbol.

    The default values for Particle number, Acceleration Coefficient C1, and Acceleration Coefficient C2 are 5, 2.0, and 2.0, respectively.

Checking the optimization status:

The Status tab contains the optimization information that is outlined in Figure 10.10. Iterations of Method 1 and Iterations of Method 2 display the current number of simulations called by method 1 and method 2. By comparing these with the maximum number of iterations specified for each method, the user can get an idea of the optimization progress. The Cost field shows the current value of the fitness function. This value indicates the difference between the objective value and the value obtained from the most recent iteration. The goal of the optimization is to minimize the cost; when it reaches zero, the goal is achieved. The formula for the cost can be found in Section 10.1.2. Note that when the optimization is finished, the Cost field will show the fitness value of the last iteration.

images

Figure 10.10. Optimizer information shown on the Optimizer window.

Live figure of cost value history:

The right side of the Status tab contains a graph showing a live display of the cost value history, as shown in Figure 10.10. There are a check box, Show all iterations, and two buttons, Range and Export Results, at the top of the graph. If the box is checked, the graph will show the cost value at each iteration, whether or not the iteration results in an improvement. If the box is unchecked, the graph only shows the cost values of iterations that result in an improved optimization result, meaning the curve on the graph will only decrease in value as the optimization progresses.

The Range button opens the Range option window shown in Figure 10.11. The Maximum Iteration controls the number of iterations to be shown on the graph, and the Step Length gives the size of the grid on the graph. For example, if the user wants to see the iteration process from the start to the 1000th iteration with a grid size of 200 iterations, the user should enter 1000 and 200 for Maximum Iteration and Step Length, respectively.

The Export Results button opens a file path window for exporting the graph, as shown in Figure 10.12. The file extension is .grf, and the graph can be shown by importing the file in the post-processing mode, as shown in Figure 10.13.

images

Figure 10.11. Option window for the graph.

images

Figure 10.12. Path window for exporting the graph to a .grf file.

images

Figure 10.13. Importing the .grf file and showing it as a graph.

10.3.3 Specifying the Symbols to Be Optimized

Click the Symbols tab in the Optimizer window to see the symbol list. The Optimizer will load the symbols and their values from the project automatically. However, symbols that are expressed in terms of other symbols will not be shown in the Optimizer window, as they depend on other symbols. When there are a large number of symbols, it may take several seconds to load them into the Optimizer.

In front of each symbol there is a check box. The user needs to check the boxes of the symbols to be optimized; initially, all symbols are unchecked when the optimization is set up for the first time, as shown in Figure 10.14. Each symbol to be optimized must also be assigned a Lowest Value and a Highest Value as its boundaries during the optimization. The current value of the symbol must lie between the lowest and highest boundary values, and it is recommended that the starting value of each symbol be near the middle of the given interval. Symbols or expressions are not allowed as boundary values. Figure 10.15 shows the Optimizer window after editing the symbol list.

Note: An optimization might need hundreds or even thousands of iterations to find a good solution. Keeping the number of symbols to be optimized as small as possible is the best way to achieve a quick solution. A good initial value will also speed up the optimization process.

images

Figure 10.14. Symbol list in the Optimizer window.

images

Figure 10.15. Symbol list in the Optimizer window after editing.

10.3.4 Specifying the Criteria

10.3.4.1 Adding a Criterion

The Criteria tab at the top of the optimizer window contains the goals for the optimization to be accomplished. If the optimization is set up for the first time, there are no criteria in the list, as Figure 10.16 shows. Clicking the Add button at the top left corner invokes the Criterion input window, where users can specify the parameters for the criteria, as shown in Figure 10.17. Details on how to set up these parameters will be discussed next.

images

Figure 10.16. Blank criteria list in the Optimizer window.

images

Figure 10.17. Criterion window.

• Parameter

The Parameter defines the type of results to be optimized. The options available are summarized below for the network parameters as well as far-field and near-field components.

  • Network Parameters tab: The user can optimize the Y (Admittance), Z (Impedance), and S (Scattering) parameters. Note that the user must also select the port index. (Figure 10.18 shows a trimmed graph of the criteria window.)
  • Far-Field tab: The user can optimize any of the E-total, E-phi, E-theta, Copolar, and Crosspolar components of the electric far-field (Figure 10.19).
  • Near-Field tab: The user can optimize any of the E-total, Ex, Ey, and Ez components of the electric near-field, and H-total, Hx, Hy, and Hz components of the magnetic near-field (Figure 10.20).

    images

    Figure 10.18. Parameter list for Network Parameters tab.

    images

    Figure 10.19. Parameter list for Far-Field tab.

    images

    Figure 10.20. Parameter list for Near-Field tab.

• First Port (Output port) / Second Port (Input port)

These fields are used to specify the port indices for network parameters (shown in Figure 10.21). For example, if the user chooses Parameter to be Y (Admittance) and enters 1 and 2 for First Port and Second Port, respectively, the Y12 term is to be optimized. These two parameters only appear when the Network Parameters tab is chosen.

Note: When the project is working in the All generators mode, the First Port and Second Port parameters need to have the same value, while the One generator at a time mode does not have this limitation.

• Component

This term defines which component of Parameter is to be optimized as shown in Figure 10.22. Generally, five components are available:

  • Real
  • Imaginary
  • Magnitude
  • Phase
  • Gain

Note: For a Parameter like E-total (Far-Field), the corresponding Component only has the Gain option, as optimizing the other components does not make sense. The user needs to select the right Component for the specific Parameter; otherwise a warning message will appear.

• Relation

The Relation defines whether the optimized result should be larger, smaller, or equal to the desired value (shown in Figure 10.23).

images

Figure 10.21. First Port and Second Port in the criterion window.

images

Figure 10.22. Component list in the criterion window.

• Goal value and dB check box

If the goal value is in decibels, then check the dB box; otherwise the value will be considered to be in the standard (SI) unit of the optimization parameter (Figure 10.24).

images

Figure 10.23. Relation list in the criterion window.

images

Figure 10.24. Goal value and dB check box in the criterion window.

• Weight

This parameter specifies the priority of the criterion; the default value is 1 (Figure 10.25). When there are two or more criteria, different weights can be assigned to differentiate the priority or impact of each criterion. The larger the value of the weight, the higher the priority the criterion will have. Note that the weight cannot be zero. For the detailed use of the weight value, refer to the formula in Section 10.1.2.

• Excitation

If the model is working in the antenna mode, the excitation will display a generator; if the model is working in the scattering mode, the excitation will show a wave (Figure 10.26). The default value for the Excitation is the total number of generators/waves in the model. The up and down arrows are used to change the value. When the project is working in the All generators mode, the Excitation parameter will have the default value 1.

images

Figure 10.25. Weight option of the criterion.

images

Figure 10.26. Excitation changes between Generator and Wave depending on mode.

• Start Freq / Stop Freq

These two parameters stand for start frequency and stop frequency and define the frequency range for the optimization (Figure 10.27).

The stop frequency should always be equal to or larger than the start frequency. The Start Freq / Stop Freq fields automatically load the values of Start frequency / Stop frequency from the frequency setting. If the user tries to input frequencies that are outside of the frequency setting used to simulate the project, a warning window will appear.

Note: These two parameters need to be updated by the user if the project was optimized previously and the simulation frequency or optimization frequency was later changed to another value. The previous optimization records the parameter settings and loads them again when the project is opened the next time, so the Start Freq / Stop Freq will still hold the values from the old settings.

• Φ-angle and θ-angle

These parameters select the range for phi and theta angles for optimization (Figure 10.28). They are only available for far-field optimization.

Note: When the Near-Field tab is selected, the x, y, and z range will show instead of the phi and theta angles (Figure 10.29).

images

Figure 10.27. Start Freq / Stop Freq in the criterion window.

images

Figure 10.28. Phi and theta range of the far field.

images

Figure 10.29. x, y, and z ranges of the near-field.

After setting up all the parameters on the criterion window, click the OK button and the criterion will be added to the list (Figure 10.30); otherwise, click the Cancel button to cancel the operation.

10.3.4.2 Deleting a Criterion

After a criterion is added, it appears in the criteria list. To delete a criterion, right-click on the Edit button; the Delete option will appear, and clicking it deletes the criterion (Figure 10.30).

images

Figure 10.30. Criteria list.

10.3.4.3 Editing a Criterion

If the user needs to modify the parameters of a criterion, move the mouse onto the Edit button on the left side of that criterion and left-click; the criterion window will appear for editing.

10.3.4.4 Adding a List of Criteria

If there are a large number of criteria to be used in the optimization, the user can add them manually through a file, which can be much quicker. Open the current project folder (a HOBBIES project folder has the .gid extension), open the OPT folder inside it, and find the file that has the .opt extension. This file records the optimization parameters for the project. Near the end of the file, there are long lines of numbers that are the parameters of the various criteria (Figure 10.31). At the end of the file, there are descriptions of these parameters.

images

Figure 10.31. An example of .opt file that records the parameters.

An example is shown in Figure 10.31. The number of criteria is written first, and the lines of criteria follow after that number. Each criterion is expressed by one line of numbers separated by spaces. After all the lines of criteria, there is one line of separation before the names of the parameters. The numbers in each criterion line stand for the values of the parameters. Some of them are the same values shown in the criterion window, such as the angle range, weight, and so on, while others are codes for the different options of the parameters. These relations are listed in the tables below.

  • Parameter

    images

  • Modifier
    Real = 0
    Imaginary = 1
    Magnitude = 2
    Phase = 3
    Gain = 4
  • Relation
    Less than (<) = 0
    Equal (=) = 1
    Greater than (>) = 2
  • dB
    Box checked = 1
    Box unchecked = 0

For example, the criteria in HornA.opt (Figure 10.31) can be read as:

Criterion 1: Gain > 15 dB at frequency 2.5 GHz, φ = 0°, θ = 0°, with a weight of 2 (Figure 10.32 and Figure 10.33).

Criterion 2: Gain < −5 dB at frequency 2.5 GHz, φ = 180°, θ = 0°, with a weight of 1.

Note: To make all criterion lines share the same format, the x, y, and z ranges and the phi and theta angles coexist in each line. The user only needs to change either the near-field part or the far-field part. A sketch of the encoding idea is given below.
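
As a purely hypothetical sketch of how a criterion line could be assembled with the option codes above (the field order here is an assumption for illustration only; the actual layout must be taken from the descriptions at the end of the .opt file and from Figure 10.32):

```python
# Hypothetical: encode one criterion as a space-separated line of numbers.
# The real field order is defined by the .opt file itself, not by this sketch.
MODIFIER = {"Real": 0, "Imaginary": 1, "Magnitude": 2, "Phase": 3, "Gain": 4}
RELATION = {"<": 0, "=": 1, ">": 2}

def criterion_line(modifier, relation, in_db, goal, weight, other_fields):
    """other_fields: remaining numeric parameters (frequencies, angles, ...)."""
    nums = [MODIFIER[modifier], RELATION[relation], 1 if in_db else 0,
            goal, weight, *other_fields]
    return " ".join(str(v) for v in nums)

# Criterion 1 of HornA.opt in this assumed order: Gain > 15 dB, weight 2
print(criterion_line("Gain", ">", True, 15, 2, [2.5e9, 2.5e9, 0, 0, 0, 0]))
```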

images

Figure 10.32. An illustration of the first-line criterion parameters.

images

Figure 10.33. Parameter setting for the first criterion.

10.3.5 Running the Optimizer

At the bottom of the HOBBIES Optimizer window, there are six buttons (Figure 10.34):

  • Run – to start the optimization process or to stop it
  • Save – to save the intermediate result and optimizer parameters in a file
  • Revert – to load the intermediate result and parameters from a file
  • Exit – to close the optimizer window
  • Reset – to load the intermediate result and parameters of the previously saved status
  • Help – to load the help document about the Optimizer

Before running the optimization, a check is performed to verify that the given criteria fit the current project settings. If all the criteria fit the current project, the optimization process will start; otherwise a warning message will appear. After the optimization starts, the Run button changes to Stop (Figure 10.35). If the Stop button is clicked, a window will appear asking the user whether to terminate the optimization process (Figure 10.36). If Yes is chosen, the optimization will finish by terminating the current iteration and setting the best result of the previous iterations as the final optimization result. When the optimization is finished, HOBBIES automatically updates the model and simulates it with the optimized symbol values to finish the post-processing.

images

Figure 10.34. Buttons on the Optimizer window.

images

Figure 10.35. The “Run” button changes to the “Stop” button after the optimization starts.

images

Figure 10.36. Window asking the user whether to terminate the optimization.

The Save button allows a state of the optimization to be saved in a file that can be loaded at a later time. By clicking this button, a browse window will appear for the user to select the path for the file (Figure 10.37). By using the Revert button described in the subsequent paragraph, the user can revert the optimization variables to any intermediate point that was saved before. The file that stores the state of the optimization has the extension .ost. It is suggested that the file be saved in the OPT folder under the current project.

The Revert button allows a state of the optimization to be retrieved once it has been saved using Save. By clicking this button, the user can reset the optimization variables to any intermediate point that the user wishes, by selecting the name of the saved state from the menu that appears.

The Reset button resets all optimization variable values to the previously saved status, from either the last time the user clicked the Save button or when the user last opened the project, whichever is more recent.

Note: After the optimization finishes, it is necessary to save the project so that the optimized symbol values will be saved.

images

Figure 10.37. A browse window for saving the state as an .ost file.

10.4 OPTIMIZATION EXAMPLES

10.4.1 Test Functions

The previous sections have introduced the three optimization algorithms and their parameters. For Powell's method and the Simplex method, it is recommended, but not necessary, to use the default parameter values. The Final value of trust region radius of Powell's method and the Convergence Tolerance of the Simplex method control the convergence accuracy and are subject to the user's requirements. For the Particle swarm optimization algorithm, the particle number is a very important parameter for controlling the search depth and the speed of convergence. Some commonly used benchmark functions (test functions) are tested using the Particle swarm optimization algorithm to illustrate the selection of the particle number parameter.

Three test functions are used. Rastrigin's function and Ackley's function are multimodal functions, while the Sphere function is unimodal. Generally, multimodal functions are much more difficult to optimize, as illustrated in Table 10.2. Success is declared if the algorithm reaches a function value of 10−6. Table 10.1 gives the definitions of the test functions, while Table 10.2 shows the performance results. Each optimization experiment was run 10 times with random initial values between the boundaries [xmin, xmax]. The success column in Table 10.2 shows the number of successes out of 10 trials, and NFE stands for the number of functional evaluations.

Table 10.1. Optimization test functions.

images
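
For reference, the standard definitions of these benchmarks are sketched below in Python. Since Table 10.1 is not reproduced here, the exact parameterization used in the experiments is assumed to be the common textbook form; all three functions have their global minimum f(0) = 0, and each can be plugged directly into the pso() sketch from Section 10.2.3 to reproduce experiments of this kind.

```python
import numpy as np

def sphere(x):
    """Unimodal: f(x) = sum(x_i^2)."""
    x = np.asarray(x, float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: f(x) = 10 n + sum(x_i^2 - 10 cos(2 pi x_i))."""
    x = np.asarray(x, float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def ackley(x):
    """Multimodal, with many shallow local minima around the origin."""
    x = np.asarray(x, float)
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
                 + 20 + np.e)
```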

Table 10.2 indicates that for the unimodal function, the algorithm can always achieve a satisfactory result whether the particle number is large or small, while for a multimodal function, a larger particle number will achieve a better result than a smaller one. Meanwhile, selecting a larger particle number requires more functional evaluations to achieve convergence of the optimization procedure. For practical optimization problems, selecting a small particle number helps to achieve fast convergence, but it also becomes easier to get caught in a local minimum. Conversely, selecting a large particle number increases the chance of reaching the global minimum, but usually requires more functional evaluations to converge.

Table 10.2. Performance results for PSO algorithm on test functions.

images

10.4.2 Optimization of a Horn Antenna

Start HOBBIES and load the project from “info\examples\Horn.gid” under the HOBBIES installation folder. All parameters are already set up. The working frequency is 2.5 GHz. In the Symbols table, the symbols x, y, and L2 have values of 100, 75, and 75, respectively. These symbols represent the dimensions of the open end of a horn antenna (Figure 10.38). The direction of the maximum gain of the horn antenna is along φ = 0°, θ = 0°, and the goal of this optimization is to achieve the maximum gain in that direction. Figure 10.39 gives the criterion window for this example.

Before the first optimization, the model needs to be meshed once. This provides the Optimizer with a starting point for meshing automatically. First, click Mesh → Generate, and the mesh generation window will appear. Check the option Use geometry as meshing elements, as this horn antenna model is created from plates and wires and does not need to be further meshed (Figure 10.40).

Open the Optimizer (HOBBIES → Optimizer), select Simplex (Nelder–Mead) as the first optimization method, and change the Max Iteration to 30 (leave the Max Iteration at 0 for the second method). Next, click the Symbols tab, check the optimization variables x, y, and L2, and specify their boundaries. For x, the range should be 20 to 150; for y and L2, the boundary values should be 10 and 150 (Figure 10.41).

Click the Criteria tab, and click the Add button. The criterion window will pop up. Input the parameters as Figure 10.39 shows. The criterion indicates that the optimization goal is to achieve a far-field gain larger than 20 dB along the direction φ = 0°, θ = 0°. The weight is 1 by default. After setting up the criterion, click the Run button to start the optimization.

After 30 iterations, the optimization finishes, as the iteration number reaches the predefined limit. The optimized values for the variables are listed in the symbol table: x = 107.6, y = 81.8, and L2 = 126.7. By comparing the gain with and without the optimization in post-processing, it is seen that the gain along the direction φ = 0°, θ = 0° has been increased from 11.74 dB to 13.19 dB.

Note: If the user selects PSO as the first method and Simplex (Nelder–Mead) as the second method and lets each method run for 15 iterations, the optimization result will be further improved even though the total number of iterations is still 30. Because the PSO method is a global search algorithm, using it as the first optimization method for a coarse search can usually improve the final result when two optimization methods are used.

images

Figure 10.38. Horn antenna.

images

Figure 10.39. Criterion window.

images

Figure 10.40. Mesh generation window.

images

Figure 10.41. Symbols for optimization.

10.4.3 Optimization of Narrow-Wall Slotted Waveguide Arrays

The slotted waveguide antenna array is formed by combining 10 single slotted waveguides. Each single waveguide has 10 narrow-wall slots; the waveguide cross-section is 22.86 mm × 10.16 mm, with a wall thickness of 1.00 mm. The total length of the waveguide is 266.58 mm. Figure 10.42 (a) gives the geometry of the inclined slot. The inclination angle of the slots is denoted θ, the cutting depth is h (measured from the inner wall), and the width of the slots is w. A small dipole inside the waveguide is used as the excitation, as shown in Figure 10.42 (b). The ends of the waveguides are shorted, with the feeding dipole placed at 0.25λg from one end (λg is the guide wavelength). The entire structure is shown in Figure 10.43, with a distance of 20.00 mm between the centers of two adjacent waveguides. The operating frequency of interest is 9.375 GHz.

images

Figure 10.42. A narrow-wall inclined slot: (a) Slot geometry view, (b) the excitation inside the waveguide.

images

Figure 10.43. A 10-element slotted waveguide array.

From array theory, a 20 dB Taylor distribution is used to determine the feeding of the waveguide array to achieve the desired radiation pattern. There is a 0.9π phase difference between the feeds of the adjacent radiating waveguides.
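
As a hedged illustration of how such a feeding could be computed (assuming SciPy's Taylor window, with nbar = 4 chosen arbitrarily, matches the taper intended here), the complex excitations might be generated as follows.

```python
import numpy as np
from scipy.signal.windows import taylor  # requires SciPy >= 1.5

amps = taylor(10, nbar=4, sll=20)        # 20 dB Taylor amplitude taper
phases = 0.9 * np.pi * np.arange(10)     # 0.9*pi progressive phase shift
feeds = amps * np.exp(1j * phases)       # complex excitation per waveguide
print(np.round(feeds, 3))
```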

The symbol list contains 20 symbols in total. There are 10 slots on the wall of each waveguide; the symbols x10, x11,…, x19 are used to compute the inclination angle θ for each of the ten slots, and the symbols h10, h11,…, h19 represent the cutting depth h for each slot. Each waveguide in the array has the same dimensions except for alternating slot inclination angles, as shown in Figure 10.42. Hence, a total of 20 variables need to be optimized. The goal is to optimize the radiation pattern of the slotted waveguide array. Due to the phase differences between the feeds, the direction of the main lobe is 5° toward the y-axis measured from the +z direction, which corresponds to φ = 90°, θ = 85°.

Open the Optimizer (HOBBIES → Optimizer), select Powell's Method as the first optimization method, set the Maximum Iteration to 100 for method 1, and set the Max Iteration to 0 for method 2. Then click the Symbols tab, check all the optimization variables, and specify the boundaries for all of them. For x10∼x19, the range should be 1 to 3, and for h10∼h19, it should be between 2.64 and 3.96 (Figure 10.44).

images

Figure 10.44. List of symbols.

Click the Criteria tab and then the Add button to add the criteria. The goal is to maximize the main lobe and minimize the sidelobe levels (SLLs). Criteria for main lobe and SLL are set separately. Figure 10.45 shows the criteria setting for the main lobe at the xOz-plane and yOz-plane, while Figure 10.46 shows the criteria setting for sidelobes at these two planes.

images

Figure 10.45. Criteria for the main lobe.

images

Figure 10.46. Criteria for the sidelobes.

Before running the optimization, mesh the model manually if this is the first time the Optimizer is run on the project, checking the option to use the original geometry as the meshing elements. After setting up the criteria, click the Run button to start the optimization. After 100 iterations, the optimization finishes, as the iteration number reaches the predefined limit. In the symbol list, the user can find the optimized values for the variables. When checking the radiation pattern, the user will find that the pattern has improved in terms of the criteria but does not satisfy all of them. The user needs to modify or add criteria to tune the radiation pattern further, and can use the PSO method for a global search first to help achieve the goal. It is a trial-and-error process. One optimization result is compared with the original radiation pattern in Figure 10.47.

In Figure 10.47 (a), which shows the xOz-plane, it is seen that the main lobe remains roughly the same after optimization, while the largest sidelobes, at angles θ = 75° and θ = 95°, are suppressed by about 5 dB. The solid and dashed indicators mark the highest sidelobe for the initial and optimized results, respectively, showing an SLL improvement of about 5 dB.

Figure 10.47 (b) shows the yOz-plane; similarly, the solid and dashed indicators mark the highest sidelobe and the mainlobe values for the radiation pattern before and after optimization, respectively. It can be seen that both the mainlobe and the largest sidelobe decrease by roughly 1 dB. The pattern has thus been improved in the xOz-plane, whereas in the yOz-plane it remains relatively the same in terms of the mainlobe-to-peak-sidelobe performance.

images

Figure 10.47. Radiation pattern of the slotted waveguide array after optimization: (a). xOz-plane, (b). yOz-plane.

10.5 CONCLUSION

In this chapter, how to set up the HOBBIES Optimizer and run an optimization has been demonstrated in detail. The reader can assign symbols to geometry entities and specify weighted criteria to optimize the model and achieve the desired goals. Three optimization algorithms, Powell's method, the Simplex (Nelder–Mead) method, and the PSO method, are available for the user to choose from, covering both local and global search. With these flexible algorithms, users can optimize models with up to hundreds of symbols.

REFERENCES

[1] M. J. D. Powell, “The BOBYQA Algorithm for Bound Constrained Optimization Without Derivatives,” Technical Report DAMTP 2009/NA06, Department of Applied Mathematics and Theoretical Physics, Cambridge University, U.K., 2009.

[2] M. J. D. Powell, “The NEWUOA Software for Unconstrained Optimization,” Technical Report DAMTP 2004/NA08, Department of Applied Mathematics and Theoretical Physics, Cambridge University, U.K., 2004.

[3] J. A. Nelder and R. Mead, “A Simplex Method for Function Minimization,” Computer Journal, Vol. 7, pp. 308–313, 1965.

[4] Z. H. Zhan, J. Zhang, Y. Li, and H. S. Chung, “Adaptive Particle Swarm Optimization,” IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol. 39, pp. 1362–1381, 2009.

[5] J. Kennedy and R. C. Eberhart, “Particle Swarm Optimization,” in Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, Vol. 4, pp. 1942–1948, 1995.
