The following algorithms are currently being tested, with the code for each stored in the main algorithm file. Each algorithm is assigned a priority score between 1 (lowest) and 10 (highest) that determines its odds of being tested in a given trial, with the odds proportional to the score; e.g., an algorithm with Priority 3 is three times less likely to be tested than an algorithm with Priority 9.
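This priority-weighted selection can be sketched as follows (a minimal illustration, not the benchmark's actual code; the priorities shown are hypothetical examples):

```python
import random

def pick_algorithm(priorities):
    """Pick one algorithm with probability proportional to its priority score."""
    names = list(priorities)
    weights = [priorities[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Over many trials, an algorithm with Priority 10 should be drawn
# roughly ten times more often than one with Priority 1.
priorities = {"SCFOv7": 1, "SCFOv9f": 10, "SimplexB": 10}
counts = {name: 0 for name in priorities}
for _ in range(10_000):
    counts[pick_algorithm(priorities)] += 1
```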

No Algorithm (nothing; algonum = 0)

This "algorithm" does not adapt the decision variables and simply keeps them at $${\bf u}_0$$ for all experiments. It is intended to provide a reference for the other algorithms, so that one may have a general idea of how much is gained (or lost) by attempting optimization.

SCFO Solver, Standard

The SCFO ("sufficient conditions for feasibility and optimality") solver is based on as-yet unpublished theoretical work [1], with the initial version of the solver released in May of 2013. The algorithms tested here correspond to all version numbers starting with 0.91.1, and use the SCFO's standard implementation (as opposed to the fast implementation; see below). The SCFO file, as well as all of the other files it calls, should be downloaded separately. A user's guide explaining how to use the solver, as well as the theory behind it, is also available [2]. While the solver is designed to work in a model-free setting, it may employ the user-provided models phimod.m and gmod.m, when these are available, to assist in certain subroutines.

[1] G. A. Bunin, G. François, and D. Bonvin (2014). Feasible-side global convergence in experimental optimization. Unpublished, arXiv [math.OC] 1406.4063.

[2] G. A. Bunin (2017). The SCFO Experimental Optimization Solver: Users' Guide (version 0.91.3).

Version 0.91.1 (SCFOv7; algonum = 1)

Coded by: G. A. Bunin

Priority: 1


Version 0.91.2 (SCFOv8; algonum = 1.01)

Coded by: G. A. Bunin

Priority: 1


Version 0.91.3 (SCFOv9; algonum = 1.02)

Coded by: G. A. Bunin

Priority: 5


SCFO Solver, Fast

This is the fast version of the SCFO solver (see above). It is essentially a modification of the standard version in which certain numerical routines are approximated so as to drastically reduce the solver's computational time. In terms of implementation, only the solver settings need to be modified.

Version 0.91.1 (SCFOv7f; algonum = 2)

Coded by: G. A. Bunin

Priority: 3


Version 0.91.2 (SCFOv8f; algonum = 2.01)

Coded by: G. A. Bunin

Priority: 4


Version 0.91.3 (SCFOv9f; algonum = 2.02)

Coded by: G. A. Bunin

Priority: 10


Response Surface Optimization Following a Central Composite Design (RSOCCD; algonum = 3)

Coded by: G. A. Bunin

Priority: 10

This algorithm is not iterative but is rather an algorithmic design procedure: a certain number of prescribed experiments are conducted regardless of the results they yield (i.e., in an open-loop manner), after which the resulting data are used to build a quadratic-model approximation of the experimental optimization problem. This approximation is then solved numerically to yield an approximation of the optimal point, which is retained as the optimum for all following experiments. A central composite design is used to generate the initial set of experiments, as this is a fairly popular and accepted procedure [1].


The files phidoe.m and gdoe.m complement this algorithm and act as the functions needed for the optimization of the constructed data-driven model. Note that this algorithm does not make use of a user-provided model even if the latter is available.
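As a rough Python sketch of the procedure (using an invented two-variable problem in place of the real experiments; phidoe.m, gdoe.m, and the actual solver files are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

np.random.seed(0)

# Illustrative stand-in for the experimental cost (unknown to the algorithm).
def run_experiment(u):
    return (u[0] - 0.3) ** 2 + 2.0 * (u[1] + 0.2) ** 2 + 0.05 * np.random.randn()

# Central composite design in 2 scaled variables:
alpha = np.sqrt(2.0)
design = np.array(
    [[-1, -1], [-1, 1], [1, -1], [1, 1],                 # factorial points
     [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],   # axial points
     [0, 0]]                                             # center point
)

# Run the prescribed experiments in open loop.
y = np.array([run_experiment(u) for u in design])

# Fit a full quadratic model y ~ b0 + b1*u1 + b2*u2 + b3*u1^2 + b4*u2^2 + b5*u1*u2.
U = np.column_stack([
    np.ones(len(design)), design[:, 0], design[:, 1],
    design[:, 0] ** 2, design[:, 1] ** 2, design[:, 0] * design[:, 1],
])
b, *_ = np.linalg.lstsq(U, y, rcond=None)

def quad_model(u):
    return b[0] + b[1]*u[0] + b[2]*u[1] + b[3]*u[0]**2 + b[4]*u[1]**2 + b[5]*u[0]*u[1]

# Optimize the fitted model; the result is retained for all following experiments.
res = minimize(quad_model, x0=np.zeros(2), bounds=[(-alpha, alpha)] * 2)
u_opt = res.x
```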

[1] R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook (2009). Response Surface Methodology. John Wiley & Sons.

Brute Simplex Algorithm (SimplexB; algonum = 4)

Coded by: G. A. Bunin

Priority: 10

The simplex algorithm is a classic tool for nonlinear optimization, with the version coded here essentially following the steps outlined in Section 2.4.1 of [1].


The file simpsub.m is a required subroutine and should be downloaded. Note that the algorithm uses scaled decision variables and generates the experiments $${\bf u}_1,...,{\bf u}_{n_u}$$ in a pseudo-random manner that ensures a relatively balanced geometry for the starting simplex. More importantly, this version of the algorithm is applicable only to problems with bound constraints (Class 1 problems) and is thus tested only for this problem class. The adjective "brute" is employed because this version of the algorithm does not attempt to account for noise in the function values in any manner, and simply proceeds by using the noisy measurements in the iterative simplex constructions. The algorithm does not make use of user-provided models.

[1] J. C. Spall (2005). Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control. John Wiley & Sons.
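For illustration, SciPy's Nelder-Mead implementation carries out the same simplex steps (reflection, expansion, contraction, shrink); the cost function and starting simplex below are hypothetical stand-ins, not the simpsub.m implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative noisy experimental cost in scaled variables (minimum at (0.5, 0.5)).
def noisy_cost(u):
    return (u[0] - 0.5) ** 2 + (u[1] - 0.5) ** 2 + 0.01 * rng.standard_normal()

# A starting simplex with relatively balanced geometry around u0.
u0 = np.array([0.1, 0.9])
init_simplex = np.vstack([u0, u0 + [0.2, 0.0], u0 + [0.0, 0.2]])

# "Brute" usage: the noisy measurements are fed directly into the simplex
# iterations, with no attempt to average out or otherwise handle the noise.
res = minimize(noisy_cost, x0=u0, method="Nelder-Mead",
               options={"initial_simplex": init_simplex, "maxfev": 100})
```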

Constraint Adaptation

Coded by: G. A. Bunin

Priority: 10

The constraint-adaptation [1], or bias-update [2], algorithm proceeds by adding a zero-order correction term to the models of the experimental constraints and then carrying out a model optimization to determine the next experiment. In the algorithm as coded here, a filter gain of 0.7 is used to dampen the corrections slightly. This algorithm is applicable to Class 3 problems only and requires a model.


The auxiliary file gCA.m is required as it defines the constraints in the model optimization. The version of the algorithm coded here uses the pattern search algorithm of MATLAB to minimize the model.

[1] B. Chachuat, A. Marchetti, and D. Bonvin (2008). Process optimization via constraints adaptation. J. Process Control, 18, 244-257.

[2] J. F. Forbes and T. E. Marlin (1994). Model accuracy for economic optimizing controllers: The bias update case. Ind. Eng. Chem. Res., 33(8), 1919-1929.
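A minimal Python sketch of the bias-update loop with the 0.7 filter gain follows (using an invented one-constraint problem; the actual implementation optimizes the corrected model with MATLAB's pattern search and the constraints defined in gCA.m):

```python
import numpy as np
from scipy.optimize import minimize

K = 0.7  # filter gain used to dampen the corrections

# Illustrative model of the cost (assumed accurate here for simplicity).
def phi_model(u):
    return (u[0] - 1.0) ** 2 + (u[1] - 1.0) ** 2

# Model of the experimental constraint, g <= 0.
def g_model(u):
    return u[0] + u[1] - 1.5

# True experimental constraint (unknown to the algorithm; differs from the model).
def g_plant(u):
    return u[0] + u[1] - 1.2

u = np.array([0.2, 0.2])
bias = 0.0
for _ in range(15):
    # Zero-order correction: filter the measured plant-model mismatch into the bias.
    mismatch = g_plant(u) - g_model(u)
    bias = (1 - K) * bias + K * mismatch
    # Model optimization with the bias-corrected constraint gives the next experiment.
    res = minimize(phi_model, u, method="SLSQP",
                   constraints={"type": "ineq",
                                "fun": lambda v, b=bias: -(g_model(v) + b)})
    u = res.x
```

Upon convergence the bias equals the constant plant-model mismatch, so the corrected model constraint coincides with the plant constraint and the final iterate satisfies it.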