The experimental optimization problems tested here have the standard form

\[\begin{array}{rll} \mathop {\rm minimize} \limits_{\bf u} & \phi_p ({\bf u}) & \\ {\rm subject\;to} & g_{p,j} ({\bf u}) \leq 0, & j = 1,...,n_{g_p} \\ & g_{j} ({\bf u}) \leq 0, & j = 1,...,n_{g} \\ & u_i^L \leq u_i \leq u_i^U, & i = 1,...,n_u, \end{array} \]where \( {\bf u} = (u_1,u_2,...,u_{n_u}) \) denotes the \( n_u \) decision variables, subject to the lower and upper limits \( u_i^L \) and \( u_i^U \), and \( \phi, g : \mathbb{R}^{n_u} \rightarrow \mathbb{R} \) denote the cost and constraint functions, respectively. The subscript \( p \) marks those functions that are experimental in nature and whose values may be evaluated for a given \( {\bf u} \) only by carrying out a (presumably expensive) experiment. By contrast, the functions without this subscript, namely the numerical constraints \( g_j \), may be evaluated for any \( {\bf u} \) without running any experiments. For certain problems, the cost function may also be numerical in nature, in which case \( \phi_p ({\bf u}) \) is simply replaced by \( \phi ({\bf u}) \) in the formulation above.
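The standard form above can be encoded as a small data structure that separates the expensive experimental functions from the cheap numerical ones. This is only an illustrative sketch; the class and attribute names (`ExperimentalProblem`, `phi_p`, `g_p`, `g`, `bounds`) are assumptions made for this example and do not come from any particular library.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

# Hypothetical container for a problem in the standard form:
# minimize phi_p(u) subject to g_{p,j}(u) <= 0, g_j(u) <= 0, and box bounds.
@dataclass
class ExperimentalProblem:
    phi_p: Callable[[Sequence[float]], float]       # experimental cost (expensive to evaluate)
    g_p: List[Callable[[Sequence[float]], float]]   # experimental constraints g_{p,j} <= 0
    g: List[Callable[[Sequence[float]], float]]     # numerical constraints g_j <= 0 (cheap)
    bounds: List[Tuple[float, float]]               # (u_i^L, u_i^U) for each decision variable

    def is_numerically_feasible(self, u: Sequence[float]) -> bool:
        """Check only the cheap conditions: box bounds and numerical constraints.

        The experimental constraints g_{p,j} cannot be checked without
        actually running an experiment, so they are deliberately excluded.
        """
        in_bounds = all(lo <= ui <= hi for ui, (lo, hi) in zip(u, self.bounds))
        return in_bounds and all(gj(u) <= 0 for gj in self.g)

# Toy instance: minimize (u1 - 1)^2 + (u2 - 1)^2 with one numerical
# constraint u1 + u2 <= 1.5 and both variables in [0, 2].
problem = ExperimentalProblem(
    phi_p=lambda u: (u[0] - 1) ** 2 + (u[1] - 1) ** 2,
    g_p=[],
    g=[lambda u: u[0] + u[1] - 1.5],
    bounds=[(0.0, 2.0), (0.0, 2.0)],
)
print(problem.is_numerically_feasible([0.5, 0.5]))  # True
print(problem.is_numerically_feasible([1.0, 1.0]))  # False: violates u1 + u2 <= 1.5
```

Keeping the numerical constraints separate lets an algorithm reject infeasible candidates before spending an experiment on them, which is the practical reason the formulation distinguishes the two constraint types.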

To solve an experimental optimization problem, one starts with an initial decision-variable set \( {\bf u}_0 \) and generates a chain of additional experiments \( {\bf u}_1, {\bf u}_2, ... \) by using either an iterative algorithm or an algorithmic design procedure, with the goal that the final experiments lie close to the problem's solution.
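The chain-generation scheme just described can be sketched as a generic loop: each iteration proposes a new decision-variable set and evaluates it with one (expensive) experiment. The proposal rule below is a naive random local search used purely as a placeholder algorithm, and all names (`run_experimental_chain`, `step`, etc.) are assumptions for this sketch, not part of the platform.

```python
import random

def run_experimental_chain(phi_p, u0, bounds, n_experiments=20, step=0.1, seed=0):
    """Generate the chain u_0, u_1, ..., u_{n_experiments} described above.

    phi_p stands in for the expensive experimental cost function; here it is
    called once per iteration, mirroring one real experiment per candidate.
    """
    rng = random.Random(seed)
    chain = [list(u0)]
    best_u, best_phi = list(u0), phi_p(u0)
    for _ in range(n_experiments):
        # Propose a perturbation of the current best point, clipped to the box bounds.
        candidate = [
            min(max(ui + rng.uniform(-step, step), lo), hi)
            for ui, (lo, hi) in zip(best_u, bounds)
        ]
        phi = phi_p(candidate)  # one (expensive) experiment
        chain.append(candidate)
        if phi < best_phi:
            best_u, best_phi = candidate, phi
    return chain, best_u

# Toy run on a one-dimensional quadratic with optimum at u = 1.
chain, best = run_experimental_chain(lambda u: (u[0] - 1) ** 2, [0.0], [(0.0, 2.0)])
print(len(chain))  # 21: the initial point u_0 plus 20 additional experiments
```

Any real algorithm would replace the proposal rule with something far more sample-efficient, since each call to \( \phi_p \) corresponds to a costly physical experiment; the loop structure itself is what the text describes.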

The goal of this project is to create a platform where different experimental optimization algorithms may be tested en masse on a large collection of experimental optimization problems and classes of such problems. While it is clear that no algorithm can be the best for all problems, it is very possible that certain algorithms perform best, on average, for certain classes of problems, and it is hoped that this platform will succeed in discovering these links. Additionally, since judging an algorithm's performance is itself subjective and depends on what each user values, a number of performance metrics are used, such as convergence speed, constraint violations, and amount of suboptimality.
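The metrics named above can be computed by post-processing a chain of experiments once the simulated case study's true optimum is known. The sketch below shows one plausible way to do so; the function name, the dictionary keys, and the specific definitions (final-point suboptimality, accumulated positive constraint violation, first index within a tolerance of the optimum as a proxy for convergence speed) are all assumptions for this example, not the platform's actual definitions.

```python
def performance_metrics(chain, phi_p, constraints, phi_opt, tol=1e-2):
    """Summarize a chain of experiments with three illustrative metrics.

    phi_opt is the known optimal cost of the simulated case study; tol is
    the tolerance used to decide when the chain has "converged".
    """
    phis = [phi_p(u) for u in chain]
    # Suboptimality of the final experiment relative to the known optimum.
    suboptimality = phis[-1] - phi_opt
    # Total constraint violation accumulated over the whole chain
    # (only positive parts count, since g_j(u) <= 0 means feasible).
    violation = sum(max(gj(u), 0.0) for u in chain for gj in constraints)
    # Convergence speed: index of the first experiment within tol of the optimum.
    speed = next((k for k, p in enumerate(phis) if p - phi_opt <= tol), None)
    return {"suboptimality": suboptimality, "violation": violation, "speed": speed}

# Toy chain approaching the optimum u = 1 of phi(u) = (u - 1)^2.
metrics = performance_metrics(
    chain=[[0.0], [0.5], [0.75], [1.0]],
    phi_p=lambda u: (u[0] - 1) ** 2,
    constraints=[lambda u: u[0] - 1.2],  # numerical constraint u <= 1.2
    phi_opt=0.0,
)
print(metrics)  # {'suboptimality': 0.0, 'violation': 0.0, 'speed': 3}
```

Reporting several such metrics side by side, rather than a single score, is what allows different users to rank the same algorithms differently, as the text anticipates.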

Because experimental optimization problems typically arise in engineering and real-life scenarios, the problems considered here are derived from case studies where the real experiments may be simulated by a reasonably accurate model of reality. In other words, the focus is on well-thought-out case studies rather than on mass-generated mathematical problems, although the latter also provide a valid manner of testing algorithms in certain contexts.