Letting $$k$$ denote an experiment index, each of the algorithms tested must iteratively generate the next set of decision variables, $${\bf u}_{k+1}$$, when given

• the past decision-variable sets $${\bf u}_0, {\bf u}_1, ..., {\bf u}_k$$,
• the corresponding measurements/estimates of the experimental cost/constraint function values, $$\hat \phi_p ({\bf u}_0), \hat \phi_p ({\bf u}_1), ..., \hat \phi_p({\bf u}_k)$$ and $$\hat g_{p,j} ({\bf u}_0), \hat g_{p,j} ({\bf u}_1), ..., \hat g_{p,j} ({\bf u}_k)$$, that are obtained for the experiments at $${\bf u}_0, {\bf u}_1, ..., {\bf u}_k$$,
• the statistical properties of the measurement/estimation noise,
• the definitions of the limits $$u_i^L, u_i^U$$ and the numerical constraint functions $$g_j$$,
• (optional) a model of the experimental functions $$\phi_p$$ and $$g_{p,j}$$.
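Conceptually, each algorithm thus implements an iteration map from the experimental history to the next experiment. A minimal MATLAB stub of this interface, matching the call made in the testing code below, might look as follows (the body is a placeholder used only to illustrate the inputs and outputs, not a real algorithm):

```matlab
function [unext,output] = algo(u,phiphat,gphat,sigmaphi,sigmag,uL,uU,algonum,input)
% Each row of u is a past decision-variable set; phiphat and gphat hold the
% corresponding noisy cost/constraint measurements; "input" carries whatever
% memory the algorithm stored at the previous call (returned as "output").
% A real algorithm would exploit this history; this stub merely perturbs the
% last point and clips it to the box defined by uL and uU.
unext = min(max(u(end,:) + 0.01*(uU - uL),uL),uU);
output = input;   % this placeholder keeps no internal memory
end
```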

For the sake of simplicity, all of the noise corruption ($$w$$) in the measurements/estimates will be additive and white Gaussian, with

$$\hat \phi_p ({\bf u}_k) = \phi_p ({\bf u}_k) + w_{\phi,k}, \qquad \hat g_{p,j} ({\bf u}_k) = g_{p,j} ({\bf u}_k) + w_{j,k}$$

holding for any $$k$$, with $$w_{\phi,k} \sim \mathcal{N}(0,\sigma_\phi^2)$$ and $$w_{j,k} \sim \mathcal{N} (0,\sigma_j^2)$$. These probability distributions are assumed to be known by the user; i.e., a priority in these test problems, at least for the time being, is assessing how each algorithm rejects noise whose statistics are known. Problems for which the probability distributions are unknown may be added in the future, however.
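For given standard deviations, such corrupted measurements are straightforward to reproduce. As an illustrative sketch (all numerical values here are arbitrary placeholders):

```matlab
% Corrupt noise-free values phip (scalar) and gp (1 x 2) with white Gaussian noise
sigmaphi = 0.05; sigmag = [0.1 0.02];   % noise standard deviations (placeholders)
phip = 1.3; gp = [-0.2 0.4];            % noise-free function values (placeholders)
phiphat = phip + sigmaphi*randn;        % noisy cost measurement
gphat = gp + sigmag.*randn(1,2);        % noisy constraint measurements
```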

The average performance of an algorithm for a given test problem is obtained by solving the problem 100 times with different pre-generated noise elements. The following MATLAB code may be used to carry out this procedure:

function [perfave,convper] = algotest(u0,kfinal,sigmaphi,sigmag,uL,uU,ustar,Deltaphi,gpmax,gmax,algonum)
for i = 1:100
    % The pre-generated noise elements for run i are assumed to have been
    % loaded into the matrix "noise" at this point (see the download list below);
    % row 1 holds the cost noise, the following rows the constraint noise.
    wphi = noise(1,:);
    wg = noise(2:1+length(sigmag),:);
    u = u0;
    phiphat = [];
    gp = [];
    gphat = [];
    input = [];
    for k = 0:kfinal
        % evaluate the true cost (the file name differs between problems)
        if exist('phieval.m','file') == 2
            phip(k+1,1) = phieval(u(k+1,:));
        else
            phip(k+1,1) = phipeval(u(k+1,:));
        end
        phiphat(k+1,1) = phip(k+1,1) + sigmaphi*wphi(k+1);    % noisy cost measurement
        if ~isempty(sigmag)
            gp(k+1,:) = gpeval(u(k+1,:));                     % true constraint values
            gphat(k+1,:) = gp(k+1,:) + sigmag.*wg(:,k+1)';    % noisy constraint measurements
        end
        tic;
        [u(k+2,:),output] = algo(u,phiphat,gphat,sigmaphi,sigmag,uL,uU,algonum,input);
        input = output;    % pass the algorithm's internal memory on to the next call
        t(k+1) = toc;      % computation time of this iteration
    end
    perf(:,i) = perfeval(u(1:end-1,:),phip,gp,uL,uU,ustar,Deltaphi,gpmax,gmax,t);
end
% average the 11 performance metrics over the 100 runs
for i = 1:11
    if i < 8 || i == 11
        perfave(i,:) = [mean(perf(i,:)) std(perf(i,:))];
    else
        % metrics 8-10 are convergence times; runs that never converged
        % (metric value > kfinal) are excluded, and convper records how
        % many of the 100 runs converged for each of these metrics
        perf0 = perf(i,:);
        perf0(perf0 > kfinal) = [];
        perfave(i,:) = [mean(perf0) std(perf0)];
        convper(i-7) = length(perf0);
    end
end
end

Here, the input u0 is the initial decision-variable set $${\bf u}_0$$ (in row-vector form). kfinal is the number of additional experiments that are run to solve the problem. sigmaphi is $$\sigma_\phi$$, i.e., the standard deviation of the noise element of the cost, while sigmag is a row vector specifying the standard deviations of the noise elements of the experimental constraints, with sigmag(j) corresponding to $$\sigma_j$$. In the case that the problem has no experimental constraints, the setting sigmag = [] is used. uL and uU are row vectors corresponding to the lower and upper limits, $${\bf u}^L = (u_1^L, u_2^L,..., u_{n_u}^L)$$ and $${\bf u}^U = (u_1^U, u_2^U,..., u_{n_u}^U)$$. ustar is the best known solution to the problem, while Deltaphi, gpmax, and gmax are scaling parameters for the performance metrics. Finally, algonum specifies which algorithm should be tested. Apart from algonum, which is varied to test different algorithms, all of the other settings are fixed a priori and are provided together with each problem.
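As an illustration, a call for a hypothetical two-variable problem with one experimental constraint could look as follows (every numerical setting below is a placeholder; the actual values are supplied with each problem):

```matlab
u0 = [0.5 0.5];             % initial decision-variable set (row vector)
kfinal = 50;                % number of additional experiments
sigmaphi = 0.05;            % standard deviation of the cost noise
sigmag = 0.1;               % standard deviation(s) of the constraint noise
uL = [0 0]; uU = [1 1];     % lower and upper decision-variable limits
ustar = [0.8 0.3];          % best known solution (placeholder)
Deltaphi = 1; gpmax = 1; gmax = 1;   % performance-metric scaling parameters
[perfave,convper] = algotest(u0,kfinal,sigmaphi,sigmag,uL,uU,ustar,Deltaphi,gpmax,gmax,1);
```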

In order for this file to be executed, one needs to download:

• the pre-generated noise elements, extracting them to a folder on the MATLAB search path,
• the cost evaluation file phipeval.m/phieval.m and, if needed, the constraint evaluation files, gpeval.m and geval.m (available separately for each problem),
• models of the experimental functions, phimod.m and gmod.m, if the algorithm tested requires a model (available separately for each problem),
• any problem-dependent auxiliary files (available separately for each problem),
• the main algorithm file, algo.m,
• any algorithm-dependent auxiliary files (available separately for each algorithm),
• the performance evaluation file, perfeval.m.

All testing is carried out in MATLAB.