Letting \( k \) denote an experiment index, each of the algorithms tested must iteratively generate the next set of decision variables, \( {\bf u}_{k+1} \), when given the decision variables and the noisy cost and constraint measurements/estimates from experiments \( 0 \) through \( k \).

For the sake of simplicity, all of the noise corruption (\( w\)) in the measurements/estimates will be additive and white Gaussian, with

\[ \hat \phi_p ({\bf u}_k) = \phi_p ({\bf u}_k) + w_{\phi,k} \]
\[ \hat g_{p,j} ({\bf u}_k) = g_{p,j} ({\bf u}_k) + w_{j,k} \]

holding for any \( k \), with \( w_{\phi,k} \sim \mathcal{N}(0,\sigma_\phi^2) \) and \( w_{j,k} \sim \mathcal{N} (0,\sigma_j^2) \). These probability distributions are assumed to be known to the user; i.e., a priority in these test problems, at least for the time being, is evaluating how well each algorithm rejects noise whose statistics are known. Problems for which the probability distributions are unknown may be included in the future, however.
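As a minimal sketch of this noise model, a single noisy measurement could be simulated as follows (all numerical values here are illustrative placeholders, not taken from the test problems):

```matlab
% illustrative values (assumed, not part of the test set)
sigmaphi = 0.1;            % standard deviation of the cost noise
sigmag   = [0.05 0.2];     % standard deviations for two constraints
phip = 3.7;                % true (noise-free) cost value
gp   = [-0.4 1.2];         % true constraint values

% additive white Gaussian corruption, as in the equations above
phiphat = phip + sigmaphi*randn;
gphat   = gp + sigmag.*randn(1,2);
```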

The average performance of an algorithm for a given test problem is obtained by solving the problem 100 times, each time with a different set of pre-generated noise elements. The following MATLAB code may be used to carry out this procedure:

function [perfave,convper] = algotest(u0,kfinal,sigmaphi,sigmag,uL,uU,ustar,Deltaphi,gpmax,gmax,algonum)
% Test the chosen algorithm on 100 pre-generated noise realizations.
for i = 1:100
   noise = dlmread(['noise' num2str(i) '.txt']);
   wphi = noise(1,:);                 % noise elements for the cost
   wg = noise(2:1+length(sigmag),:);  % noise elements for the constraints
   u = u0;
   phiphat = [];
   gp = [];
   gphat = [];
   input = [];
   for k = 0:kfinal
      % evaluate the (noise-free) cost at the current decision variables
      if exist('phieval.m','file') == 2
         phip(k+1,1) = phieval(u(k+1,:));
      else
         phip(k+1,1) = phipeval(u(k+1,:));
      end
      % corrupt the cost measurement with additive noise
      phiphat(k+1,1) = phip(k+1,1) + sigmaphi*wphi(k+1);
      % evaluate and corrupt the experimental constraints, if any
      if ~isempty(sigmag)
         gp(k+1,:) = gpeval(u(k+1,:));
         gphat(k+1,:) = gp(k+1,:) + sigmag.*wg(:,k+1)';
      end
      % let the algorithm generate the next decision-variable set,
      % timing how long this takes
      tic;
      [u(k+2,:),output] = algo(u,phiphat,gphat,sigmaphi,sigmag,uL,uU,algonum,input);
      input = output;
      t(k+1) = toc;
   end
   % compute the performance metrics for this run
   perf(:,i) = perfeval(u(1:end-1,:),phip,gp,uL,uU,ustar,Deltaphi,gpmax,gmax,t);
end
% average the metrics over the 100 runs
for i = 1:11
   if i < 8 || i == 11
      perfave(i,:) = [mean(perf(i,:)) std(perf(i,:))];
   else
      % for metrics 8-10, discard runs that did not converge (values
      % exceeding kfinal) and record the number of runs that did
      perf0 = perf(i,:);
      perf0(perf0 > kfinal) = [];
      perfave(i,:) = [mean(perf0) std(perf0)];
      convper(i-7) = length(perf0);
   end
end
end

Here, the input u0 is the initial decision-variable set \( {\bf u}_0 \) (in row-vector form). kfinal is the number of additional experiments that are run to solve the problem. sigmaphi is \( \sigma_\phi \), i.e., the standard deviation of the noise element of the cost, while sigmag is a row vector specifying the standard deviations of the noise elements of the experimental constraints, with sigmag(j) corresponding to \( \sigma_j \). In the case that the problem has no experimental constraints, the setting sigmag = [] is used. uL and uU are row vectors corresponding to the lower and upper limits, \( {\bf u}^L = (u_1^L, u_2^L,..., u_{n_u}^L) \) and \( {\bf u}^U = (u_1^U, u_2^U,..., u_{n_u}^U) \). ustar is the best known solution to the problem, while Deltaphi, gpmax, and gmax are scaling parameters for the performance metrics. Finally, algonum specifies which algorithm should be tested. Apart from algonum, which is varied to test different algorithms, all of these settings are fixed for each problem a priori and are provided together with the problem.
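As an illustration, a hypothetical problem with two decision variables and one experimental constraint might be tested as follows (every value shown is a placeholder; the actual settings are provided with each problem):

```matlab
u0 = [0.5 0.5];           % initial decision variables (placeholder)
kfinal = 50;              % 50 additional experiments
sigmaphi = 0.1;           % cost-noise standard deviation
sigmag = 0.05;            % one experimental constraint
uL = [0 0]; uU = [1 1];   % decision-variable limits
ustar = [0.8 0.3];        % best known solution (placeholder)
Deltaphi = 1; gpmax = 1; gmax = 1;   % metric scaling parameters
algonum = 1;              % algorithm to test

[perfave,convper] = algotest(u0,kfinal,sigmaphi,sigmag,uL,uU, ...
                             ustar,Deltaphi,gpmax,gmax,algonum);
```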

In order for this file to be executed, one needs to download the pre-generated noise files (noise1.txt through noise100.txt), the problem-specific functions phipeval.m (or phieval.m) and gpeval.m, the performance-evaluation function perfeval.m, and the algorithm file algo.m.

All testing is carried out in MATLAB.