
The Optimization of an Objective Function with Step Functions

I've asked this question on Math SE, but the response wasn't very satisfactory, so I'm asking again here:

I have an optimization problem with linear inequality and equality constraints:

A*x<=b 
Aeq*x=beq

The problem is that the objective function is a sum of Heaviside step functions.

Here's the pseudocode for the objective function:

function f(k, c, x)
  ffunction = 0
  for i = 0; i < k.row.length; i++
    smallF = 0
    for j = 0; j < k.column.length; j++
      smallF += k.row[i]*k.column[j]*x[j] + c[j]
    end
    ffunction += u(smallF)
  end
  f = ffunction
end


function u(x)
  if (x > 0)
    return 1
  else
    return 0
  end
end
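
For reference, here's a vectorized one-liner for the same objective in MATLAB, assuming the pseudocode boils down to f(x) = sum_i u(K(i,:)*x + c(i)) for some coefficient matrix K and offset vector c (K and c here are stand-ins for whatever k and c encode above):

% Vectorized sketch of the objective, assuming it reduces to
% f(x) = sum_i u(K(i,:)*x + c(i)) with K m-by-n and c m-by-1
f = @(K, c, x) sum(K*x + c > 0);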

The suggestion I got was to approximate the step function with a smooth function and use nonlinear optimization. But is there anything in the MATLAB toolboxes that would let me solve this without the smooth-function conversion?


This problem can be solved exactly with a mixed-integer programming solver. I explain the details in my answer to your Math SE post; in summary, you introduce a binary variable and a linear inequality for each Heaviside term in the objective function.
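
For illustration, here is a sketch of that reformulation using the Optimization Toolbox's intlinprog, assuming the objective has the form sum_i u(K(i,:)*x + c(i)); K, c, and the big-M bound M are placeholders you would supply (M must upper-bound each K(i,:)*x + c(i) over the feasible set):

% Minimize sum_i z_i subject to K(i,:)*x + c(i) <= M*z_i, z binary,
% plus the original constraints A*x <= b, Aeq*x = beq.
[m, n] = size(K);
f      = [zeros(n,1); ones(m,1)];       % objective: count the active steps
Abig   = [A,  zeros(size(A,1), m);      % original inequalities
          K, -M*eye(m)];                % K*x + c <= M*z  (big-M coupling)
bbig   = [b; -c];
Aeqbig = [Aeq, zeros(size(Aeq,1), m)];
intcon = n+1 : n+m;                     % the z variables are integer...
lb     = [-inf(n,1); zeros(m,1)];       % ...and bounded to {0,1}
ub     = [ inf(n,1); ones(m,1)];
sol    = intlinprog(f, intcon, Abig, bbig, Aeqbig, beq, lb, ub);
x      = sol(1:n);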


In MATLAB, you do numerical optimization, which means you don't have to worry about the analytical form of your objective function. Instead, you write an objective function that, from the optimization parameters, produces for every x-value of your data a y-value that you can then compare with your input data.

With linear and non-linear constraints, you can use FMINCON to solve your problem.

I'm not entirely sure I understand what you want to optimize (sorry, it's a bit early), but for the sake of an example, let me assume you have a vector xdata of x-values and a vector ydata of y-values to which you want to fit a "stair function". You know how many steps there are, but not where they're placed. Also, you know that the sum of the step locations has to be 5 (a linear equality constraint).

You start by writing your objective function, whose output you want to drive as close to 0 as possible. This could be the sum of squared residuals (i.e., the differences between the real y-values and the estimated y-values). For convenience, I won't define the step locations via linear equations; I'll set them directly instead.

function out = objFun(loc,xdata,ydata)
%OBJFUN Squared sum of residuals for a stair-step approximation to ydata.
%   The step locations are given in the vector loc.

% Create the stairs. xdata is forced to n-by-1 and loc to 1-by-k, so that
% bsxfun produces an n-by-k logical array with ones in column j wherever
% xdata > loc(j); summing along the rows counts the steps below each x.
yhat = sum(bsxfun(@gt,xdata(:),loc(:)'),2);

% Sum of squared residuals
out = sum((ydata(:)-yhat).^2);
end

Save this function as objFun.m somewhere on your MATLAB path. Then define xdata and ydata (or load them from file), make an initial guess for loc (a vector with one element per step), build the array Aeq for the linear equality constraint such that Aeq*loc == beq (Aeq is [1 1 1] in case you have 3 steps), and write

locEst = fmincon(@(u)objFun(u,xdata,ydata),locInitialGuess,[],[],Aeq,5);

This will estimate the locations of the steps. Instead of the two empty arrays you can add inequality constraints (A and b), and the 5 is the right-hand side beq, since I assumed the equality constraint evaluates to 5.
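
A hypothetical end-to-end run, with made-up data for three steps whose locations sum to 5:

% Synthetic stair data (trueLoc is invented for this demo)
xdata = linspace(0, 6, 200)';
trueLoc = [0.5 1.5 3];                         % sums to 5
ydata = sum(bsxfun(@gt, xdata, trueLoc), 2);   % noiseless stairs
Aeq = [1 1 1];  beq = 5;                       % step locations must sum to 5
locInitialGuess = [1 2 2];
locEst = fmincon(@(u)objFun(u,xdata,ydata), locInitialGuess, [], [], Aeq, beq);
% Caveat: the objective is piecewise constant, so the gradient-based
% fmincon may stall near the initial guess; this only illustrates the call.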


Absolute minimization along any one dimension of X is a simple task: after an O(i log i) sort, a single linear sweep finds the exact minimizer.

1) Pick an Xj to modify (all others are fixed).

2) Create a list of length i containing the switching location of each equation along Xj, each mapped to +1 if that term switches on (0 to 1) as Xj increases past it, or -1 if it switches off (1 to 0).

3) Sort by switching location. Start a running sum at 0 below the minimum switching location, and add or subtract the mapped value as you pass each one.

4) Track the minimum running sum as you step through this list, remembering the switching location at which it occurs.

5) At the end of the list, that best switching location becomes your new value for Xj.

From here, you can do coordinate descent: repeat this one-dimensional minimization along each dimension of X in turn until you stop improving the result. That would at least find a local minimum. A sketch of the sweep follows below.
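
Here's a sketch of that sweep in MATLAB, assuming each term has the affine form u(A(i,:)*x + c(i)) with u(s) = 1 for s > 0 (the function name and exact signature are my own):

function [tBest, fBest] = sweepMinDim(A, c, x, j)
%SWEEPMINDIM Exact minimization of f(t) = sum_i u(A(i,:)*x + c(i)) over
%   the j-th coordinate t = x(j), with all other coordinates held fixed.

a = A(:,j);                        % slope of each term along dimension j
r = A*x - a*x(j) + c;              % fixed contribution of the other coordinates

fConst = sum(r(a == 0) > 0);       % terms that do not depend on x(j)
keep = a ~= 0;
a = a(keep);  r = r(keep);

if isempty(a)                      % objective does not depend on x(j)
    tBest = x(j);  fBest = fConst;  return
end

t = -r./a;                         % switching location of each term
d = sign(a);                       % +1 switches on past t, -1 switches off
[t, order] = sort(t);
d = d(order);

base = fConst + sum(a < 0);        % value as t -> -inf (negative slopes are on)
f = base + cumsum(d);              % value in each interval (t(k), t(k+1))

[fBest, m] = min([base; f]);       % m = 1 is the interval (-inf, t(1))
if m == 1
    tBest = t(1) - 1;
elseif m == numel(t) + 1
    tBest = t(end) + 1;
else
    tBest = (t(m-1) + t(m))/2;     % midpoint of the winning interval
end                                % (assumes distinct switching locations)
end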


Alternatively, you could use a canned local optimization code that doesn't require gradients, for instance the Nelder-Mead downhill simplex method; MATLAB's built-in fminsearch implements it.
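
A minimal sketch using it on this problem: fminsearch is unconstrained, so here the constraints are folded into a quadratic penalty (fObj, penaltyWeight, and x0 are assumptions of mine):

% Nelder-Mead via built-in fminsearch; constraints via a crude penalty
penalty = @(x) sum(max(A*x - b, 0).^2) + sum((Aeq*x - beq).^2);
xEst = fminsearch(@(x) fObj(x) + penaltyWeight*penalty(x), x0);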


Both of these approaches will give you local optima. If you really need a more global solution, look into simulated annealing or genetic algorithms, which would probably work quite well with this type of problem.


Finally, if you have money to spend (it's a paid add-on, not free), I'd check out the MATLAB Global Optimization Toolbox.
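
For what it's worth, that toolbox's ga handles your linear constraints directly and doesn't mind a discontinuous objective; a sketch, with fObj and the variable count n as my assumptions:

% Genetic algorithm with linear constraints (Global Optimization Toolbox).
% ga passes the objective a 1-by-n row vector, hence the x(:) reshape.
xEst = ga(@(x) fObj(x(:)), n, A, b, Aeq, beq);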
