
Aggregate/Extrapolate sparse samples in multi-dimensional space

Imagine a function that evaluates the elevation of a surface, given floating-point x and y coordinates (plus additional components, depending on the dimensionality):

double ComputeElevation(double x, double y, ..., double z) { }

This is not an analytic function, so its derivatives cannot be computed directly. What I need to do is find the direction in which the surface is steepest for any given {x, y} pair. A single evaluation could be very expensive (think seconds or even minutes in worst-case scenarios).

My typical approach in the 2D case would be to sample the surface at N locations adjacent to {x, y}, fit a curve through those samples, and then search that curve for its highest point; searching the fitted curve does not suffer from the expensive evaluation:

[Image: four samples S0..S3 placed around P0, a curve fitted through them, and the curve's peak PM]

In the above image, P0 is the given coordinate, {S0, S1, S2, S3} are four more-or-less randomly placed samples around it, and PM is the highest point on the fitted curve. The vector PM - P0 therefore gives the direction of steepest ascent.
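For concreteness, here is a minimal C# sketch of that 2D procedure. It assumes the samples sit at equally spaced angles on a small circle around P0 (rather than being placed randomly), because then the least-squares fit of a + b*cos(theta) + c*sin(theta) has a closed form and the fitted curve peaks at atan2(c, b). ComputeElevation here is a cheap hypothetical placeholder standing in for the real, expensive evaluation.

using System;

static class SteepestAscent2D
{
    // Placeholder for the expensive black-box evaluation.
    static double ComputeElevation(double x, double y)
    {
        return -(x * x) - 0.5 * y * y; // hypothetical test surface
    }

    // Sample k points on a circle of the given radius around (x0, y0),
    // fit f(theta) ~ a + b*cos(theta) + c*sin(theta) by least squares,
    // and return the angle at which the fitted curve peaks. For a
    // locally linear surface this is the direction of steepest ascent.
    public static double EstimateSteepestAngle(double x0, double y0,
                                               double radius, int k)
    {
        double b = 0.0, c = 0.0;
        for (int i = 0; i < k; i++)
        {
            double theta = 2.0 * Math.PI * i / k; // equally spaced angles
            double f = ComputeElevation(x0 + radius * Math.Cos(theta),
                                        y0 + radius * Math.Sin(theta));
            // Closed-form least-squares coefficients for equally spaced
            // samples; the constant term a drops out because it does not
            // affect where the peak sits.
            b += f * Math.Cos(theta) * 2.0 / k;
            c += f * Math.Sin(theta) * 2.0 / k;
        }
        return Math.Atan2(c, b); // angle of the vector PM - P0
    }
}

With k = 4 this uses exactly the four samples from the picture; more samples average out noise at the cost of more expensive evaluations.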

But I have no idea how to scale this up to N dimensions, or whether there are much smarter algorithms for doing this.

The number of dimensions is potentially quite large (dozens to hundreds), so whatever method I end up using must work when there are fewer samples than there are dimensions. I'm not looking for the exact answer, which would be impossible anyway; a half-way decent approximation would already be most welcome.

P.S. I'm doing this in C#. Not that it matters a great deal, but I don't have access to non-C# language features.


It looks like you are trying to estimate the gradient from a set of random samples near a given point.

Unfortunately, if you have n dimensions you need a minimum of n+1 points to properly estimate the gradient. With fewer points, dimensions have to drop out, and you'll be estimating a lower-dimensional projection of the gradient. That said, if you capture k dimensions, the odds are that your projection will recover about sqrt(k/n) of the length of the true gradient.
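(A rough justification for that factor, assuming the sampled directions behave like a uniformly random k-dimensional subspace: by symmetry, each of the n dimensions carries an equal share of the squared gradient length in expectation, so

E[ |P g|^2 ] = (k/n) * |g|^2,   hence   |P g| is about sqrt(k/n) * |g|,

where g is the true gradient and P is the projection onto the sampled subspace.)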

Here is one approach. Suppose that you have sampled k+1 random points around your point, and further assume that they are linearly independent. Pick one of them as your "origin"; that leaves you with k directions. Find n-k additional points that are orthogonal to all of the previous ones, and assign each of them your origin's value. (That causes those dimensions to not be represented in the gradient.)

Now you have n vectors and, for each, an estimate of the dot product of the gradient with it. Take each standard unit vector and write it as a linear combination of your vectors; the same linear combination of your slopes gives an estimate of that component of the gradient. Do this for all of the unit vectors, combine the resulting components, and voila, you have an estimate of your gradient.
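One way to turn that recipe into code (this is my reading of it in matrix form, not code from the answer): pick one sample as the origin, stack the k displacements d_i = S_i - S_origin as the rows of a k-by-n matrix A, and put the slopes b_i = f(S_i) - f(S_origin) into a vector b. The minimum-norm solution g = A^T (A A^T)^-1 b of A g = b reproduces the construction above, because it is zero along every direction orthogonal to the samples. A self-contained C# sketch, assuming the displacements are linearly independent:

using System;

static class GradientEstimator
{
    // Minimum-norm gradient estimate from k samples around a base point.
    // displacements[i] = S_i - S_origin (length-n, assumed linearly independent)
    // slopes[i]        = f(S_i) - f(S_origin)
    public static double[] Estimate(double[][] displacements, double[] slopes)
    {
        int k = displacements.Length;
        int n = displacements[0].Length;

        // Gram matrix G = A * A^T (k x k).
        double[,] gram = new double[k, k];
        for (int i = 0; i < k; i++)
            for (int j = 0; j < k; j++)
                for (int d = 0; d < n; d++)
                    gram[i, j] += displacements[i][d] * displacements[j][d];

        // Solve G * y = slopes by Gaussian elimination with partial pivoting.
        double[] y = (double[])slopes.Clone();
        for (int col = 0; col < k; col++)
        {
            int pivot = col;
            for (int row = col + 1; row < k; row++)
                if (Math.Abs(gram[row, col]) > Math.Abs(gram[pivot, col])) pivot = row;
            for (int c = 0; c < k; c++)
                (gram[col, c], gram[pivot, c]) = (gram[pivot, c], gram[col, c]);
            (y[col], y[pivot]) = (y[pivot], y[col]);

            for (int row = col + 1; row < k; row++)
            {
                double factor = gram[row, col] / gram[col, col];
                for (int c = col; c < k; c++) gram[row, c] -= factor * gram[col, c];
                y[row] -= factor * y[col];
            }
        }
        for (int row = k - 1; row >= 0; row--)
        {
            for (int c = row + 1; c < k; c++) y[row] -= gram[row, c] * y[c];
            y[row] /= gram[row, row];
        }

        // Gradient estimate g = A^T * y; components orthogonal to all
        // samples come out as zero, i.e. those dimensions drop out.
        double[] grad = new double[n];
        for (int d = 0; d < n; d++)
            for (int i = 0; i < k; i++)
                grad[d] += y[i] * displacements[i][d];
        return grad;
    }
}

If the displacements are nearly dependent, the Gram matrix becomes ill-conditioned, and a proper least-squares routine (for example from Math.NET Numerics) would be a safer choice than this toy elimination.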

Please note that if you try to use a mix of near and far points, some of which are not linearly independent, then this approach won't work and you'll need to do something much more complicated.


I'm not totally clear on why computing the curve is cheaper than sampling points at random, but this reminds me of http://en.wikipedia.org/wiki/Gradient_descent. You can think of your problem as trying to optimize the difference in elevation between the current location and a new point. It may be faster than trying random points, and it is easy to generalize to multiple dimensions.

Since the function is probably not monotonically increasing, you might want to constrain the search to a boundary (say, within x units of the starting point).
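As an illustration of that boundary idea (my sketch, not code from the answer): one ascent step that never moves more than a given maxStep away from the current point, using whatever gradient estimate the sampling above produced. maxStep is a hypothetical tuning parameter.

using System;

static class Ascent
{
    // One bounded steepest-ascent step: move along the estimated gradient,
    // but clamp the move so we never leave the neighbourhood in which the
    // local gradient estimate can be trusted.
    public static double[] Step(double[] p0, double[] grad, double maxStep)
    {
        double norm = 0.0;
        foreach (double gi in grad) norm += gi * gi;
        norm = Math.Sqrt(norm);
        if (norm == 0.0) return (double[])p0.Clone(); // locally flat: stay put

        double scale = Math.Min(1.0, maxStep / norm); // clamp the step length
        double[] p1 = new double[p0.Length];
        for (int d = 0; d < p0.Length; d++)
            p1[d] = p0[d] + scale * grad[d];
        return p1;
    }
}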
