
Determine points within a given radius algorithm

I'm not sure which mathematical concept underpins my question. ^^

Let's say we have PointA as the reference. The problem is to find the points around PointA within a given radius (using coordinates). My approach would be to compute the distance to every point (Pythagorean theorem) and then compare it with the given radius. I'm sure that this would suck in terms of complexity.

What algorithm may you suggest? A sample code to point things out would be much appreciated. Thanks.


I'd start by creating a box around the circle and first testing whether anything falls inside the box. That way you avoid calculating square roots and squares all the time. Pick one edge of the box (say the left side) and calculate its x value:

xMin = xOrigin - radius

Then anything that satisfies

xTest < xMin

can be ignored. Repeat something similar for all four sides. The moment a test fails, stop working on that point; don't do unnecessary calculations.

This tells you that a point is close but not necessarily within the radius. Next calculate:

sqr(xOrigin - xTest) + sqr(yOrigin - yTest)

if this is less than radius*radius (which you pre-calculate to avoid using square roots) then you have a point within the radius.

That's the best I can figure with first pre-structuring the data.
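A minimal sketch of that box-then-squared-distance filter in Java (the class, method, and parameter names here are illustrative, not from the question):

import java.util.ArrayList;
import java.util.List;

class RadiusFilter {
    // Returns all points from 'points' (each a {x, y} pair) that lie within
    // 'radius' of the reference point (xOrigin, yOrigin).
    static List<double[]> pointsWithin(double[][] points,
                                       double xOrigin, double yOrigin,
                                       double radius) {
        double xMin = xOrigin - radius, xMax = xOrigin + radius;
        double yMin = yOrigin - radius, yMax = yOrigin + radius;
        double r2 = radius * radius;   // pre-computed so no square roots are needed
        List<double[]> result = new ArrayList<>();
        for (double[] p : points) {
            // Cheap bounding-box rejection first; bail out at the first failed test.
            if (p[0] < xMin || p[0] > xMax || p[1] < yMin || p[1] > yMax) continue;
            // Exact check: squared distance against squared radius.
            double dx = p[0] - xOrigin, dy = p[1] - yOrigin;
            if (dx * dx + dy * dy <= r2) result.add(p);
        }
        return result;
    }
}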


If your points are not indexed, that's actually an optimal algorithm. There are n points, and it'll take O(n) time to search all of them, in the absence of any other index.

The one micro-optimization is to skip the sqrt operation and compare the sum of the squares of the coordinate deltas with the square of the desired radius.

If you're going to be making multiple queries against the same data set, there are various indexing schemes you can use that take some time to compute (O(n log n)) but make the lookups faster (O(m + log n), where m is the number of points found).

kd-trees are probably the place to start there.
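As an illustration only, a minimal 2D kd-tree with a radius query might look like the sketch below (the class and method names are hypothetical, not from any library, and the build simply re-sorts each subrange, which is an easy O(n log² n) construction rather than a tuned O(n log n) one):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

class KdTree {
    private static class Node {
        final double x, y;
        Node left, right;
        Node(double x, double y) { this.x = x; this.y = y; }
    }

    private final Node root;

    // Builds the tree by splitting on the median of the current axis,
    // alternating between x (even depth) and y (odd depth).
    // Note: this reorders the caller's array.
    KdTree(double[][] points) {
        root = build(points, 0, points.length, 0);
    }

    private Node build(double[][] pts, int lo, int hi, int depth) {
        if (lo >= hi) return null;
        int axis = depth % 2;
        Arrays.sort(pts, lo, hi, Comparator.comparingDouble((double[] p) -> p[axis]));
        int mid = (lo + hi) / 2;
        Node n = new Node(pts[mid][0], pts[mid][1]);
        n.left = build(pts, lo, mid, depth + 1);
        n.right = build(pts, mid + 1, hi, depth + 1);
        return n;
    }

    // Collects every stored point within 'radius' of (qx, qy). Subtrees whose
    // splitting plane lies farther than 'radius' from the query are pruned.
    List<double[]> radiusSearch(double qx, double qy, double radius) {
        List<double[]> result = new ArrayList<>();
        search(root, qx, qy, radius, radius * radius, 0, result);
        return result;
    }

    private void search(Node n, double qx, double qy, double r, double r2,
                        int depth, List<double[]> out) {
        if (n == null) return;
        double dx = n.x - qx, dy = n.y - qy;
        if (dx * dx + dy * dy <= r2) out.add(new double[] { n.x, n.y });

        // Signed distance from the query to this node's splitting plane.
        double diff = (depth % 2 == 0) ? dx : dy;
        if (diff > 0) {
            // Query lies on the "smaller" side: always search left,
            // and cross the plane only if the circle reaches it.
            search(n.left, qx, qy, r, r2, depth + 1, out);
            if (diff <= r) search(n.right, qx, qy, r, r2, depth + 1, out);
        } else {
            search(n.right, qx, qy, r, r2, depth + 1, out);
            if (-diff <= r) search(n.left, qx, qy, r, r2, depth + 1, out);
        }
    }
}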


Your only complexity here is the calculation of distance. Just sieve and simplify that calculation and you are optimal.

If your 'centre' is A(x,y), then for any point B(x1, y1) consider:

1/ If B is within your required distance d of point A, then it follows that both |x - x1| < d and |y - y1| < d. Check these conditions first to filter out any 'low-hanging fruit' exclusions.

2/ Rather than calculate the distance, calculate the square of the distance and compare it to the square of the maximum allowed distance (which you should obviously precalculate and reference, rather than recalculate every time). It means not having to compute a square root for each point.

These are quite minor optimisations, but assuming that the points are unsorted and random this is the best you will get.


The best answer would depend on the number of dimensions. I assume you work in 2D or 3D space.

A simple approach would be to make a uniform grid with a cell size equal to, say, the query radius R, and then assign every point to its cell.

Each query circle then intersects only a handful of cells (nine, when the cell size equals the radius), so you only need to inspect the points in those cells.
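A minimal sketch of that grid bucketing in Java (assuming the cell size equals the query radius; the class and method names are illustrative):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class PointGrid {
    private final double cellSize;
    private final Map<Long, List<double[]>> cells = new HashMap<>();

    PointGrid(double cellSize) { this.cellSize = cellSize; }

    private long cellOf(double v) { return (long) Math.floor(v / cellSize); }

    // Pack the two cell indices into one map key.
    private long key(long cx, long cy) { return (cx << 32) ^ (cy & 0xffffffffL); }

    void add(double x, double y) {
        cells.computeIfAbsent(key(cellOf(x), cellOf(y)), k -> new ArrayList<>())
             .add(new double[] { x, y });
    }

    // With cellSize == radius, only the 3x3 block of cells around the query
    // cell can contain a point within 'radius' of (qx, qy).
    List<double[]> query(double qx, double qy, double radius) {
        List<double[]> result = new ArrayList<>();
        double r2 = radius * radius;
        long cx = cellOf(qx), cy = cellOf(qy);
        for (long i = cx - 1; i <= cx + 1; i++) {
            for (long j = cy - 1; j <= cy + 1; j++) {
                List<double[]> bucket = cells.get(key(i, j));
                if (bucket == null) continue;
                for (double[] p : bucket) {
                    double dx = p[0] - qx, dy = p[1] - qy;
                    if (dx * dx + dy * dy <= r2) result.add(p);
                }
            }
        }
        return result;
    }
}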

A more efficient approach would be to construct a quadtree or KD-tree (there are plenty of other options, but for 2D a quadtree or KD-tree would do).

With hierarchical structures you need a circle-rectangle intersection check, but that is essentially the KD-tree nearest-neighbour algorithm with a small modification.
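A short sketch of that circle-rectangle test (a standalone method with illustrative names): clamp the circle centre onto the node's bounding box and compare the squared distance to that closest point with the squared radius.

static boolean circleIntersectsRect(double cx, double cy, double r,
                                    double minX, double minY,
                                    double maxX, double maxY) {
    // Closest point of the rectangle to the circle centre.
    double nearestX = Math.max(minX, Math.min(cx, maxX));
    double nearestY = Math.max(minY, Math.min(cy, maxY));
    double dx = cx - nearestX, dy = cy - nearestY;
    return dx * dx + dy * dy <= r * r;
}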

If you are really concerned about performance, it is possible to construct multiple grids (shifted or rotated) and always select the one with the cell most centered on the point so that the search is limited to the least number of cells.
