How to determine edges in an image optimally?
I was recently faced with the problem of cropping and resizing images. I need to crop the 'main content' of an image. For example, given an image similar to this:
(source: msn.com)
the result should be an image containing the MSN content without the white margins (left and right).
I search along the X axis for the first and last color change, and do the same along the Y axis. The problem is that traversing the image line by line takes a while: for a 2000x1600px image it takes up to 2 seconds to return the CropRect (x1, y1, x2, y2) data.
I tried running a separate traversal for each coordinate and stopping at the first value found, but it didn't work in all test cases: sometimes the returned data wasn't what I expected, and the duration of the operation was similar.
Any ideas on how to cut down the traversal time and speed up discovery of the rectangle around the 'main content'?
public static CropRect EdgeDetection(Bitmap Image, float Threshold)
{
    CropRect cropRectangle = new CropRect();
    int lowestX = Image.Width;
    int lowestY = Image.Height;
    int largestX = 0;
    int largestY = 0;
    // Scan every pixel, comparing it with its right and lower neighbours,
    // and record the extreme coordinates at which an edge is found.
    for (int y = 0; y < Image.Height - 1; ++y)
    {
        for (int x = 0; x < Image.Width - 1; ++x)
        {
            Color currentColor = Image.GetPixel(x, y);
            Color tempXColor = Image.GetPixel(x + 1, y);
            Color tempYColor = Image.GetPixel(x, y + 1);
            // Euclidean distance in RGB space to the horizontal neighbour.
            if (Math.Sqrt(((currentColor.R - tempXColor.R) * (currentColor.R - tempXColor.R)) +
                          ((currentColor.G - tempXColor.G) * (currentColor.G - tempXColor.G)) +
                          ((currentColor.B - tempXColor.B) * (currentColor.B - tempXColor.B))) > Threshold)
            {
                if (lowestX > x)
                    lowestX = x;
                if (largestX < x)
                    largestX = x;
            }
            // Euclidean distance in RGB space to the vertical neighbour.
            if (Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) +
                          ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) +
                          ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)
            {
                if (lowestY > y)
                    lowestY = y;
                if (largestY < y)
                    largestY = y;
            }
        }
    }
    // Apply a small margin, but only crop if the content starts within the
    // first quarter of the image.
    if (lowestX < Image.Width / 4)
        cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0;
    else
        cropRectangle.X = 0;
    if (lowestY < Image.Height / 4)
        cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0;
    else
        cropRectangle.Y = 0;
    cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8;
    cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8;
    return cropRectangle;
}
One possible optimisation is to use LockBits to access the color values directly rather than going through the much slower GetPixel.
The Bob Powell page on LockBits is a good reference.
On the other hand, my testing has shown that the overhead associated with LockBits makes that approach slower if you try to write a GetPixelFast equivalent to GetPixel and drop it in as a replacement. Instead you need to ensure that all pixel access is done in one hit rather than multiple hits. This should fit nicely with your code, provided you don't lock/unlock on every pixel.
Here is an example:
BitmapData bmd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
                            System.Drawing.Imaging.ImageLockMode.ReadOnly, b.PixelFormat);
unsafe
{
    byte* row = (byte*)bmd.Scan0 + (y * bmd.Stride);
    // Within each pixel the bytes are laid out Blue, Green, Red.
    Color c = Color.FromArgb(row[x * pixelSize + 2], row[x * pixelSize + 1], row[x * pixelSize]);
}
b.UnlockBits(bmd);
Two more things to note:
- This code is unsafe because it uses pointers
- This approach depends on pixel size within Bitmap data, so you will need to derive pixelSize from bitmap.PixelFormat
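For example, one reasonable way to derive it is from the bits-per-pixel of the format (a sketch; this assumes a simple RGB format such as 24bpp or 32bpp):
int pixelSize = System.Drawing.Image.GetPixelFormatSize(b.PixelFormat) / 8; // bits per pixel -> bytes per pixel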
GetPixel is probably your main culprit (I recommend running some profiling tests to track it down), but you could restructure the algorithm like this:
- Scan the first row (y = 0) from left-to-right and from right-to-left, and record the first and last edge locations. It's not necessary to check all pixels, since you only want the extreme edges.
- Scan all subsequent rows, but now we only need to search outward (from center toward edges), starting at our last known minimum edge. We want to find the extreme boundaries, so we only need to search in the region where we could find new extrema.
- Repeat the first two steps for the columns, establishing initial extrema and then using those extrema to iteratively bound the search.
This should greatly reduce the number of comparisons if your images are typically mostly content. The worst case is a completely blank image, for which this would probably be less efficient than the exhaustive search.
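As an illustration, here is a minimal sketch of the bounded row scan for the horizontal extrema (the vertical case is symmetric). isEdge is a hypothetical predicate wrapping the color-distance test from the question; note that the scan here runs from the image border inward, so it can stop at the first edge it finds:
static void FindHorizontalBounds(Bitmap img, Func<int, int, bool> isEdge,
                                 out int lowestX, out int largestX)
{
    // Start at Width - 1 so the neighbour comparison at x + 1 stays in bounds.
    lowestX = img.Width - 1;
    largestX = -1;
    for (int y = 0; y < img.Height; ++y)
    {
        // Only search left of the current minimum; pixels at or beyond it
        // cannot improve the bound, so each row's scan shrinks over time.
        for (int x = 0; x < lowestX; ++x)
            if (isEdge(x, y)) { lowestX = x; break; }
        // Likewise, only search right of the current maximum.
        for (int x = img.Width - 2; x > largestX; --x)
            if (isEdge(x, y)) { largestX = x; break; }
    }
}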
In extreme cases, image processing can also benefit from parallelism (split the image up and process it in multiple threads on a multi-core CPU), but this is quite a bit of additional work and there are other, simpler changes you can still make. Threading overhead tends to limit the applicability of this technique; it's mainly helpful if you expect to run this in 'real time', repeatedly processing incoming data (to make up for the initial setup costs).
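If you do go that route, here is a rough sketch of merging per-thread minima with Parallel.For. FindRowEdge is a hypothetical method that scans a single row of already-locked pixel data and returns the leftmost edge column (or int.MaxValue if none); note that GetPixel itself is not safe to call concurrently from multiple threads:
int lowestX = int.MaxValue;
object gate = new object();
Parallel.For(0, height,
    () => int.MaxValue,                                      // per-thread running minimum
    (y, state, localMin) => Math.Min(localMin, FindRowEdge(y)),
    localMin => { lock (gate) lowestX = Math.Min(lowestX, localMin); });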
This won't improve the asymptotic complexity... but if you square your threshold, you won't need to take a square root, which is very expensive.
That should give a significant speed increase.
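For instance, the horizontal test from the question could be rewritten along these lines (a sketch; the deltas are just the per-channel differences computed once):
float thresholdSquared = Threshold * Threshold; // compute once, outside the loops
int dr = currentColor.R - tempXColor.R;
int dg = currentColor.G - tempXColor.G;
int db = currentColor.B - tempXColor.B;
// Squaring both sides preserves the comparison, so no Math.Sqrt is needed.
if (dr * dr + dg * dg + db * db > thresholdSquared)
{
    // edge found; update the bounds exactly as before
}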