
How to separate an image into two with Java

I'm wondering if there is a "smart" way of splitting an image based on certain features.

The images are 300x57, black and white (actually grayscale, but most pixels are either black or white). Each one is composed of two main features (let's call them blobs) separated by black space. Each blob varies slightly in width and height, the position of the blobs also varies, and the blobs NEVER overlap!

Here is what an image "looks" like:

-------------------------
----WWW---------WWWWW----
---WWWWWWW----WWWWWW-----
-----WWWW-------WWW------
-------------------------

The resulting split would be something like this:

------------     -------------
----WWW-----     ----WWWWW----
---WWWWWWW--     --WWWWWW-----
-----WWWW---     ----WWW------
------------     -------------

Steps I plan to take in order to split the image:

  1. Scan the image from one side to the other.
  2. Determine the edges of the blobs.
  3. Take the distance between the two inside edges.
  4. Split the image at the middle of the inside distance.
  5. Save the two images as separate files.

It would also be nice to normalize the image widths, so all of my images have a uniform width when they're saved.
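For that normalization step, here is a minimal sketch using only java.awt: it centers a split image on a fixed-width canvas. targetWidth is a hypothetical value, and the black fill is an assumption based on the black space described above; adjust both to match the actual images. A source wider than targetWidth would be cropped.

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

static BufferedImage normalizeWidth(BufferedImage src, int targetWidth)
{
    BufferedImage out = new BufferedImage(targetWidth, src.getHeight(),
            BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g = out.createGraphics();
    g.setColor(Color.BLACK);                            // assumed background color
    g.fillRect(0, 0, targetWidth, src.getHeight());
    g.drawImage(src, (targetWidth - src.getWidth()) / 2, 0, null);  // center horizontally
    g.dispose();
    return out;
}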

I have no experience in image manipulation, so I don't know what's an efficient way to do this. I'm currently using a BufferedImage, getting the width/height, iterating over each pixel, etc. There is no wrong solution to my problem, but I'm looking for a more efficient one (less code + faster). I've also been looking into java.awt.Graphics...

I would appreciate some ideas for more efficient ways to do this task. I want to stick with Java's built-in libraries, so is BufferedImage or Graphics2D the most efficient thing to use in this case?

EDIT: Here is the code after reading the suggestions:

public void splitAndSaveImage( BufferedImage image ) throws IOException
{
    // Process image ------------------------------------------         
    int height = image.getHeight();
    int width = image.getWidth();
    boolean edgeDetected = false;
    double averageColor = 0;
    int threshold = -10;   // column averages below this contain blob pixels
    int rightEdge = 0;     // will hold the left edge of the right blob
    int leftEdge = 0;      // will hold the right edge of the left blob
    int middle = 0;

    // Scan the image and determine the edges of the blobs.
    for(int w = 0; w < width; ++w)
    {               
        for(int h = 0; h < height; ++h)
        {
            averageColor += image.getRGB(w, h); // packed ARGB: white is -1, darker pixels are more negative
        }

        averageColor = Math.round(averageColor/(double)height);

        if( averageColor < threshold && !edgeDetected )
        {
            // Entered a blob; the last blob entered leaves this at the
            // left edge of the right blob (the right side of the gap)
            edgeDetected = true;
            rightEdge = w;
        }else if( averageColor >= threshold && edgeDetected )
        {
            // Left a blob; keep only the first exit, the right edge of
            // the left blob (the left side of the gap)
            edgeDetected = false;
            leftEdge = leftEdge==0? w:leftEdge;
        }

        averageColor = 0;
    }

    // Split the image at the middle of the inside distance.
    middle = (leftEdge + rightEdge)/2;

    // Crop the image
    BufferedImage leftImage = image.getSubimage(0, 0, middle, height);

    BufferedImage rightImage = image.getSubimage(middle, 0, (width-middle), height);

    // Save to file -------------------------------------------
    ImageIO.write(leftImage, "jpeg", new File("leftImage.jpeg"));

    ImageIO.write(rightImage, "jpeg", new File("rightImage.jpeg"));
}


A simple way to do this is to sum the pixel values in each column (going down) and divide by the image height, creating a single array (the same width as your input image) of column averages. Starting in the middle of the array, search for the minimum value. This will be the column where you can split the image.

This column probably won't be the center of the gap between your blobs. You can do another outward search from this column, going left first to find all similar columns, and then going right.

-------------------------
----WWW---------WWWWW----
---WWWWWWW----WWWWWW-----
-----WWWW-------WWW------
-------------------------

col avg:

---wwWWwww-----wWWWWww---

Depending on how blank (pixel-value-wise) the space between the two blobs is, you can set your threshold value pretty low. If there is some noise, it will have to be a little higher.

Finding the right threshold value can be a chore, unless you can determine it algorithmically.
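A minimal sketch of this column-projection approach, assuming white blobs on a dark background as in the ASCII art above (invert the comparisons for dark blobs on a light background); blankThreshold is the assumed tuning value discussed above:

import java.awt.image.BufferedImage;

static int findSplitColumn(BufferedImage image, double blankThreshold)
{
    int width = image.getWidth(), height = image.getHeight();
    double[] colAvg = new double[width];
    for (int x = 0; x < width; x++)
    {
        long sum = 0;
        for (int y = 0; y < height; y++)
        {
            sum += image.getRGB(x, y) & 0xFF;   // gray level 0..255, black = 0
        }
        colAvg[x] = sum / (double) height;
    }

    // Walk outward from the center to the nearest blank (dark) column.
    int mid = width / 2;
    int split = -1;
    for (int offset = 0; offset < width / 2 && split < 0; offset++)
    {
        if (colAvg[mid - offset] <= blankThreshold) split = mid - offset;
        else if (colAvg[mid + offset] <= blankThreshold) split = mid + offset;
    }
    if (split < 0) return width / 2;   // no blank column found; fall back to the center

    // Expand outward to the whole run of blank columns, as described
    // above, and split at its middle.
    int left = split, right = split;
    while (left > 0 && colAvg[left - 1] <= blankThreshold) left--;
    while (right < width - 1 && colAvg[right + 1] <= blankThreshold) right++;
    return (left + right) / 2;
}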


I'm not aware of an edge detection algorithm that doesn't require iterating through the pixels, so your present approach may be optimal. Depending on other factors, you may be able to leverage ImageJ, which has an extensive collection of analytical plugins.

Addendum: Given a preference for avoiding external dependencies, BufferedImage is a good choice. Once you identify the edges, the getSubimage() method is convenient. You may be able to use one of the Raster getPixels() methods effectively in the column-averaging pass. ImageIO can write the results.
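For instance, a column-averaging pass built on Raster.getPixels() might look like this sketch, which reads one full column of samples per call instead of calling getRGB() once per pixel:

import java.awt.image.BufferedImage;
import java.awt.image.Raster;

static double[] columnAverages(BufferedImage image)
{
    Raster raster = image.getRaster();
    int width = image.getWidth(), height = image.getHeight();
    int[] column = new int[height * raster.getNumBands()];
    double[] averages = new double[width];
    for (int x = 0; x < width; x++)
    {
        raster.getPixels(x, 0, 1, height, column);   // every sample in column x
        long sum = 0;
        for (int sample : column) sum += sample;
        averages[x] = sum / (double) column.length;
    }
    return averages;
}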


Does the gap between blobs matter? If you don't need to balance the white space, less work would be needed to just find a vertical white line between blobs. Check if the center vertical line has only white pixels. If the middle line has a black pixel, scan left and right for the first line that has only white pixels. To check for situations where both blobs are to one side of center, scan a horizontal line for black-white-black intervals. If the selected vertical line is within a white interval surrounded by black intervals, you'll know there's at least one blob on each side of the image split.

Failing these checks would require scanning additional lines, but for all well-formed images, where the blobs are centered in the right and left halves of the image, this method will take only two line scans. It may take longer for other images, or even break on edge cases. It would break for this example:

-------------------------
----WWW----WWWWWWWWWW----
---WWWWWWW----WWWWWW-----
-----WWWWWWWW---WWW------
-------------------------

But the question seems to indicate this situation is impossible. If the reason behind this image splitting requires processing every image, you'll need a fallback method; if edge-case images can simply be rejected, you won't. Once the scanning finds that an image falls outside acceptable ranges, you can stop checking it. For example, if no all-white vertical line can be found in the center third of the image, you may be able to reject it. Or you can use this method purely as an optimization: run the two-line check to find and split the well-formed images, and pass the poorly formed ones to a more thorough algorithm.
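A sketch of the center-line search, assuming (as this answer does) that the gap between blobs is white; the brightness cutoff in isAllWhite() is an assumed value:

import java.awt.image.BufferedImage;

// Walk outward from the center column looking for an all-white line,
// staying within the center third of the image as suggested above.
static int findAllWhiteColumn(BufferedImage image)
{
    int width = image.getWidth();
    int mid = width / 2;
    for (int offset = 0; offset <= width / 6; offset++)
    {
        if (isAllWhite(image, mid - offset)) return mid - offset;
        if (isAllWhite(image, mid + offset)) return mid + offset;
    }
    return -1;   // reject: no all-white line in the center third
}

static boolean isAllWhite(BufferedImage image, int x)
{
    for (int y = 0; y < image.getHeight(); y++)
    {
        if ((image.getRGB(x, y) & 0xFF) < 200) return false;   // assumed brightness cutoff
    }
    return true;
}

The horizontal black-white-black validation would be one more single-line scan in the same style.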


I don't think there is any reason to do anything other than scanning each line, stopping when you get a white->black->white transition (no need to scan the entire line!). If you can make any guess about the position of the blobs, you might be able to refine it a little by picking a starting point in the middle of the image and then searching left and right from there. But I seriously doubt it would be worth the effort.

There is also no need to first run an edge detection algorithm on the image. Just scan the lines!

EDIT: Mr. Berna pointed out that this will not work with concave objects.
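A sketch of that single-row scan, assuming dark blobs on a white background and an assumed brightness cutoff; it returns the middle of the gap between the first two blobs on the row, or -1 if the row doesn't cross two blobs (and, per the edit above, it fails on concave blobs):

import java.awt.image.BufferedImage;

static int splitFromRow(BufferedImage image, int y)
{
    boolean inBlob = false;
    int gapStart = -1;
    for (int x = 0; x < image.getWidth(); x++)
    {
        boolean dark = (image.getRGB(x, y) & 0xFF) < 128;   // assumed cutoff
        if (dark && !inBlob)
        {
            if (gapStart >= 0) return (gapStart + x) / 2;   // middle of the gap
            inBlob = true;                                  // entered the first blob
        }
        else if (!dark && inBlob)
        {
            inBlob = false;
            gapStart = x;                                   // just left the first blob
        }
    }
    return -1;   // fewer than two blobs crossed on this row
}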

