
Separation of multipage TIFF with "CCITT T.6" compression is very slow

I need to separate multiframe TIFF files, and I use the following method:

public static Image[] GetFrames(Image sourceImage)
{
    Guid objGuid = sourceImage.FrameDimensionsList[0];
    FrameDimension objDimension = new FrameDimension(objGuid);
    int frameCount = sourceImage.GetFrameCount(objDimension);
    Image[] images = new Image[frameCount];
    for (int i = 0; i < frameCount; i++)
    {
        MemoryStream ms = new MemoryStream();
        // Select the i-th frame and re-save it as a single-frame TIFF.
        sourceImage.SelectActiveFrame(objDimension, i);
        sourceImage.Save(ms, ImageFormat.Tiff);
        // The stream must stay open for the lifetime of the resulting Image.
        images[i] = Image.FromStream(ms);
    }
    return images;
}

It works fine, but if the source image was encoded with CCITT T.6 compression, separating a 20-frame file takes up to 15 seconds on my 2.5 GHz CPU (one core is at 100% during the process).

When I afterwards save the images to a single file using the standard compression (LZW), separating that LZW file takes under one second.

Saving with CCITT compression also takes a very long time.
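For reference, a minimal sketch of how the frames can be re-saved into a single multipage TIFF with a chosen compression (this is the standard GDI+ Save/SaveAdd pattern, not necessarily my exact code; "output.tif" is a placeholder, images is the array returned by GetFrames, and the compression value can be swapped for EncoderValue.CompressionCCITT4):

// Write all frames into one multipage TIFF with LZW compression.
// Requires System.Drawing.Imaging and System.Linq.
ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.MimeType == "image/tiff");

EncoderParameters firstFrame = new EncoderParameters(2);
firstFrame.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
firstFrame.Param[1] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionLZW);
images[0].Save("output.tif", tiffCodec, firstFrame);

EncoderParameters nextFrame = new EncoderParameters(2);
nextFrame.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
nextFrame.Param[1] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionLZW);
for (int i = 1; i < images.Length; i++)
    images[0].SaveAdd(images[i], nextFrame);

EncoderParameters flush = new EncoderParameters(1);
flush.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
images[0].SaveAdd(flush);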

Is there a way to speed up the process?

Edit:

I have measured the execution times:

        sourceImage.SelectActiveFrame(objDimension, i);
        sourceImage.Save(ms, ImageFormat.Tiff);

These two calls each account for around 50% of the total processing time. Using one MemoryStream with an initial capacity big enough for all images results in no measurable speed gain. The Image.FromStream method takes barely any processing time.

I need the single frames because I need to process them (deskew, rotate, etc.).

If there is a completely different method than mine, I would be happy to hear it.


The first thing to do in your situation would be to measure.

Before we can figure out how to make it faster, and certainly before we make it far more complicated with optimizations, we need to know which parts are slow. Luckily you have a very short piece of code, so it would be pretty easy to throw in your own timing code; then we can take a more informed look.
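For example, a minimal timing sketch for that loop, using System.Diagnostics.Stopwatch (an illustration only, not your exact code):

// Time each stage of the loop separately to see where the time goes.
Stopwatch swSelect = new Stopwatch();
Stopwatch swSave = new Stopwatch();
Stopwatch swLoad = new Stopwatch();

for (int i = 0; i < frameCount; i++)
{
    MemoryStream ms = new MemoryStream();

    swSelect.Start();
    sourceImage.SelectActiveFrame(objDimension, i);
    swSelect.Stop();

    swSave.Start();
    sourceImage.Save(ms, ImageFormat.Tiff);
    swSave.Stop();

    swLoad.Start();
    images[i] = Image.FromStream(ms);
    swLoad.Stop();
}

Console.WriteLine("SelectActiveFrame: {0} ms", swSelect.ElapsedMilliseconds);
Console.WriteLine("Save:              {0} ms", swSave.ElapsedMilliseconds);
Console.WriteLine("FromStream:        {0} ms", swLoad.ElapsedMilliseconds);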

That being said, here are a few uninformed pieces of advice:

  1. I assume these images are fairly big and all have the same dimensions. Instead of making a new MemoryStream for each image and growing it dynamically, construct one MemoryStream that is big enough up front and reuse it for all of them; this decreases the amount of garbage the method creates and the overall number of allocations.
  2. You said you are pegging one of your cores at 100%, so we should probably try to use more than one core. You could split the work across multiple threads: one thread could save the frames into MemoryStreams while another loads them into new images, with the two communicating via a work queue (see the sketch after this list).
  3. You say you are splitting the file and later saving it again; maybe you can save directly instead of going through another Image object in the middle.
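A rough sketch of point 2, assuming .NET 4's Task and BlockingCollection are available (the method name GetFramesPipelined and the queue bound of 4 are made up for illustration). GDI+ Image objects are not thread-safe, so SelectActiveFrame/Save stay on one thread and only the Image.FromStream decoding is moved to another:

// Pipelined variant: one thread saves frames into streams,
// a second thread decodes them back into Image objects.
// Requires System.Collections.Concurrent and System.Threading.Tasks.
public static Image[] GetFramesPipelined(Image sourceImage)
{
    Guid objGuid = sourceImage.FrameDimensionsList[0];
    FrameDimension objDimension = new FrameDimension(objGuid);
    int frameCount = sourceImage.GetFrameCount(objDimension);
    Image[] images = new Image[frameCount];

    // Bounded work queue between the two threads keeps memory use in check.
    var queue = new BlockingCollection<KeyValuePair<int, MemoryStream>>(4);

    // Consumer: decode the saved frames into Image objects.
    Task consumer = Task.Factory.StartNew(() =>
    {
        foreach (var item in queue.GetConsumingEnumerable())
            images[item.Key] = Image.FromStream(item.Value);
    });

    // Producer: the GDI+ calls on sourceImage stay on this thread.
    for (int i = 0; i < frameCount; i++)
    {
        MemoryStream ms = new MemoryStream();
        sourceImage.SelectActiveFrame(objDimension, i);
        sourceImage.Save(ms, ImageFormat.Tiff);
        ms.Position = 0;
        queue.Add(new KeyValuePair<int, MemoryStream>(i, ms));
    }

    queue.CompleteAdding();
    consumer.Wait();
    return images;
}

Whether this buys much depends on what the measurements show: if SelectActiveFrame and Save dominate, overlapping them with the cheap Image.FromStream call will only gain a little.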


It seems to be a problem with GDI+ on Windows 7.

I ran a sample program on a much slower machine with Windows XP and got much better performance on compressed images than I did with Windows 7 (around 2-3 times faster).
