
Is there a fast alternative to creating a Texture2D from a Bitmap object in XNA?

I've looked around a lot and the only methods I've found for creating a Texture2D from a Bitmap are:

using (MemoryStream s = new MemoryStream())
{
   bmp.Save(s, System.Drawing.Imaging.ImageFormat.Png);
   s.Seek(0, SeekOrigin.Begin);
   Texture2D tx = Texture2D.FromFile(device, s);
}

and

Texture2D tx = new Texture2D(device, bmp.Width, bmp.Height,
                        0, TextureUsage.None, SurfaceFormat.Color);
tx.SetData<byte>(rgbValues, 0, rgbValues.Length, SetDataOptions.NoOverwrite);

Where rgbValues is a byte array containing the bitmap's pixel data in 32-bit ARGB format.

My question is, are there any faster approaches that I can try?

I am writing a map editor which has to read in custom-format images (map tiles) and convert them into Texture2D textures to display. The previous version of the editor, a C++ implementation, converted the images first into bitmaps and then into textures to be drawn using DirectX. I have attempted the same approach here; however, both of the above approaches are significantly too slow. Loading all of the textures required for a map into memory takes ~250 seconds with the first approach and ~110 seconds with the second on a reasonably specced computer (for comparison, the C++ code took approximately 5 seconds). If there were a method to edit the data of a texture directly (like the Bitmap class's LockBits method), I would be able to convert the custom-format images straight into a Texture2D and hopefully save processing time.

Any help would be very much appreciated.

Thanks


You want LockBits? You get LockBits.

In my implementation I passed in the GraphicsDevice from the caller so I could make this method generic and static.

// requires: using System.Drawing.Imaging; and using System.Runtime.InteropServices; for BitmapData and Marshal
public static Texture2D GetTexture2DFromBitmap(GraphicsDevice device, Bitmap bitmap)
{
    Texture2D tex = new Texture2D(device, bitmap.Width, bitmap.Height, 1, TextureUsage.None, SurfaceFormat.Color);

    // lock the bitmap's pixel data (this uses the bitmap's own PixelFormat, so it assumes a 32bpp bitmap)
    BitmapData data = bitmap.LockBits(new System.Drawing.Rectangle(0, 0, bitmap.Width, bitmap.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bitmap.PixelFormat);

    int bufferSize = data.Height * data.Stride;

    //create data buffer 
    byte[] bytes = new byte[bufferSize];    

    // copy bitmap data into buffer
    Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);

    // copy our buffer to the texture
    tex.SetData(bytes);

    // unlock the bitmap data
    bitmap.UnlockBits(data);

    return tex;
}
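
For context, a minimal usage sketch (the file path and the graphicsDevice variable are hypothetical placeholders, not from the original answer):

// Hypothetical usage: load a tile image with GDI+ once at load time and convert it.
using (System.Drawing.Bitmap bmp = new System.Drawing.Bitmap("tile.bmp"))
{
    Texture2D tileTexture = GetTexture2DFromBitmap(graphicsDevice, bmp);
    // draw tileTexture with SpriteBatch as usual; dispose it when the map unloads
}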


They changed the format from BGRA to RGBA in XNA 4.0, so that method gives strange colors; the red and blue channels need to be swapped. Here's a method I wrote that is super fast (it loads 1500 textures of 256x256 pixels in about 3 seconds).

    private Texture2D GetTexture(GraphicsDevice dev, System.Drawing.Bitmap bmp)
    {
        int[] imgData = new int[bmp.Width * bmp.Height];
        Texture2D texture = new Texture2D(dev, bmp.Width, bmp.Height);

        unsafe // requires the project to allow unsafe code
        {
            // lock bitmap (ReadWrite, because the channel swap below writes back into the bitmap's memory)
            System.Drawing.Imaging.BitmapData origdata = 
                bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat);

            uint* byteData = (uint*)origdata.Scan0;

            // Switch bgra -> rgba in place (note: this assumes a 32bpp bitmap and modifies the source bitmap's pixels)
            for (int i = 0; i < imgData.Length; i++)
            {
                byteData[i] = (byteData[i] & 0x000000ff) << 16 | (byteData[i] & 0x0000FF00) | (byteData[i] & 0x00FF0000) >> 16 | (byteData[i] & 0xFF000000);
            }

            // copy data
            System.Runtime.InteropServices.Marshal.Copy(origdata.Scan0, imgData, 0, bmp.Width * bmp.Height);

            byteData = null;

            // unlock bitmap
            bmp.UnlockBits(origdata);
        }

        texture.SetData(imgData);

        return texture;
    }


I found I had to specify PixelFormat.Format32bppArgb when using LockBits as you suggest, in order to grab webcam images.

        BitmapData bmd = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
            System.Drawing.Imaging.ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        int bufferSize = bmd.Height * bmd.Stride;
        //create data buffer 
        byte[] bytes = new byte[bufferSize];
        // copy bitmap data into buffer
        Marshal.Copy(bmd.Scan0, bytes, 0, bytes.Length);

        // copy our buffer to the texture
        Texture2D t2d = new Texture2D(_graphics.GraphicsDevice, bmp.Width, bmp.Height, 1, TextureUsage.None, SurfaceFormat.Color);
        t2d.SetData<byte>(bytes);
        // unlock the bitmap data
        bmp.UnlockBits(bmd);
        return t2d;


When I first read this question, I assumed it was SetData performance that was the bottleneck. However, reading the OP's comments on the top answer, he seems to be allocating a lot of large Texture2Ds.

As an alternative, consider keeping a pool of Texture2Ds that you allocate as needed and return to the pool when no longer needed.

The first time each texture file is needed (or in a "pre-load" at the start of your process, depending on where you want the delay), load each file into a byte[] array. (Store those byte[] arrays in an LRU cache - unless you are sure you have enough memory to keep them all around all the time.) Then when you need one of those textures, grab one of the pool textures (allocating a new one if none of the appropriate size is available) and SetData from your byte array - voilà, you have a texture. A rough sketch of this idea follows below.
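
A minimal sketch of that pool idea, assuming XNA 4.0-style Texture2D construction; the TexturePool class and its Acquire/Release methods are illustrative names, not from the original post:

// Requires: using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics;
// Illustrative sketch only: a pool of Texture2Ds keyed by size, reused instead of reconstructed.
class TexturePool
{
    private readonly GraphicsDevice device;
    private readonly Dictionary<Point, Stack<Texture2D>> free = new Dictionary<Point, Stack<Texture2D>>();

    public TexturePool(GraphicsDevice device)
    {
        this.device = device;
    }

    // Get a texture of the requested size, reusing a pooled one when possible,
    // then upload the cached byte[] pixel data into it.
    public Texture2D Acquire(int width, int height, byte[] pixelData)
    {
        Point key = new Point(width, height);
        Stack<Texture2D> stack;
        Texture2D tex = (free.TryGetValue(key, out stack) && stack.Count > 0)
            ? stack.Pop()
            : new Texture2D(device, width, height); // only hit the constructor when the pool is empty
        tex.SetData(pixelData);
        return tex;
    }

    // Hand a texture back to the pool instead of disposing it.
    public void Release(Texture2D tex)
    {
        Point key = new Point(tex.Width, tex.Height);
        Stack<Texture2D> stack;
        if (!free.TryGetValue(key, out stack))
            free[key] = stack = new Stack<Texture2D>();
        stack.Push(tex);
    }
}

The byte[] arrays for each tile would come from the LRU cache mentioned above, so the only per-request GPU work is the SetData upload.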

[I've left out important details, such as the need for a texture to be associated with a specific device - but you can determine any needs from the parameters to the methods you are calling. The point I am making is to minimize calls to the Texture2D constructor, especially if you have a lot of large textures.]

If you get really fancy, and are dealing with many different size textures, you can also apply LRU Cache principles to the pool. Specifically, track the total number of bytes of "free" objects held in your pool. If that total exceeds some threshold you set (maybe combined with the total number of "free" objects), then on next request, throw away oldest free pool items (of the wrong size, or other wrong parameters), to stay below your allowed threshold of "wasted" cache space.

BTW, you might do fine simply tracking the threshold and throwing away all free objects when the threshold is exceeded. The downside is a momentary hiccup the next time you allocate a bunch of new textures - which you can ameliorate if you have information about what sizes you should keep around. If that isn't good enough, then you need LRU.
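
A rough sketch of that simpler threshold variant, building on the hypothetical pool above (the freeBytes field and the budget value are assumptions for illustration):

// Illustrative only: track bytes held by idle pooled textures and dump them all past a budget.
// (Release would add tex.Width * tex.Height * 4 to freeBytes; Acquire would subtract it when reusing.)
private long freeBytes;                        // total bytes of textures sitting unused in the pool
private const long FreeByteBudget = 64 << 20;  // example budget: 64 MB of "wasted" cache space

private void TrimIfOverBudget()
{
    if (freeBytes <= FreeByteBudget)
        return;

    // Simple variant: dispose every idle texture and start over.
    foreach (Stack<Texture2D> stack in free.Values)
        while (stack.Count > 0)
            stack.Pop().Dispose();
    free.Clear();
    freeBytes = 0;
}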
