File Allocation Table Reading

I am working on a custom FAT file system explorer and things have been going quite well. However, I want to know if there is a better way to efficiently read/write the chain map. For large devices this can be incredibly resource intensive, and it can be very, very slow, especially when allocating space.

Here is how I read it:

    public void ReadChainMap()
    {
        chainMap = new uint[clusterCount];

        // Read the entire chain map from the device in one block.
        fx.Io.SeekTo(chainMapOffset);
        EndianIo io = new EndianIo(fx.Io.In.ReadBytes((int)chainMapSize), EndianType.BigEndian);
        io.Open();

        // Convert each big-endian entry (16-bit or 32-bit) into the uint array.
        for (int x = 0; x < clusterCount; x++)
            chainMap[x] = (chainMapEntrySize == 2) ?
                io.In.ReadUInt16() : io.In.ReadUInt32();

        io.Close();
    }

The chain can sometimes be hundreds of megabytes.

And this is how I write it. When allocations and modifications to the chainMap uint array have been made, it basically loops through that uint array and rewrites the entire chain map.

    public void WriteChainMap()
    {
        // Serialize the whole uint array into an in-memory buffer first.
        EndianIo io = new EndianIo(new byte[chainMapSize],
            EndianType.BigEndian);
        io.Open();
        io.SeekTo(0);

        for (int x = 0; x < clusterCount; x++)
            if (chainMapEntrySize == 2)
                io.Out.Write((ushort)chainMap[x]);
            else
                io.Out.Write(chainMap[x]);

        // Then write the entire chain map back to the device in one go.
        fx.Io.SeekTo(chainMapOffset);
        fx.Io.Out.Write(io.ToArray());
    }

I have been working on a cache system, but I would like some more ideas on how to make this better.


It seems like you could segment it somehow. Rather than reading/writing the whole thing, 'page' chunks in and out based on usage. Think about virtual memory systems for inspiration there.
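Purely as a sketch of that idea (none of this is your existing API; PagedChainMap, EntriesPerPage and the Stream parameter are made-up names, and a real version would have to loop on short reads and handle a partial last page): keep fixed-size pages of the table in a dictionary, load a page from the device the first time one of its clusters is touched, and track dirty pages so a flush only rewrites the chunks that actually changed.

    using System;
    using System.Collections.Generic;
    using System.IO;

    public class PagedChainMap
    {
        private const int EntriesPerPage = 64 * 1024;   // tune page size to taste
        private readonly Stream device;
        private readonly long chainMapOffset;
        private readonly int entrySize;                  // 2 or 4 bytes per entry
        private readonly Dictionary<int, uint[]> pages = new Dictionary<int, uint[]>();
        private readonly HashSet<int> dirtyPages = new HashSet<int>();

        public PagedChainMap(Stream device, long chainMapOffset, int entrySize)
        {
            this.device = device;
            this.chainMapOffset = chainMapOffset;
            this.entrySize = entrySize;
        }

        public uint this[long cluster]
        {
            get { return GetPage(cluster)[(int)(cluster % EntriesPerPage)]; }
            set
            {
                GetPage(cluster)[(int)(cluster % EntriesPerPage)] = value;
                dirtyPages.Add((int)(cluster / EntriesPerPage));  // remember which pages changed
            }
        }

        private uint[] GetPage(long cluster)
        {
            int pageIndex = (int)(cluster / EntriesPerPage);
            uint[] page;
            if (!pages.TryGetValue(pageIndex, out page))
            {
                // "Page fault": read just this chunk of the chain map from the device.
                byte[] raw = new byte[EntriesPerPage * entrySize];
                device.Seek(chainMapOffset + (long)pageIndex * raw.Length, SeekOrigin.Begin);
                device.Read(raw, 0, raw.Length);
                page = new uint[EntriesPerPage];
                for (int i = 0; i < EntriesPerPage; i++)
                    page[i] = ReadBigEndian(raw, i * entrySize);
                pages[pageIndex] = page;
            }
            return page;
        }

        public void Flush()
        {
            // Write back only the pages that were actually modified.
            foreach (int pageIndex in dirtyPages)
            {
                byte[] raw = new byte[EntriesPerPage * entrySize];
                for (int i = 0; i < EntriesPerPage; i++)
                    WriteBigEndian(raw, i * entrySize, pages[pageIndex][i]);
                device.Seek(chainMapOffset + (long)pageIndex * raw.Length, SeekOrigin.Begin);
                device.Write(raw, 0, raw.Length);
            }
            dirtyPages.Clear();
        }

        private uint ReadBigEndian(byte[] b, int o)
        {
            return entrySize == 2
                ? (uint)((b[o] << 8) | b[o + 1])
                : ((uint)b[o] << 24) | ((uint)b[o + 1] << 16) | ((uint)b[o + 2] << 8) | b[o + 3];
        }

        private void WriteBigEndian(byte[] b, int o, uint v)
        {
            if (entrySize == 2) { b[o] = (byte)(v >> 8); b[o + 1] = (byte)v; }
            else { b[o] = (byte)(v >> 24); b[o + 1] = (byte)(v >> 16); b[o + 2] = (byte)(v >> 8); b[o + 3] = (byte)v; }
        }
    }

An allocator that scans for free clusters would then only fault in the pages it actually walks, and a flush after allocating touches a handful of pages instead of hundreds of megabytes.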


I've done a lot of research and testing on binary serialization myself, and one thing that struck me was that you can read pretty big blocks quickly with today's hard drives, and that the lion's share of the time is actually spent converting bytes into integers, strings, etc.

So, one thing you could do is rearchitect to make use of all your cores: first read as big a block of data as possible, then use PLINQ or the Task Parallel Library to do the actual deserialization. You might even want to go further into a producer/consumer pattern. You'll only see gains for a large number of entries or large blocks of data, though; otherwise it's usually not worth parallelizing.
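As a rough illustration of that split (rawBytes, clusterCount and entrySize are stand-ins for your buffer and fields, not your EndianIo API): read the raw bytes sequentially in one go, then hand the byte-to-uint conversion to all cores, partitioned into contiguous ranges so per-item overhead doesn't eat the gains.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    static class ChainMapDeserializer
    {
        // rawBytes: the whole chain map read from disk in one sequential block.
        public static uint[] Deserialize(byte[] rawBytes, int clusterCount, int entrySize)
        {
            var chainMap = new uint[clusterCount];

            // Each worker converts a contiguous range of big-endian entries.
            Parallel.ForEach(Partitioner.Create(0, clusterCount), range =>
            {
                for (int x = range.Item1; x < range.Item2; x++)
                {
                    int offset = x * entrySize;
                    chainMap[x] = (entrySize == 2)
                        ? (uint)((rawBytes[offset] << 8) | rawBytes[offset + 1])
                        : ((uint)rawBytes[offset] << 24) | ((uint)rawBytes[offset + 1] << 16) |
                          ((uint)rawBytes[offset + 2] << 8) | rawBytes[offset + 3];
                }
            });

            return chainMap;
        }
    }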

Also, you have a seek statement; seeks are always expensive. Try using a memory-mapped file, or reading a big block right away, if possible and applicable.
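For the memory-mapped route, something like this could work, assuming the file system lives in an ordinary image file (imagePath and the constructor parameters are placeholders; mapping a raw device is a different story). The OS pages the mapped region in and out on demand, so individual entries can be read and updated in place without explicit seeks or rewriting the whole table:

    using System;
    using System.IO;
    using System.IO.MemoryMappedFiles;

    class MappedChainMap : IDisposable
    {
        private readonly MemoryMappedFile mmf;
        private readonly MemoryMappedViewAccessor view;
        private readonly int entrySize;   // 2 or 4 bytes per entry

        public MappedChainMap(string imagePath, long chainMapOffset, long chainMapSize, int entrySize)
        {
            this.entrySize = entrySize;
            mmf = MemoryMappedFile.CreateFromFile(imagePath, FileMode.Open);
            // Map only the chain map region, not the entire image.
            view = mmf.CreateViewAccessor(chainMapOffset, chainMapSize);
        }

        public uint Read(long cluster)
        {
            // Entries are big-endian on disk, so swap after reading.
            return entrySize == 2
                ? (uint)SwapBytes(view.ReadUInt16(cluster * entrySize))
                : SwapBytes(view.ReadUInt32(cluster * entrySize));
        }

        public void Write(long cluster, uint value)
        {
            if (entrySize == 2)
                view.Write(cluster * entrySize, SwapBytes((ushort)value));
            else
                view.Write(cluster * entrySize, SwapBytes(value));
        }

        private static ushort SwapBytes(ushort v)
        {
            return (ushort)((v << 8) | (v >> 8));
        }

        private static uint SwapBytes(uint v)
        {
            return (v >> 24) | ((v & 0x00FF0000u) >> 8) | ((v & 0x0000FF00u) << 8) | (v << 24);
        }

        public void Dispose()
        {
            view.Flush();     // push any dirty pages back to the image
            view.Dispose();
            mmf.Dispose();
        }
    }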
