Efficient pattern for processing fixed-width files
I have a case in which I need to read a flat file with close to 100,000 logical records. Each logical record is comprised of n x 128-character parts, i.e., Type A: 3 x 128, Type B: 4-5 x 128, etc., where the maximum possible n is 6.
The application has to read the file and process the records. The problem is that 'n' can be determined only after reading the first 52 characters of each n x 128 partition.
Could you please suggest any design patterns I can reuse, or any efficient algorithms to perform this?
Note: 1. Performance is an important criterion, as the application needs to process thousands of files like this every day. 2. The data is not separated by lines; it is one long string-like stream.
You could adopt a master-worker (or master-slave) pattern, wherein a master thread is responsible for reading the first 52 characters of data to determine the length of the record. The master then defers the actual work of reading and processing the record to a worker thread, and moves on to the next record, again reading only the first 52 characters. Each worker is responsible for (re)opening the file and processing a particular range of characters; the worker needs to be provided with this information. A sketch of this pattern follows the list of caveats below.
Since I haven't seen the structure of the file, I can only post a few possible limitations or concerns for an implementer to think about:
- An effective and performant implementation relies on the ability to provide a worker thread with a file offset and the length of the data that the worker should deal with. In simpler terms, the worker threads are expected to read the file in random-access mode, instead of having the master do the reading (which is serial). If you cannot perform random access, there isn't a lot you can do to optimize the master-worker pattern.
- Spawning a new worker thread per record is not recommended. Use a thread pool. This also means you can limit the number of open file descriptors based on the size of the pool.
- If the pool is exhausted, queue up further requests to process the character ranges. That way, the master can continue doing its work until the last record has been read.
- Dependencies across records will require you to serialize their processing. If each record can be processed on its own thread, without requiring results from other threads, then you should not encounter any difficulty in adopting this approach.
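Here is a minimal sketch of the pattern in Java, under a couple of stated assumptions: `decodeN` is a hypothetical placeholder (how 'n' is actually encoded in the 52-character header depends on your record format), the data is single-byte ASCII so characters map 1:1 to bytes, and the pool size of 4 is arbitrary:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MasterWorker {
    static final int PART = 128;   // each logical record is n x 128 characters
    static final int HEADER = 52;  // 'n' is derivable from the first 52 characters

    // Hypothetical: the real decoding rule depends on the actual record format.
    static int decodeN(String header) {
        return header.charAt(0) - '0';  // placeholder; assume 1 <= n <= 6
    }

    public static void main(String[] args) throws Exception {
        Path file = Paths.get(args[0]);
        long size = Files.size(file);
        // A fixed-size pool bounds both threads and open file descriptors;
        // its internal (unbounded) queue holds work when all workers are busy.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Master: serially scan headers only, dispatching each record's range.
        try (FileChannel master = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer head = ByteBuffer.allocate(HEADER);
            long offset = 0;
            while (offset < size) {
                head.clear();
                master.read(head, offset);  // positional read of the 52-char header
                head.flip();
                int n = decodeN(StandardCharsets.US_ASCII.decode(head).toString());
                int recordLen = n * PART;
                long recOffset = offset;
                pool.submit(() -> processRecord(file, recOffset, recordLen));
                offset += recordLen;        // jump straight to the next record
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Worker: (re)opens the file and reads its assigned range via random access.
    static void processRecord(Path file, long offset, int length) {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(length);
            ch.read(buf, offset);
            buf.flip();
            String record = StandardCharsets.US_ASCII.decode(buf).toString();
            // ... application-specific processing of 'record' goes here ...
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Opening a channel per record keeps the sketch simple; in practice you could share a single read-only `FileChannel` across workers, since the positional `read(buffer, position)` form does not touch the channel's shared position and `FileChannel` is safe for concurrent use.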
Unless you can change the format, you have to work around it.
You can create an index for each file, but you would have to read the file once to build the index (which saves having to rescan it on every subsequent pass); a sketch of such an indexing pass is below.
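A minimal sketch of the indexing pass, reusing the same hypothetical 52-character header rule as above (`decodeN` is a placeholder): one sequential, body-skipping scan yields (offset, length) pairs that later passes, or the worker pool above, can use to seek directly to any record.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class RecordIndexer {
    static final int PART = 128, HEADER = 52;

    // One entry per logical record: where it starts and how long it is.
    record Entry(long offset, int length) {}

    // Hypothetical header rule, as in the earlier sketch.
    static int decodeN(String header) { return header.charAt(0) - '0'; }

    static List<Entry> buildIndex(Path file) throws IOException {
        List<Entry> index = new ArrayList<>();
        long size = Files.size(file);
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer head = ByteBuffer.allocate(HEADER);
            long offset = 0;
            while (offset < size) {
                head.clear();
                ch.read(head, offset);  // read only the header, skip the body
                head.flip();
                int n = decodeN(StandardCharsets.US_ASCII.decode(head).toString());
                int len = n * PART;
                index.add(new Entry(offset, len));
                offset += len;
            }
        }
        return index;  // persist next to the file so later runs need not rescan
    }
}
```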