Should I change from buffered read to in-memory/tokenize on Android app for reading a 100,000 line file?

Currently I am loading a text file that contains 100,000 lines into a SortedMap using buffered reads. Should I abandon this approach and instead load the entire file into memory and then tokenize by line feeds into the SortedMap? Note, I have to parse each line to extract the key and create a per-key supporting object that I then insert into the SortedMap. The file is less than 4MB in size so that fits in line with Android's in-memory file size limitations. I am wondering if it's worth the effort to switch to the in-memory approach or if the speed-up gained just isn't worth it.
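For reference, the "load everything, then tokenize" alternative might look like the sketch below. The tab-separated `key<TAB>value` line format and the plain `String` value are assumptions standing in for the real per-key supporting object:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the in-memory approach: slurp the whole stream into a byte[],
// decode it once, then split on line feeds. A file under ~4MB fits a single
// buffer comfortably.
public class BulkLoader {
    public static SortedMap<String, String> load(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        String text = buf.toString("UTF-8");

        SortedMap<String, String> map = new TreeMap<>();
        for (String line : text.split("\n")) {
            int tab = line.indexOf('\t');
            if (tab > 0) {                        // skip blank/malformed lines
                map.put(line.substring(0, tab), line.substring(tab + 1));
            }
        }
        return map;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a FileInputStream over the real 100,000-line file.
        InputStream in = new ByteArrayInputStream("b\t2\na\t1\n".getBytes("UTF-8"));
        System.out.println(load(in)); // keys come back sorted: {a=1, b=2}
    }
}
```

Note that you still parse every line either way; the only work this saves is the per-line read calls.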

Also, would a HashMap be a lot faster than a SortedMap? I only need lookup-by-key and can live without the sorted keys if necessary, but it would be nice to have around. If there is a better structure than what I am using let me know and if you have any Android speed tips related to this issue please mention those too.

-- roschler


It's unclear to me why it would be simpler to load the entire file into memory and then tokenize. Reading a line at a time and parsing it that way is pretty simple, isn't it? While I'm all for loading things all at once when it genuinely makes things simpler, I can't see that it would be significantly easier here.
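To make the comparison concrete, the line-at-a-time version really is only a few lines. The `key<TAB>value` format and the `String` value type are assumptions; substitute your own parsing and supporting object:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal sketch of the buffered, line-at-a-time approach: read one line,
// parse out the key, insert, repeat.
public class LineLoader {
    public static SortedMap<String, String> load(BufferedReader reader) throws IOException {
        SortedMap<String, String> map = new TreeMap<>();
        String line;
        while ((line = reader.readLine()) != null) {
            int tab = line.indexOf('\t');
            if (tab < 0) continue;               // skip malformed lines
            map.put(line.substring(0, tab), line.substring(tab + 1));
        }
        return map;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a BufferedReader over a FileReader on the real file.
        BufferedReader in = new BufferedReader(new StringReader("b\t2\na\t1\n"));
        System.out.println(load(in)); // keys come back sorted: {a=1, b=2}
    }
}
```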

As for SortedMap vs HashMap - typically a HashMap lookup is O(1), assuming a decent hash function and few collisions, while a SortedMap lookup is O(log n). How expensive are comparisons compared with hash computations in your object model? With 100,000 elements you'll have around 16-17 comparisons per lookup. Ultimately, I wouldn't want to guess which will be faster - you should test it, as with all performance questions. Look at the memory usage too... I would expect a SortedMap to use less memory, but I could easily be wrong.
