Fastest way to read and parse an ASCII file from the SD card

I am working on an app that lets the user select an ASCII text file (typically from the SD card) containing the data required to render a shape in OpenGL. The file format looks something like this (there are some other lines containing less relevant data):

normal -1.000000e+000 -5.551115e-016 0.000000e+000
vertex 1.387779e-014 0.000000e+000 1.000000e+001
vertex 0.000000e+000 2.500000e+001 1.000000e+001
vertex 1.387779e-014 0.000000e+000 0.000000e+000

A typical file is around 5 MB and contains 120,000+ lines of data. I have tried several approaches to reading and parsing the file, and I can't get the read-and-parse time below about 90 seconds, which is obviously slower than I would like.

I have tried three approaches:

1) I read the file line by line and used String.split() with a space as the delimiter.
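For reference, approach 1 looked roughly like this (a simplified sketch; the `parseVertices` helper name is just for illustration, and it takes any `Reader` so the real code passed a `FileReader` over the file):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LineSplitParser {

    // Collect the three floats from every "vertex x y z" line.
    static List<Float> parseVertices(Reader r) throws IOException {
        List<Float> vertices = new ArrayList<Float>();
        BufferedReader br = new BufferedReader(r);
        String line;
        while ((line = br.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 4 && parts[0].equals("vertex")) {
                vertices.add(Float.parseFloat(parts[1]));
                vertices.add(Float.parseFloat(parts[2]));
                vertices.add(Float.parseFloat(parts[3]));
            }
        }
        return vertices;
    }

    public static void main(String[] args) throws IOException {
        // Two lines in the file format from the question.
        String sample = "normal -1.000000e+000 -5.551115e-016 0.000000e+000\n"
                      + "vertex 1.387779e-014 0.000000e+000 1.000000e+001\n";
        List<Float> v = parseVertices(new StringReader(sample));
        System.out.println(v.size());  // one vertex line -> 3 floats
    }
}
```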

2) I then tried using a StreamTokenizer to create a list of tokens (strings) for each word/number in the file. I then went through the list, filling ArrayLists with the data I needed (the numbers for the vertices in one list and the numbers for the normals in another). Again, this worked but was slow. Relevant blocks of code:

    File f = new File(Environment.getExternalStorageDirectory() + "/" + filename);
    int fLen = (int) f.length();
    Log.d("msg:", "File contains " + fLen + " characters");

    try {
        FileReader file = new FileReader(f);
        buf = new BufferedReader(file);
        FileParser st = new FileParser(buf);

        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            if (st.ttype == StreamTokenizer.TT_WORD) {
                if (st.sval.equals("vertex")) {
                    st.nextToken();
                    vertices.add(Double.valueOf(st.sval).floatValue());
                    st.nextToken();
                    vertices.add(Double.valueOf(st.sval).floatValue());
                    st.nextToken();
                    vertices.add(Double.valueOf(st.sval).floatValue());
                    nodeCount++;
                    indices.add((short) (nodeCount - 1));
                }
            }
        }

The StreamTokenizer is initialized as follows:

public class FileParser extends StreamTokenizer
{

  public FileParser(Reader r)
  {
    super(r);
    setup();
  }

  public void setup()
  {
    resetSyntax();
    eolIsSignificant(true);   
    lowerCaseMode(true);

    wordChars('!', '~');

    whitespaceChars(' ', ' ');
    whitespaceChars('\n', '\n');
    whitespaceChars('\r', '\r');
    whitespaceChars('\t', '\t');
  }// End setup
}

3) Based on an article I read about counting words in a text file, which said that a StreamTokenizer is slow compared to scanning a char buffer directly, I tried reading the file into a large char buffer (in chunks where necessary). I saw some improvement, but only about 20%. Relevant code:

        FileReader file = new FileReader(f);
        char pos = '+';
        char neg = '-';
        char dec = '.';
        float[] normalVector = new float[3];

        int bufSize = 500000;
        int offset = 0;
        int len, index, start, xyz = 0;
        String text;

        char[] buffer = new char[bufSize];
        while ((len = file.read(buffer, offset, bufSize - offset)) != -1) {
            index = 0;
            while (index < len + offset) {
                // Skip separator characters between tokens.
                while ((index < (len + offset)) && !Character.isLetterOrDigit(buffer[index])
                        && !(buffer[index] == pos) && !(buffer[index] == neg) && !(buffer[index] == dec)) {
                    index++;
                    // Near the end of a full buffer: shift the unread tail to the
                    // front so the next read() refills the space behind it.
                    if ((index > bufSize - 20) && (len + offset == bufSize)) {
                        offset = len + offset - index;
                        for (int i = 0; i < offset; i++) {
                            buffer[i] = buffer[index + i];
                        }
                        index = len + offset;
                    }
                }
                start = index;
                // Consume one token (a word or a number).
                while ((index < (len + offset)) && (Character.isLetterOrDigit(buffer[index])
                        || buffer[index] == pos || buffer[index] == neg || buffer[index] == dec)) {
                    index++;
                }
                if (start < (len + offset)) {
                    text = String.copyValueOf(buffer, start, index - start);
                    if (text.equals("vertex")) {
                        xyz = 1;
                    } else if (xyz > 0) {
                        vertices.add(Double.valueOf(text).floatValue());
                        xyz = xyz + 1;
                        if (xyz == 4) {
                            nodeCount++;
                            indices.add((short) (nodeCount - 1));
                            xyz = 0;
                        }
                    }
                }
            }
        }

There must be some sort of bottleneck that I am missing. Any ideas?
