Reading small CSV file in Java results in out of memory error

I have the following two implementations for reading CSV files; the CSV files in question are not that large (5 megabytes).

The first implementation uses opencsv, the second one uses StringTokenizer.

The first one resulted in an out of memory error, even when I raised the Java max heap size to 1G (-Xmx). The StringTokenizer implementation is not robust, but I have no choice, as I need to read the CSV file into memory.

I don't understand why the opencsv version would consume so much memory given the small size of the CSV file (it has 200k rows, but is only about 5 MB). What is the opencsv reader doing that would require so much memory? The StringTokenizer version breezes through it in no time.

Here's the error thrown by the opencsv implementation:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOfRange(Arrays.java:3209)
    at java.lang.String.<init>(String.java:215)
    at java.lang.StringBuilder.toString(StringBuilder.java:430)
    at au.com.bytecode.opencsv.CSVParser.parseLine(Unknown Source)
    at au.com.bytecode.opencsv.CSVParser.parseLineMulti(Unknown Source)
    at au.com.bytecode.opencsv.CSVReader.readNext(Unknown Source)

private List<String[]> parseCSV(File f) {
    List<String[]> res = new Vector<String[]>();
    CSVReader reader = null;
    try {
        reader = new CSVReader(new BufferedReader(new FileReader(f)));
        String[] nextLine;
        while ((nextLine = reader.readNext()) != null) {
            // trim each field before keeping the row
            for (int i = 0; i < nextLine.length; i++) {
                if (nextLine[i] != null) {
                    nextLine[i] = nextLine[i].trim();
                }
            }
            res.add(nextLine);
        }
    } catch (IOException exp) {
        exp.printStackTrace();
    } finally {
        if (reader != null) {
            try {
                reader.close();
            } catch (IOException ex) {
                Logger.getLogger(DataStream2.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
    return res;
}

private List<String[]> parseCSV(File f) {
    List<String[]> res = new Vector<String[]>();
    BufferedReader br = null;
    try {
        br = new BufferedReader(new FileReader(f));
        String line = null;
        while ((line = br.readLine()) != null) {
            StringTokenizer st = new StringTokenizer(",");
            String[] cur = new String[st.countTokens()];
            for (int i = 0; i < cur.length; i++) {
                cur[i] = st.nextToken().trim();
            }
            res.add(cur);
        }
    } catch (IOException exp) {
        exp.printStackTrace();
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException ex) {
                Logger.getLogger(DataStream2.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
    return res;
}


Unlikely perhaps, but I would guess that your input data may be triggering a bug in the opencsv library, maybe causing it to get stuck in a loop.

The download for opencsv provides source and libraries, so you should be able to debug the code yourself.

Since the stack trace isn't showing line numbers for the opencsv code, you would probably need to alter the javac target in the build script to include "debug=true", to enable debug compilation of the code.
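
As a quick check before stepping into the library source, you could count rows as they are read (a sketch based on the parseCSV method above; the 10,000-row interval is arbitrary). If the count climbs far beyond the expected ~200k rows before the OutOfMemoryError, the parser is likely stuck in a loop; if it stops short of 200k, the memory is going into the rows already accumulated.

int rowCount = 0;
while ((nextLine = reader.readNext()) != null) {
    rowCount++;
    // print progress every 10,000 rows to see how far the reader gets
    if (rowCount % 10000 == 0) {
        System.out.println("rows read so far: " + rowCount);
    }
    res.add(nextLine);
}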


It turns out that the StringTokenizer version has a bug (the line being read is never passed to the tokenizer: new StringTokenizer(",") should be new StringTokenizer(line, ",")), so once that is fixed, both versions run out of memory.
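
For reference, a minimal sketch of the corrected loop (the only change is constructing the tokenizer with the current line and the comma delimiter):

while ((line = br.readLine()) != null) {
    // tokenize the current line on commas instead of the literal string ","
    StringTokenizer st = new StringTokenizer(line, ",");
    String[] cur = new String[st.countTokens()];
    for (int i = 0; i < cur.length; i++) {
        cur[i] = st.nextToken().trim();
    }
    res.add(cur);
}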


Apache Solr uses commons-csv, so I would recommend giving it a try. Solr using it is a big endorsement.
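
A minimal sketch of reading the file with commons-csv, assuming commons-csv 1.x is on the classpath (the class name is just a placeholder):

import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class CommonsCsvExample {
    public static List<String[]> parseCSV(java.io.File f) throws IOException {
        List<String[]> res = new ArrayList<String[]>();
        Reader in = new FileReader(f);
        try {
            // CSVFormat.DEFAULT handles quoting and embedded commas for you
            CSVParser parser = CSVFormat.DEFAULT.parse(in);
            for (CSVRecord record : parser) {
                String[] row = new String[record.size()];
                for (int i = 0; i < row.length; i++) {
                    row[i] = record.get(i).trim();
                }
                res.add(row);
            }
        } finally {
            in.close();
        }
        return res;
    }
}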
