
Java JDBC clearBatch() and heap memory

I've noticed the following behavior.

I have a file that is about 3MB containing several thousand rows. I split each row and create prepared statements from it (about 250,000 statements in total).

What I do is:

  PreparedStatement ps = connection.prepareStatement(sql);
  int count = 0;
  for (String row : rows) {       // rows read from the file
      // ... split the row and set the statement parameters ...
      ps.addBatch();
      if (++count % 200 == 0) {   // every 200 rows
          ps.executeBatch();
          ps.clearBatch();
      }
  }

at the end

commit()

The memory usage climbs to around 70 MB, though there is no out-of-memory error. Is it possible to get the memory usage down while keeping the transactional behavior (if one insert fails, all fail)? I was able to lower the memory by committing together with each executeBatch and clearBatch, but that would leave a partial insert of the total set.


You could insert all rows into a temp table with the same structure and, if everything is fine, let the database copy them into the target table using: insert into target (select * from temp). If the import into the temp table fails, you haven't changed anything in your target table.

EDIT: fixed syntax
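
A minimal JDBC sketch of this staging approach. The staging table temp_rows and the columns col1/col2 are placeholders I'm assuming, not names from the question:

  import java.sql.*;
  import java.util.List;

  static void stageAndCopy(Connection conn, List<String[]> rows) throws SQLException {
      conn.setAutoCommit(false);
      try {
          try (PreparedStatement ps = conn.prepareStatement(
                  "INSERT INTO temp_rows (col1, col2) VALUES (?, ?)")) {
              int count = 0;
              for (String[] row : rows) {
                  ps.setString(1, row[0]);
                  ps.setString(2, row[1]);
                  ps.addBatch();
                  if (++count % 200 == 0) {   // keep batches small, as in the question
                      ps.executeBatch();
                      ps.clearBatch();
                  }
              }
              ps.executeBatch();              // flush the remaining rows
          }
          // One set-based copy; target is only touched if staging succeeded.
          try (Statement st = conn.createStatement()) {
              st.executeUpdate("INSERT INTO target SELECT * FROM temp_rows");
          }
          conn.commit();
      } catch (SQLException e) {
          conn.rollback();                    // nothing reaches target
          throw e;
      }
  }

This also keeps the driver-side batch buffer small, so the heap holds roughly a 200-row window at a time rather than all 250,000 statements.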


You could also use the JDBC 2.0 "batch processing" feature (a sketch of the full flow follows this list):

  1. Set up your database connection using connection.setAutoCommit(false)
  2. Add batches to your statement using statement.addBatch(sql_text_here)
  3. Once your batches are all loaded, execute them using statement.executeBatch()
  4. Commit using connection.commit()
  5. Catch exceptions and roll back as necessary using connection.rollback()
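
Put together, a minimal sketch of those five steps (the table name and SQL strings are placeholders):

  import java.sql.*;

  static void runBatch(Connection connection) throws SQLException {
      connection.setAutoCommit(false);                     // step 1
      try (Statement statement = connection.createStatement()) {
          statement.addBatch("INSERT INTO t VALUES (1)");  // step 2
          statement.addBatch("INSERT INTO t VALUES (2)");
          int[] updateCounts = statement.executeBatch();   // step 3
          connection.commit();                             // step 4
      } catch (SQLException e) {
          connection.rollback();                           // step 5
          throw e;
      }
  }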

More on exception handling for rollback... here is a typical rollback exception handler:

  catch( BatchUpdateException bue )
  {
    bError = true;                            // error flag declared earlier
    aiupdateCounts = bue.getUpdateCounts();   // counts for the statements that ran

    // BatchUpdateException is an SQLException; walk the whole chain
    SQLException SQLe = bue;
    while( SQLe != null )
    {
      // do exception stuff (log, connection.rollback(), ...)

      SQLe = SQLe.getNextException();
    }
  } // end BatchUpdateException catch
  catch( SQLException SQLe )
  {
    ...

  } // end SQLException catch
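
In the "do exception stuff" slot you would typically log the error and call connection.rollback(). Note that getUpdateCounts() returns counts only for the statements that executed before the failure; drivers that continue processing after an error instead return an entry per statement, using Statement.EXECUTE_FAILED for the ones that failed.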

Read up here: http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/JDBC20.html#JDBC2015

