Java JDBC clearBatch() and heap memory
I've noticed the following behavior.
I have a file of about 3 MB containing several thousand rows. I split each row and build prepared statements from it (about 250 000 statements).
What I do is:
prepareStatement
addBatch
for every 200 rows {
    executeBatch
    clearBatch()
}
at the end
commit()
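In code, the flow looks roughly like this (the JDBC URL, table name, columns, file name and delimiter below are placeholders, not my real ones):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchLoad {
    public static void main(String[] args) throws Exception {
        // table name, columns, JDBC URL, file name and delimiter are placeholders
        String sql = "INSERT INTO my_table (col1, col2) VALUES (?, ?)";
        try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");
             PreparedStatement ps = con.prepareStatement(sql);
             BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
            con.setAutoCommit(false);              // one transaction for the whole file
            String line;
            int count = 0;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split(";");  // split the row into values
                ps.setString(1, parts[0]);
                ps.setString(2, parts[1]);
                ps.addBatch();
                if (++count % 200 == 0) {          // flush every 200 rows
                    ps.executeBatch();
                    ps.clearBatch();
                }
            }
            ps.executeBatch();                     // send the last partial batch
            con.commit();                          // commit once at the very end
        }
    }
}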
The memory usage increases to around 70 MB, without an out-of-memory error. Is it possible to get the memory usage down while keeping the transactional behavior (if one fails, all fail)?
I was able to lower the memory usage by committing together with executeBatch and clearBatch, but this causes a partial insert of the total set.
You could insert all rows into a temp table with the same structure, and if everything is fine, let the database insert them into the target table using: insert into target (select * from temp)
In case the import into the temp table fails, you haven't changed anything in your target table.
EDIT: fixed syntax
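A rough sketch of that approach in JDBC, assuming a staging table named temp with the same structure as target (the URL and credentials are placeholders, and the exact insert-select syntax may depend on your database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TempTableCopy {
    public static void main(String[] args) throws Exception {
        // JDBC URL and credentials are placeholders; temp mirrors target's structure
        try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass")) {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                // ... load the file into temp here, e.g. with the batch loop from the question ...
                // if the load succeeded, copy everything over in a single statement
                st.executeUpdate("insert into target (select * from temp)");
                con.commit();
            } catch (Exception e) {
                con.rollback();    // target stays untouched if anything goes wrong
                throw e;
            }
        }
    }
}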
You could also use the JDBC 2.0 "batch processing" feature.
- Set up your connection with
connection.setAutoCommit(false)
- Add batches to your statement using
statement.addBatch(sql_text_here)
- Once your batches are all loaded, execute them with:
statement.executeBatch()
- Commit with
connection.commit()
- Catch exceptions and roll back as necessary using
connection.rollback()
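Putting those steps together, a minimal sketch could look like this (the URL, credentials and the target table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Jdbc2BatchDemo {
    public static void main(String[] args) throws Exception {
        // URL, credentials and table/column values are placeholders
        Connection connection = DriverManager.getConnection("jdbc:...", "user", "pass");
        try {
            connection.setAutoCommit(false);                          // take control of the transaction
            Statement statement = connection.createStatement();
            statement.addBatch("INSERT INTO target VALUES (1, 'a')"); // queue statements
            statement.addBatch("INSERT INTO target VALUES (2, 'b')");
            statement.executeBatch();                                 // send the whole batch
            connection.commit();                                      // make it permanent
        } catch (Exception e) {
            connection.rollback();                                    // undo everything on failure
            throw e;
        } finally {
            connection.close();
        }
    }
}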
More on exception handling for rollback... here is a typical rollback exception handler:
catch (BatchUpdateException bue)
{
    bError = true;                               // error flag declared elsewhere in the class
    aiupdateCounts = bue.getUpdateCounts();      // per-statement results for the failed batch (int[])
    // BatchUpdateException is an SQLException; walk the whole chain of causes
    SQLException SQLe = bue;
    while (SQLe != null)
    {
        // do exception stuff
        SQLe = SQLe.getNextException();
    }
} // end BatchUpdateException catch
catch (SQLException SQLe)
{
    ...
} // end SQLException catch
Read up here: http://java.sun.com/developer/onlineTraining/Database/JDBC20Intro/JDBC20.html#JDBC2015