
Java persistence memory leaks

I have 1M rows in a MySQL table and I am using the Java Persistence API. When I execute the following code, I get a Java heap error:

int counter = 0;
while (counter < 1000000) {
   java.util.Collection<MyEntityClass> data = myQuery.setFirstResult(counter)
       .setMaxResults(1000).getResultList();
   for(MyEntityClass obj : data){
       System.out.println(obj);
   }
   counter += 1000;
}


I'd wonder if JTable is really hanging onto all those old references when you click "next". I don't believe it's a persistence problem. Whatever backing data structure you have behind the JTable, I'd make sure that I cleared it before adding the next batch of records. That way the old values can be GC'd.

Your JTable shouldn't have a ResultSet. It'd be better to have a persistence tier that hid such details from clients. Make the query for a batch of values (not the entire data set), load it from the ResultSet into a data structure, and close the ResultSet and Statement in a finally block. You need to close those resources in the scope of the method in which they were created or you're asking for trouble.
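As a rough sketch of that idea (the DAO class, table, and column names here are made up for illustration; try-with-resources plays the role of the finally block):

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PersonDao {
    private final DataSource dataSource;

    public PersonDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Loads one page of rows into a plain list; the ResultSet and
    // Statement never escape this method, so they are closed here.
    public List<String> loadPage(int offset, int pageSize) throws SQLException {
        String sql = "SELECT name FROM person ORDER BY id LIMIT ? OFFSET ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, pageSize);
            ps.setInt(2, offset);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> names = new ArrayList<>();
                while (rs.next()) {
                    names.add(rs.getString("name"));
                }
                return names;
            }
        }
    }
}

The JTable model would then be populated from the returned list and cleared before the next batch is loaded.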


The problem is almost certainly that your resultSet object is caching the entire result set, which will eat up a lot of memory for such a large query.

Rather than resetting the index on the resultSet as you do at present (which doesn't clear the cached result), I would suggest you write a query that retrieves the appropriate rows for the given page and execute it each time the page changes. Throw away the old result set each time to ensure you're not caching anything.

Depending on the database you are using, you would use either the rownum pseudo-column (Oracle), the row_number() function (DB2, MSSQL), or the LIMIT x OFFSET y syntax (MySQL).
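On MySQL, for instance, a per-page query could look like the following sketch (the entity name and table name are placeholders; only one page of rows is ever held in memory per call):

import javax.persistence.EntityManager;
import java.util.List;

public class PageLoader {

    // Fetches exactly one page using MySQL's LIMIT/OFFSET, discarding the
    // previous result set entirely; nothing from earlier pages is cached.
    @SuppressWarnings("unchecked")
    public static List<MyEntityClass> fetchPage(EntityManager em, int page, int pageSize) {
        return em.createNativeQuery(
                "SELECT * FROM my_entity ORDER BY id LIMIT ? OFFSET ?",
                MyEntityClass.class)
            .setParameter(1, pageSize)
            .setParameter(2, page * pageSize)
            .getResultList();
    }
}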


  • Is this a Java EE or Java SE application?
  • How are you handling your entity manager?

The entity manager is typically associated with a persistence context. During a transaction, every entity you retrieve is placed in that context, which acts as a cache for all entities. When the transaction commits, JPA searches the context for modifications and writes the changes to the database.

This implies that if you retrieve 1 million rows you will have 1 million entities in your context, and they will not be eligible for garbage collection until you close the entity manager.

Since you are referring to a JTable, I can only assume this is a Java SE application. In that case you are in total control of the context, and there is a one-to-one relationship between the persistence context and the entity manager (which is not always the case in a Java EE environment).

This implies that you can either create an entity manager per request (i.e. transaction or conversation) or an entity manager for the entire life of the application.

If you are using the second approach, your context is never garbage collected, and the more objects you read from the database the bigger it becomes, until you eventually reach a memory problem like the one you describe.

I am not saying this is the cause of your problem, but it could certainly be a good lead on finding the root cause, don't you think?
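A minimal sketch of keeping the context small (assuming a Java SE setup with a hypothetical persistence unit named "my-unit" and the MyEntityClass from the question) is to clear the context after each batch; creating and closing a fresh entity manager per batch has the same effect:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class BatchReader {
    public static void main(String[] args) {
        EntityManagerFactory emf =
            Persistence.createEntityManagerFactory("my-unit"); // assumed unit name
        EntityManager em = emf.createEntityManager();
        try {
            int pageSize = 1000;
            for (int offset = 0; offset < 1000000; offset += pageSize) {
                List<MyEntityClass> data = em
                    .createQuery("SELECT e FROM MyEntityClass e", MyEntityClass.class)
                    .setFirstResult(offset)
                    .setMaxResults(pageSize)
                    .getResultList();
                for (MyEntityClass obj : data) {
                    System.out.println(obj);
                }
                // Detach everything read so far; otherwise the persistence
                // context keeps a reference to every entity ever loaded.
                em.clear();
            }
        } finally {
            em.close();
            emf.close();
        }
    }
}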


It looks like your resultSet is not eligible for GC in this particular case. Inspect your code to see where the reference to this resultSet is being held, causing the memory leak.

