
Why does this code run slower and slower?

I have an entity class called SrcFile and one of its columns is:

@NotNull
@Lob
private Byte[] data;

This SrcFile has a OneToOne relationship with Report entity.

From Report.java:

@OneToOne
private SrcFile srcFile;

Persisting a SrcFile entity works great.

srcFileHomeFacade.clearInstance();
SrcFile srcFile = srcFileHomeFacade.getInstance();
byte[] bArray = resource.getBytesForSource();
srcFile.setData(ReportFileResource.toObject(bArray));
System.out.println("````````````````srcFile data length: "+bArray.length);
srcFileHomeFacade.persist();

The problem comes when I persist Report.

I do:

report.setSrcFile(srcFile);
reportHomeFacade.persist();

and it works nicely, BUT after running this code multiple times it gets slower and slower (it even raises a GC overhead error), and after hours of investigation I discovered that this report.setSrcFile(srcFile) call is the problem.

Somehow report does not like referencing that amount of srcFile.data...

Do you see the cause?

If I comment out report.setSrcFile, everything works (except that SRCFILE_ID in the report table will be null, but it's just for testing). Please note that the length of data is about 100,000.

Note: If I do not persist any Report but only SrcFile entities, I have no problems.

UPDATE:

"Run slower and slower" explanations: this code is called for converting some pcls to pdfs so data contains the source of the pcl and it's different each time. After converting about 100 pcls the process goes slower and slower and with VM I discovered this byte[] arrays which takes a lot of MB memory. Again, it's definitely not a problem about IO but about this setSrcFile on report entity, VisualVM also indicates this.


I am still a bit unsure, but I suspect your problem relates to the way you persist things and handle the entity objects: if you discard your entities properly once they are persisted, GC should free up memory often enough to keep your system running smoothly. Especially if you flush() or commit() after each transaction, your memory usage should not keep building up much at all. In your case, though, it seems that all the entities are kept in memory even after they are no longer needed, so there has to be some reason why the resources are not released.
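
As a minimal sketch of that idea (assuming plain EntityManager access behind your facades, which the question does not show), detaching entities after each unit of work would look roughly like this:

import javax.persistence.EntityManager;

// Hypothetical helper: persist one conversion result and immediately
// detach it, so the persistence context cannot pin the large data arrays.
void persistAndForget(EntityManager em, SrcFile srcFile, Report report) {
    em.getTransaction().begin();
    em.persist(srcFile);
    report.setSrcFile(srcFile);
    em.persist(report);
    em.getTransaction().commit(); // flushes the pending changes
    em.clear();                   // drops the managed entities from memory
}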

Do you by any chance use a single for loop to iterate over a set of srcFiles and have all the persist() calls directly within it? If so, your problem might be related to scope. You could try extracting the contents of the loop into a new method so that all local variables can be reclaimed after each iteration.
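
For illustration, a hypothetical refactoring of such a loop (the loop itself is not shown in the question, so the resource type and collection name here are assumptions):

// Hypothetical driver loop: each iteration delegates to a method,
// so that iteration's locals become unreachable as soon as it returns.
for (ReportFileResource resource : resources) {
    convertOne(resource);
}

void convertOne(ReportFileResource resource) {
    srcFileHomeFacade.clearInstance();
    SrcFile srcFile = srcFileHomeFacade.getInstance();
    byte[] bArray = resource.getBytesForSource();
    srcFile.setData(ReportFileResource.toObject(bArray));
    srcFileHomeFacade.persist();
    // bArray and srcFile go out of scope here
}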

You might also be able to improve things by setting CascadeType.PERSIST or CascadeType.ALL and using a single persist() operation on the report to save both objects. FetchType.LAZY might help as well.
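
A sketch of that mapping in Report.java (an assumption of intent, not the question's actual code; also note that JPA providers may ignore FetchType.LAZY on a @OneToOne association unless bytecode enhancement is available):

@OneToOne(cascade = CascadeType.PERSIST, fetch = FetchType.LAZY)
private SrcFile srcFile;

With that in place, a single persist on the report side would save both entities:

report.setSrcFile(srcFile);
reportHomeFacade.persist(); // cascades the persist to srcFile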

In any case: Look for possible reasons why your program would keep all the entities directly available in memory, instead of saving them to the database and forgetting about them afterwards.


I remember that the native JDBC interface to BLOBs can be tricky, and a manual close() of vendor objects was required. If this step is missed, the BLOBs accumulate in the JDBC driver (i.e., in JVM memory) and eventually the JVM goes OOM.

I wonder if you need any vendor-specific properties or steps to close BLOBs after saving.
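
For example (a sketch only, and whether it is needed depends on your driver): standard JDBC 4 exposes Blob.free() to release driver-held resources explicitly. The table, column, and id below are made up for illustration:

import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;

// Sketch: writing a BLOB over plain JDBC and releasing it explicitly.
void writeBlob(Connection conn, byte[] data) throws Exception {
    Blob blob = conn.createBlob();
    blob.setBytes(1, data);
    try (PreparedStatement ps =
             conn.prepareStatement("UPDATE srcfile SET data = ? WHERE id = ?")) {
        ps.setBlob(1, blob);
        ps.setLong(2, 42L); // hypothetical id
        ps.executeUpdate();
    } finally {
        blob.free(); // release driver-side resources so they don't accumulate
    }
}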
