
How does Hibernate's batch-fetching algorithm work?

I found this description of the batch-fetching algorithm in "Manning - Java Persistence with Hibernate":

What is the real batch-fetching algorithm? (...) Imagine a batch size of 20 and a total number of 119 uninitialized proxies that have to be loaded in batches. At startup time, Hibernate reads the mapping metadata and creates 11 batch loaders internally. Each loader knows how many proxies it can initialize: 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1. The goal is to minimize the memory consumption for loader creation and to create enough loaders that every possible batch fetch can be produced. Another goal is to minimize the number of SQL SELECTs, obviously. To initialize 119 proxies Hibernate executes seven batches (you probably expected six, because 6 x 20 > 119). The batch loaders that are applied are five times 20, one time 10, and one time 9, automatically selected by Hibernate.

but I still don't understand how it works.

  1. Why 11 batch loaders?
  2. Why can the batch loaders initialize 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies?

If anybody could present a step-by-step algorithm ... :)


Having a small, fixed set of batch sizes helps avoid creating a large number of different prepared statements.

Each query (prepared statement) needs to be parsed and its execution plan needs to be calculated and cached by the database. This process may be much more expensive than the actual execution of the query for which the statement has already been cached.

A large number of different statements may also cause other statements to be evicted from the cache, thus degrading the overall application performance.

Also, since a hard parse is generally very expensive, it is usually faster to execute multiple cached prepared statements (even at the cost of multiple database round trips) than to parse and execute a new one. So, besides the obvious benefit of reducing the number of different statements, it may actually be faster to retrieve all 119 entities in seven batches using cached statements than to create and execute a single new statement containing all 119 ids.
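
To make the reuse concrete, here is a small illustration (the Item entity and its id column are made-up names) of the only statement texts needed for 119 proxies fetched as 5 x 20 + 1 x 10 + 1 x 9 — three distinct statements, of which the 20-placeholder one is simply re-executed five times:

    import java.util.Collections;

    public class StatementShapes {
        public static void main(String[] args) {
            // Hypothetical entity table "Item"; only the shape of the SQL matters here.
            String template = "select i.* from Item i where i.id in (%s)";
            for (int size : new int[] {20, 10, 9}) {
                String placeholders = String.join(", ", Collections.nCopies(size, "?"));
                System.out.println(String.format(template, placeholders));
            }
            // Each distinct IN-list size is one prepared statement to parse and cache;
            // the 20-placeholder statement is then executed five times, the others once.
        }
    }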

As already mentioned in the comments, Hibernate invokes the ArrayHelper.getBatchSizes method to determine the batch sizes for a given maximum batch size.
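
For illustration, here is a small sketch of a derivation that is consistent with the sizes quoted in the book (above 10 the size is roughly halved at each step; from 10 down it decreases by one). It is not the actual getBatchSizes source, just an approximation of the result it produces:

    import java.util.ArrayList;
    import java.util.List;

    public class BatchSizesSketch {

        // Not the actual Hibernate source: an approximation of the batch sizes
        // that ArrayHelper.getBatchSizes yields according to the book's description.
        static List<Integer> batchSizes(int maxBatchSize) {
            List<Integer> sizes = new ArrayList<>();
            int size = maxBatchSize;
            while (size > 10) {                 // above 10: roughly halve each step
                sizes.add(size);
                size = Math.max(size / 2, 10);  // but never skip past 10
            }
            for (; size >= 1; size--) {         // 10 and below: step down by one
                sizes.add(size);
            }
            return sizes;
        }

        public static void main(String[] args) {
            System.out.println(batchSizes(20)); // [20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] -> 11 loaders
            System.out.println(batchSizes(40)); // [40, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1] -> 12 loaders
        }
    }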


I couldn't find any information on the web about how Hibernate handles batch loading, but judging from the information you quoted, one could guess the following:

Why 11 batch loaders?

With a batch size of 20, if you want to minimize the number of loaders required for any combination of proxies, there are basically two options:

  • create a loader for every possible count of uninitialized proxies: 1, 2, 3, 4, 5, 6, 7, ... 20, 21, 22, 23, ... N (stupid!), OR
  • create a loader for every N between 1 and 9, and then create further loaders by repeatedly halving: batch_size, batch_size/2, batch_size/4, ... as long as the result is at least 10

Example: for a batch size of 40, you would end up with loaders for 40, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies.

  1. If you have 33 uninitialized proxies, you can use the following loaders: 20, 10, 3
  2. If you have 119 uninitialized proxies, you can use the following loaders: 40 (x2), 20, 10, 9 (see the sketch after this list)
  3. ...
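
Assuming the loaders are then chosen greedily (always the largest loader that still fits the remaining number of proxies — my assumption, not documented Hibernate behaviour), a small sketch reproduces the selections above:

    import java.util.ArrayList;
    import java.util.List;

    public class LoaderSelectionSketch {

        // Greedy selection: always take the largest available loader that does not
        // exceed the number of proxies still to be initialized.
        static List<Integer> pickLoaders(int proxies, int[] loaderSizesDescending) {
            List<Integer> picked = new ArrayList<>();
            int remaining = proxies;
            while (remaining > 0) {
                for (int size : loaderSizesDescending) {
                    if (size <= remaining) {
                        picked.add(size);
                        remaining -= size;
                        break;
                    }
                }
            }
            return picked;
        }

        public static void main(String[] args) {
            int[] sizes = {40, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1};  // batch size 40
            System.out.println(pickLoaders(33, sizes));   // [20, 10, 3]
            System.out.println(pickLoaders(119, sizes));  // [40, 40, 20, 10, 9]
        }
    }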

Why can the batch loaders initialize 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1 proxies?

I think the Hibernate team chose this as a balance between the number of loaders required to load a "common" number N of uninitialized proxies and memory consumption. They could have created a loader for every N between 1 and batch_size, but I suspect that the loaders have a considerable memory footprint, so this is a tradeoff. The algorithm could be something like this (educated guess):

  1. n = batch_size; while (n > 10)

    1.1. create loader(n); n = n / 2

  2. for n = 10 down to 1: create loader(n)
