PostgreSQL: BYTEA vs OID+Large Object?

I started an application with Hibernate 3.2 and PostgreSQL 8.4. Some of my byte[] fields were mapped as @Basic (= PG bytea) and others as @Lob (= PG Large Object). Why the inconsistency? Because I was a Hibernate noob.
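To make the distinction concrete, here is a minimal sketch of the two mappings, assuming a hypothetical Document entity (the class and field names are made up):

```java
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Lob;

@Entity
public class Document {

    @Id
    private Long id;

    // @Basic byte[] is stored inline in the row as a bytea column
    @Basic
    private byte[] thumbnail;

    // @Lob byte[] is stored as a Large Object; the column holds only its oid
    @Lob
    private byte[] content;
}
```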

Now, those fields are at most 4 KB (the average is 2-3 KB). The PostgreSQL documentation mentions that Large Objects are good when the fields are big, but I didn't see what 'big' meant.

I have upgraded to PostgreSQL 9.0 with Hibernate 3.6 and was forced to change the annotation to @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType"). That bug brought a potential compatibility issue to light, and I eventually found out that Large Objects are a pain to deal with compared to a normal field.
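For reference, the same hypothetical entity after that annotation change might look roughly like this (again a sketch, not the exact code from the application):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Type;

@Entity
public class Document {

    @Id
    private Long id;

    // Unchanged: still a bytea column
    private byte[] thumbnail;

    // Pinned explicitly so the field keeps being stored as a Large Object
    // (oid column) after the upgrade to Hibernate 3.6
    @Type(type = "org.hibernate.type.PrimitiveByteArrayBlobType")
    private byte[] content;
}
```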

So I am thinking of changing all of it to bytea. But I am concerned that bytea fields are encoded in hex, so there is some overhead in encoding and decoding, and this would hurt performance.

Are there good benchmarks of the performance of both? Has anybody made the switch and seen a difference?


Basically there are cases where each makes sense. bytea is simpler and generally preferred. The client libraries handle the decoding for you, so that's not an issue.

However, LOBs have some neat features, such as the ability to seek within them and treat the LOB as a byte stream instead of a byte array.
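As an illustration of that streaming access, here is a rough sketch using the PostgreSQL JDBC driver's LargeObject API (the connection details and the oid value are placeholders, and exact method signatures vary a bit between driver versions):

```java
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectSeekExample {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        // Large Object operations must run inside a transaction
        conn.setAutoCommit(false);

        LargeObjectManager lom = conn.unwrap(PGConnection.class).getLargeObjectAPI();

        long oid = 12345; // normally read from the oid column of the owning row
        LargeObject lo = lom.open(oid, LargeObjectManager.READ);
        try {
            // Jump into the middle of the object and read a 4 KB chunk
            // without pulling the whole object into memory
            lo.seek(1024);
            byte[] chunk = lo.read(4096);
            System.out.println("read " + chunk.length + " bytes");
        } finally {
            lo.close();
        }

        conn.commit();
        conn.close();
    }
}
```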

"Big" means "big enough that you don't want to send it to the client all at once." Technically, bytea is limited to 1 GB compressed and a large object is limited to 2 GB compressed, but really you hit the other limit first anyway. If it's big enough that you don't want it directly in your result set and you don't want to send it to the client all at once, use a LOB.


But I am concerned that bytea fields are encoded in Hex

bytea input can be in hex or escape format; that's your choice. Storage will be the same either way. As of version 9.0 the output default is hex, but you can change this by editing the parameter bytea_output.
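For example, the setting can be changed per session (or globally in postgresql.conf). A quick way to see it from Java, with placeholder connection details:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ByteaOutputExample {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        Statement stmt = conn.createStatement();

        // Switch this session back to the pre-9.0 escape format
        // ('hex' is the default from 9.0 onwards)
        stmt.execute("SET bytea_output = 'escape'");

        ResultSet rs = stmt.executeQuery("SHOW bytea_output");
        rs.next();
        System.out.println("bytea_output = " + rs.getString(1)); // prints "escape"

        rs.close();
        stmt.close();
        conn.close();
    }
}
```

Note that the setting only affects the text representation sent to clients; a driver reading the column with getBytes() gives you the raw bytes either way.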

I haven't seen any benchmarks.


tl;dr Use bytea unless you need "streaming."

bytea is a byte sequence and works like any other value.
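In JDBC terms that means plain setBytes/getBytes and nothing else; a minimal sketch (the table and column names are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ByteaExample {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");

        // Write: the whole byte[] is bound like any other parameter
        PreparedStatement insert =
                conn.prepareStatement("INSERT INTO document (id, thumbnail) VALUES (?, ?)");
        insert.setLong(1, 1L);
        insert.setBytes(2, new byte[] {1, 2, 3, 4});
        insert.executeUpdate();
        insert.close();

        // Read: the whole value comes back at once, like any other column
        PreparedStatement select =
                conn.prepareStatement("SELECT thumbnail FROM document WHERE id = ?");
        select.setLong(1, 1L);
        ResultSet rs = select.executeQuery();
        if (rs.next()) {
            byte[] thumbnail = rs.getBytes(1);
            System.out.println("thumbnail is " + thumbnail.length + " bytes");
        }
        rs.close();
        select.close();
        conn.close();
    }
}
```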

Large objects are split up into multiple rows. This allows you to seek, read, and write a large object like an OS file, and you can operate on it without loading the entire thing into memory at once.

However, large objects have downsides:

  1. There is only one large object table per database.

  2. Large objects aren't automatically removed when the "owning" record is deleted. (Technically, a large object can be referenced by several records.) See the lo_manage function in the lo module and the sketch after this list.

  3. Since there is only one table, large object permissions have to be handled record by record.

  4. Streaming is difficult and is less well supported by client drivers than simple bytea.

  5. It's part of the system schema, so you have little to no control over options like partitioning and tablespaces.
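Regarding item 2, here is a sketch of the cleanup trigger from the lo contrib module, issued through JDBC to stay consistent with the other examples (table and column names are made up; this assumes PostgreSQL 9.1+, where the module is installed with CREATE EXTENSION):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LoManageSetup {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret");
        Statement stmt = conn.createStatement();

        // The lo module provides the lo type and the lo_manage() trigger function
        stmt.execute("CREATE EXTENSION IF NOT EXISTS lo");

        // Table whose 'content' column references a large object by oid
        stmt.execute("CREATE TABLE document (id bigint PRIMARY KEY, content lo)");

        // Unlink the underlying large object whenever the owning row is
        // updated or deleted, so it isn't left orphaned
        stmt.execute("CREATE TRIGGER t_document_content "
                + "BEFORE UPDATE OR DELETE ON document "
                + "FOR EACH ROW EXECUTE PROCEDURE lo_manage(content)");

        stmt.close();
        conn.close();
    }
}
```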

In terms of capacity, there isn't a huge difference: bytea is limited to 1 GB, large objects to 2 GB. If 1 GB is too limiting, 2 GB probably is as well.

I venture to guess that 93% of real-world uses of large objects would be better served by using bytea.


I don't have a comparison of large objects and bytea handy, but note that the switch to the hex output format in 9.0 was made partly because it is faster than the previous custom escape encoding. As far as text encodings of binary data go, you probably won't get much faster than what is there now.

If that is not good enough for you, consider using the binary protocol between PostgreSQL client and server. Then you basically get the data straight from disk, much like with large objects. I don't know whether the PostgreSQL JDBC driver supports that yet, but a quick search suggests it doesn't.
