Why are result sets so large? [closed]
Why are result sets from databases so large? I typically get result sets of around a million rows, each of which contains a couple of numerics, a varchar(75), a bigint... probably not much more than 100 bytes per row. And yet it takes up 6 GB! Is this typical behavior? My stack is Postgres + JDBC + Spring; I don't have any experience with other technologies.
The java.sql.ResultSet itself will typically only have a couple dozen rows of data buffered at any time (this is configurable). Now if you load all those rows into memory as Java objects, then you'll see large amounts of memory used. A String, for instance, has two bytes per char, two object headers, an offset, two lengths, and a cached hash value. BigInteger is similar. It all adds up. Use a profiler.
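To make that overhead concrete, here is a rough back-of-the-envelope sketch of a String's retained size. It assumes the pre-Java-9 layout (a String object wrapping a separate char[]) on a 64-bit JVM with compressed oops; every constant here is an approximation, and real numbers vary by JVM version and flags:

```java
public class StringFootprint {
    // Rough estimate of the retained size of one String, assuming the
    // pre-Java-9 layout: a String object (header, char[] reference,
    // cached hash) plus a separate char[] at 2 bytes per char.
    // All constants are typical 64-bit/compressed-oops approximations.
    static long estimateStringBytes(int chars) {
        long header = 12;                       // object header
        long fields = 4 + 4;                    // char[] reference + cached hash
        long stringObj = align(header + fields);
        long arrayObj = align(16 + 2L * chars); // array header + 2 bytes/char
        return stringObj + arrayObj;
    }

    static long align(long n) {                 // objects are padded to 8 bytes
        return (n + 7) / 8 * 8;
    }

    public static void main(String[] args) {
        // A varchar(75) fetched as a 75-char String:
        System.out.println(estimateStringBytes(75)); // ~192 bytes for 75 chars of data
    }
}
```

So a column worth 75 bytes on disk can easily cost well over twice that as a heap object, before you even count the row wrapper holding it.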
You might have better luck with a CachedRowSet. That, or don't load the entire result set at once.
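A sketch of both options, assuming the PostgreSQL JDBC driver (table and column names here are hypothetical placeholders, not from the original post). Note that the PostgreSQL driver only streams with a cursor when autocommit is off and a fetch size is set:

```java
import java.sql.*;
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

public class StreamingQuery {
    // Option 1: stream the result set a few rows at a time instead of
    // materializing a million rows as Java objects.
    public static void streamRows(Connection conn) throws SQLException {
        conn.setAutoCommit(false);       // PostgreSQL needs this for cursor fetch
        try (Statement st = conn.createStatement()) {
            st.setFetchSize(50);         // buffer ~50 rows at a time
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, name FROM big_table")) {  // placeholder query
                while (rs.next()) {
                    process(rs.getLong(1), rs.getString(2));
                }
            }
        }
    }

    static void process(long id, String name) { /* handle one row */ }

    public static void main(String[] args) throws SQLException {
        // Option 2: a CachedRowSet (disconnected, in-memory rows).
        // It can be created without a live connection:
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
        System.out.println(crs != null);
    }
}
```

Streaming keeps memory flat regardless of result size; a CachedRowSet still holds everything in memory, but in a more compact, disconnected form than a graph of mapped entities.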
We're gonna need to see the query to tell why it's so big. Maybe you're not joining tables right. (I wish I could post this as a comment...)