String aggregation outputting large CSV rows overflows `replace`
I am selecting records from a table and returning the results as a single row CSV list.
I am using Oracle 10 Release 2. Here is my query:
SELECT column1,
       rtrim(
         replace(
           replace(
             xmlagg(xmlelement("x", column2)).getclobval(),
             '<x>', NULL),
           '</x>', ','),
         ',')
FROM table1
GROUP BY column1;
My problem is that when the aggregated string is longer than 4000 characters, the query fails in the REPLACE function. I can work around this by removing the REPLACE calls and doing the replacement in my programming language instead.
I would like to amend the query so that I can specify how many elements of the CSV list are returned per row. For example, where the query above would return a single row containing 10,000 elements in the CSV list (ignoring the REPLACE failure), the amended query would return 10 rows of 1,000 elements each (the number of rows should be adjustable). So where the original query would return:
1234,1234,1234,1234,1234,5678,3456,12344,654677,
the amended query would return something like:
1234,1234,1234,1234,
1234,5678,3456,12344,
654677
I cannot use the COLLECT function, but anything else should be fine; plain SQL if possible.
You can use an analytic function to assign the rows to arbitrary buckets, and group by that.
SELECT column1,
       rtrim(
         replace(
           replace(
             xmlagg(xmlelement("x", column2)).getclobval(),
             '<x>', NULL),
           '</x>', ','),
         ',')
FROM (SELECT column1, column2,
             NTILE(10) OVER (PARTITION BY column1 ORDER BY column2) bucket
      FROM table1)
GROUP BY column1, bucket;
The argument to NTILE sets the number of buckets, so you can vary it as needed.
To get a fixed number of values per bucket instead of a fixed number of buckets, I think you could replace the NTILE expression with TRUNC((ROW_NUMBER() OVER (PARTITION BY column1 ORDER BY column2) - 1) / 1000), where 1000 is the number of values per bucket.
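Putting that together, the fixed-size variant would look something like this. This is just a sketch reusing the table and column names from the question; I haven't run it against 10gR2, so treat the exact syntax as unverified:

```sql
-- Sketch: fixed number of values per output row, assuming the same
-- table1(column1, column2) as in the question.
SELECT column1,
       rtrim(
         replace(
           replace(
             xmlagg(xmlelement("x", column2)).getclobval(),
             '<x>', NULL),
           '</x>', ','),
         ',')
FROM (SELECT column1, column2,
             -- Each run of 1000 consecutive column2 values (per column1)
             -- gets the same bucket number: 0, 1, 2, ...
             TRUNC((ROW_NUMBER() OVER (PARTITION BY column1
                                       ORDER BY column2) - 1) / 1000) bucket
      FROM table1)
GROUP BY column1, bucket;
```

The ROW_NUMBER approach keeps each output row under the 4000-character limit as long as 1000 values (plus separators) fit, whereas NTILE only guarantees a fixed number of rows, so its bucket sizes grow with the data.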