How can I get a hash of an entire table in postgresql?
I would like a fairly efficient way to condense an entire table to a hash value.
I have some tools that generate entire data tables, which can then be used to generate further tables, and so on. I'm trying to implement a simplistic build system to coordinate build runs and avoid repeating work. I want to be able to record hashes of the input tables so that I can later check whether they have changed. Building a table takes minutes or hours, so spending several seconds building hashes is acceptable.
A hack I have used is to just pipe the output of pg_dump to md5sum, but that requires transferring the entire table dump over the network to hash it on the local box. Ideally I'd like to produce the hash on the database server.
Finding the hash value of a row in postgresql gives me a way to calculate a hash for a row at a time, which could then be combined somehow.
Any tips would be greatly appreciated.
Edit to post what I ended up with: tinychen's answer didn't work for me directly, because I couldn't use 'plpgsql' apparently. When I implemented the function in SQL instead, it worked, but was very inefficient for large tables. So instead of concatenating all the row hashes and then hashing that, I switched to using a "rolling hash", where the previous hash is concatenated with the text representation of a row and then that is hashed to produce the next hash. This was much better; apparently running md5 on short strings millions of extra times is better than concatenating short strings millions of times.
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
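A usage sketch (assuming a hypothetical table foo with primary key id; putting the order by inside the aggregate call keeps the input order deterministic, and requires PostgreSQL 9.0 or later):
select zz_hashagg(CAST((f.*) AS text) order by id) from foo f;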
I know this is an old question, but here is my solution:
SELECT
    md5(CAST(array_agg(f.* ORDER BY id) AS text)) /* id is the table's primary key (to avoid nondeterministic ordering) */
FROM
    foo f;
SELECT md5(array_agg(md5((t.*)::varchar))::varchar)
FROM (
    SELECT *
    FROM my_table
    ORDER BY 1
) AS t;
Just create an aggregate function like this to hash a whole table:
create function pg_concat( text, text ) returns text as '
begin
    if $1 isnull then
        return $2;
    else
        return $1 || $2;
    end if;
end;' language 'plpgsql';

create function pg_concat_fin(text) returns text as '
begin
    return $1;
end;' language 'plpgsql';

create aggregate pg_concat (
    basetype = text,
    sfunc = pg_concat,
    stype = text,
    finalfunc = pg_concat_fin);
Then you can use the pg_concat aggregate to calculate the table's hash value:
select md5(pg_concat(md5(CAST((f.*) AS text)))) from f order by id;
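Note that, as another answer here points out, a trailing order by does not guarantee the order in which the aggregate consumes its input; on PostgreSQL 9.0 and later the order by can instead go inside the aggregate call, e.g.:
select md5(pg_concat(md5(CAST((f.*) AS text)) order by id)) from f;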
I had a similar requirement, to use when testing a specialized table replication solution.
@Ben's rolling MD5 solution (which he appended to the question) seems quite efficient, but there were a couple of traps which tripped me up.
The first (mentioned in some of the other answers) is that you need to ensure that the aggregate is performed in a known order over the table you are checking. The syntax for that is, e.g.:
select zz_hashagg(CAST((example.*) AS text) order by id) from example;
Note the order by is inside the aggregate.
The second is that using CAST((example.*) AS text) will not give identical results for two tables with the same column contents unless the columns were created in the same order. In my case that was not guaranteed, so to get a true comparison I had to list the columns separately, for example:
select zz_hashagg(CAST((example.id, example.a, example.c) AS text) order by id) from example;
For completeness (in case a subsequent edit should remove it) here is the definition of the zz_hashagg from @Ben's question:
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
Great answers.
In case someone needs to avoid aggregate functions while still supporting tables several GiB in size, the following function carries only a small performance penalty, compared to the best answers here, on the largest tables.
CREATE OR REPLACE FUNCTION table_md5(
      table_name CHARACTER VARYING
    , VARIADIC order_key_columns CHARACTER VARYING [])
RETURNS CHARACTER VARYING AS $$
DECLARE
    order_key_columns_list CHARACTER VARYING;
    query CHARACTER VARYING;
    first BOOLEAN;
    i SMALLINT;
    working_cursor REFCURSOR;
    working_row_md5 CHARACTER VARYING;
    partial_md5_so_far CHARACTER VARYING;
BEGIN
    -- Build a comma-separated list of the sort-key columns.
    order_key_columns_list := '';
    first := TRUE;
    FOR i IN 1..array_length(order_key_columns, 1) LOOP
        IF first THEN
            first := FALSE;
        ELSE
            order_key_columns_list := order_key_columns_list || ', ';
        END IF;
        order_key_columns_list := order_key_columns_list || order_key_columns[i];
    END LOOP;

    -- Hash each row, visiting rows in a deterministic order.
    query := (
        'SELECT ' ||
            'md5(CAST(t.* AS TEXT)) ' ||
        'FROM (' ||
            'SELECT * FROM ' || table_name || ' ' ||
            'ORDER BY ' || order_key_columns_list ||
        ') t');

    OPEN working_cursor FOR EXECUTE (query);
    -- RAISE NOTICE 'opened cursor for query: ''%''', query;

    -- Fold the row hashes together one at a time (a rolling hash),
    -- so no large concatenated string is ever held in memory.
    first := TRUE;
    LOOP
        FETCH working_cursor INTO working_row_md5;
        EXIT WHEN NOT FOUND;
        IF first THEN
            first := FALSE;
            SELECT working_row_md5 INTO partial_md5_so_far;
        ELSE
            SELECT md5(working_row_md5 || partial_md5_so_far)
                INTO partial_md5_so_far;
        END IF;
        -- RAISE NOTICE 'partial md5 so far: %', partial_md5_so_far;
    END LOOP;
    -- RAISE NOTICE 'final md5: %', partial_md5_so_far;

    RETURN partial_md5_so_far::CHARACTER VARYING;
END;
$$ LANGUAGE plpgsql;
Used as:
SELECT table_md5(
'table_name', 'sorting_col_0', 'sorting_col_1', ..., 'sorting_col_n'
);
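For example, for a hypothetical foo table sorted by its primary key id:
SELECT table_md5('foo', 'id');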
As for the algorithm, you could XOR all the individual MD5 hashes, or concatenate them and hash the concatenation.
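A minimal sketch of the XOR variant (hypothetical names; XOR is order-independent, so no ORDER BY is needed, but note that two identical rows cancel each other out):
create function xor_md5(bit, text) returns bit as
    -- XOR the running 128-bit state with this row's md5 digest;
    -- the ('x' || hex)::bit(128) cast decodes the hex digest into bits
    'select coalesce($1, repeat(''0'', 128)::bit(128)) # (''x'' || md5($2))::bit(128);'
    language sql;

create aggregate md5_xor_agg(text) (
    sfunc = xor_md5,
    stype = bit);

-- usage, e.g.: select md5_xor_agg(CAST((f.*) AS text)) from foo f;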
If you want to do this completely server-side you probably have to create your own aggregate function, which you could then call:
select my_table_hash(md5(CAST((f.*) AS text))) from f order by id;
As an intermediate step, instead of copying the whole table to the client, you could just select the MD5 results for all rows, and run those through md5sum.
Either way you need to establish a fixed sort order, otherwise you might end up with different checksums even for the same data.
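A sketch of that intermediate step (assuming a hypothetical table foo with primary key id; the psql flags -A and -t print one bare hash per line for md5sum to consume):
-- run from the shell, e.g.:
--   psql -At -d mydb -c 'select md5(CAST(f.* AS text)) from foo f order by id' | md5sum
select md5(CAST(f.* AS text)) from foo f order by id;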
Tomas Greif's solution is nice, but for a big enough table an invalid memory alloc request size error will occur. It can be overcome with 2 options.
Option 1. Without batches
If the table is not too big, use string_agg and the bytea data type.
select
    md5(string_agg(c.row_hash, '' order by c.row_hash)) table_hash
from
    foo f
    -- decode each row's md5 hex digest into a compact 16-byte bytea value
    cross join lateral (select ('\x' || md5(f::text))::bytea row_hash) c
;
Option 2. With batches
If the query in the previous option ends with an error like
SQL Error [54000]: ERROR: out of memory Detail: Cannot enlarge string buffer containing 1073741808 bytes by 16 more bytes.
then the row count limit is 1073741808 / 16 = 67108863 (each bytea row hash is 16 bytes), and the table should be divided into batches.
select
    md5(string_agg(t.batch_hash, '' order by t.batch_hash)) table_hash
from (
    select
        md5(string_agg(c.row_hash, '' order by c.row_hash)) batch_hash
    from
        foo f
        cross join lateral (select ('\x' || md5(f::text))::bytea row_hash) c
    group by substring(row_hash for 3)
) t
;
Here 3 in the group by clause groups rows by the first 3 bytes of their hash, dividing the row hashes into 16 777 216 batches (2 would give 65 536 batches, 1 would give 256). Other batching methods (e.g. strictly ntile) will also work, as sketched below.
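A minimal sketch of the ntile variant (hypothetical batch count of 1024; assumes the same foo table as above):
select
    md5(string_agg(b.batch_hash, '' order by b.batch_hash)) table_hash
from (
    select
        md5(string_agg(r.row_hash, '' order by r.row_hash)) batch_hash
    from (
        -- ntile numbers the ordered row hashes into 1024 equal-sized batches
        select row_hash, ntile(1024) over (order by row_hash) batch_no
        from (select ('\x' || md5(f::text))::bytea row_hash from foo f) h
    ) r
    group by r.batch_no
) b
;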
P.S. If you need to compare two tables this post may help.