
MySQL: Display skipped records after LOAD DATA INFILE?

In MySQL I've used LOAD DATA LOCAL INFILE which works fine. At the end I get a message like:

Records: 460377  Deleted: 0  Skipped: 145280  Warnings: 0

How can I view the line numbers of the records that were skipped? SHOW WARNINGS doesn't work:

mysql> show warnings;
Empty set (0.00 sec)


If there were no warnings but some rows were skipped, it may mean that the primary key was duplicated for the skipped rows.
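One quick sanity check (a sketch; my_table stands in for the table you loaded into, and it assumes the table was empty before the load) is to compare the table's row count with the numbers LOAD DATA reported:

-- Rows actually loaded should equal Records - Deleted - Skipped,
-- i.e. 460377 - 0 - 145280 = 315097 for the output above.
SELECT COUNT(*) AS rows_loaded FROM my_table;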

The easiest way to find the duplicates is to open the local file in Excel and run a duplicate removal on the primary key column to see if there are any.


You could create a temporary table with the primary key removed so that it allows duplicates, and then load the data into it.
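A minimal sketch of that approach (my_table_check, my_table and data.txt are placeholder names; if the key column is AUTO_INCREMENT you may need to adjust it first, since MySQL requires an auto-increment column to be part of a key):

-- Copy the table structure, drop the primary key so duplicate rows are
-- accepted, then load the same file into the copy.
CREATE TEMPORARY TABLE my_table_check LIKE my_table;
ALTER TABLE my_table_check DROP PRIMARY KEY;
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE my_table_check;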

Construct a SQL statement like

SELECT COUNT(column_with_duplicates) AS num_duplicates, column_with_duplicates
FROM my_table_check
GROUP BY column_with_duplicates
HAVING num_duplicates > 1;

This will show you the key values that occur more than once. Another way is to dump out the rows that were actually inserted into the table and run a file difference command against the original to see which ones weren't included.


For anyone stumbling onto this:

Another option would be to do a SELECT INTO and diff the two files. For example:

LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE my_table FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '\"' LINES TERMINATED BY '\r' IGNORE 1 LINES (title, `desc`, is_viewable);

SELECT title, `desc`, is_viewable INTO OUTFILE 'data_rows.txt' FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\r' FROM my_table;

Then compare data.txt and data_rows.txt with FileMerge (on Mac OS X) to see the differences. If you get an access denied error when doing the SELECT ... INTO OUTFILE, make sure you run:

GRANT FILE ON *.* TO 'mysql_user'@'localhost';
FLUSH PRIVILEGES;

Run these as the root user in the mysql client.
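To confirm the privilege took effect, you can check the account's grants (assuming the same 'mysql_user'@'localhost' account as above):

SHOW GRANTS FOR 'mysql_user'@'localhost';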


Records are skipped when a database constraint is not met. Check for common ones like the following (a sketch for the duplicate-key case follows the list):

  • Primary key duplication
  • Unique key violations
  • Partition constraints
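For the duplicate-key case, a minimal sketch (my_table, data.txt and the column list are placeholders, not from the question) is to let the file's values overwrite the existing rows rather than being skipped:

-- The REPLACE modifier makes rows that duplicate an existing primary or
-- unique key value replace the old row instead of being skipped.
LOAD DATA LOCAL INFILE 'data.txt'
REPLACE INTO TABLE my_table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(id, title, is_viewable);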


I use the bash command line to find the duplicate rows in the CSV file:

# print the key columns, count occurrences, keep only counts greater than 1
awk -F, '{print $1","$2}' /my/source/file.csv | sort -n | uniq -c | grep -v "^ *1 "

where the first two columns form the primary key.
