I have two physical servers. I copied some databases from server1 to server2 using this command: server1$ mysqldump -u root -q -p --delete-master-logs --flush-logs --extended-insert --master-data=1 --sing
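A common way to do this kind of server-to-server copy (a sketch, not the asker's exact setup: the host, user, and database names are assumptions, and credentials are assumed to live in each host's `~/.my.cnf`) is to stream the dump straight into server2 instead of writing an intermediate file:

```shell
# Stream the dump from server1 directly into server2 over ssh.
# "mydb" and "user@server2" are placeholder names.
# --single-transaction gives a consistent snapshot for InnoDB tables
# without locking them for the whole dump.
mysqldump --single-transaction --databases mydb \
  | ssh user@server2 mysql
```

The `--databases` flag makes the dump include the `CREATE DATABASE`/`USE` statements, so the receiving `mysql` client needs no database argument.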
If I export a database with phpMyAdmin, its size is 18 MB. If I export it from the terminal using this command, its size is only 11 MB.
If I have a large database with hundreds of tables, and I only want to mysqldump the data from the first month, how would I do that?
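One approach (a sketch; it assumes every table has a date column, here called `created_at`, which is an assumption about the schema) is mysqldump's `--where` option, which applies the same condition to every table dumped:

```shell
# Dump only rows from January 2011 (dates and column name are
# hypothetical). Tables lacking the column will make the dump fail,
# so this only works if the condition is valid for every table dumped;
# otherwise, list the compatible tables explicitly after the db name.
mysqldump -u root -p \
  --where="created_at >= '2011-01-01' AND created_at < '2011-02-01'" \
  mydb > first_month.sql
```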
OK, so I need to do a mysqldump of a database, and this is what I have: mysqldump -uroot -psdfas@N$pr!nT --databases app_pro > /srv/DUMPFILE.SQL
I'm just trying to perform a mysqldump and have it scheduled. I'm using RHEL 5 and have added it to the crontab as shown below:
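A typical crontab entry for this looks like the fragment below (the schedule, paths, and log file are assumptions). Two cron-specific pitfalls are worth noting: cron's `PATH` is minimal, so use absolute paths, and a literal `%` in a crontab line starts a new line of stdin unless escaped as `\%`:

```
# m h dom mon dow  command  (runs daily at 02:30; paths are hypothetical)
30 2 * * * /usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --all-databases > /srv/backup/all-$(date +\%F).sql 2>> /var/log/mysqldump-cron.log
```

Keeping the credentials in `/root/.my.cnf` (readable only by root) avoids exposing the password in the crontab and in `ps` output.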
I use mysqldump for MySQL backups: mysqldump --lock-tables ... The DB is about 2GB, hence mysqldump takes a long time.
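If the tables are InnoDB (an assumption about this database), the usual way to avoid holding locks for the whole dump is `--single-transaction`, which reads from a consistent snapshot instead; for MyISAM tables it does not help. A sketch:

```shell
# --single-transaction: consistent InnoDB snapshot, no table locks held
# for the duration of the dump. --quick streams rows instead of
# buffering whole tables in memory, which matters on a 2GB database.
# "mydb" is a placeholder name.
mysqldump -u root -p --single-transaction --quick mydb > mydb.sql
```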
Bash master needed... To compare mysqldumps from multiple dates, I need SQL GROUP BY / ORDER BY functionality... but on the command line...
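A minimal, runnable sketch of the idea (the file names and row contents are made up for illustration): `sort` stands in for ORDER BY, `sort | uniq -c` for GROUP BY with COUNT(*), and `comm -3` lists the rows unique to either dump:

```shell
# Two toy dump files (hypothetical names and contents).
printf 'INSERT INTO t VALUES (2);\nINSERT INTO t VALUES (1);\n' > dump_jan.sql
printf 'INSERT INTO t VALUES (1);\nINSERT INTO t VALUES (3);\n' > dump_feb.sql

# "ORDER BY": normalize row order so row-order differences between
# dumps don't produce spurious diffs.
sort dump_jan.sql > jan.sorted
sort dump_feb.sql > feb.sorted

# "GROUP BY row": count how often each identical row appears across dumps.
cat jan.sorted feb.sorted | sort | uniq -c

# Rows unique to either dump (comm -3 suppresses lines common to both).
comm -3 jan.sorted feb.sorted
```

This only works on dumps produced with compatible options (e.g. `--skip-extended-insert`, so each row is its own INSERT line).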
I'm using mysqldump to dump all my tables to CSV files like so: mysqldump -u -p -t -T C:\Temp --fields-terminated-by=,
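A cleaned-up form of that command might look like the sketch below (user and database name are assumptions). Two things about `-T` are easy to trip over: the files are written by the *server* process, so the directory must exist on the server host and be writable by it, and the connecting account needs the `FILE` privilege. For each table you get a `.sql` file (the CREATE statement) and a `.txt` data file; `-t` suppresses CREATE statements in the data files themselves.

```shell
# Per-table CSV-ish export into C:\Temp (path from the question;
# "root"/"mydb" are placeholders). Options go before the db name.
mysqldump -u root -p -t -T "C:\Temp" --fields-terminated-by=, mydb
```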
I have this big database (100+ tables and 30+ million rows) that is being a pain in the ass to import back from a full backup.
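One widely used way to speed up a large restore (a sketch; the file and database names are assumptions) is to wrap the import in statements that turn off the per-row bookkeeping the server would otherwise do, then turn it back on afterwards:

```shell
# Disable FK/unique checks and binary logging for the loading session
# (sql_log_bin=0 needs SUPER and only makes sense if you don't need the
# import replicated). "full_backup.sql" and "bigdb" are placeholders.
( echo "SET foreign_key_checks=0; SET unique_checks=0; SET sql_log_bin=0;"
  cat full_backup.sql
  echo "SET foreign_key_checks=1; SET unique_checks=1;"
) | mysql -u root -p bigdb
```

Dumps made with `--extended-insert` (the default) also import far faster than one-INSERT-per-row dumps.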
I was wondering what's the best way to back up MySQL (v5.1.x) data: creating an archive of the MySQL data dir, or using mysqldump?
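For reference, the two options mentioned look roughly like this (paths and names are assumptions). The trade-off: a logical dump is portable across versions and restores selectively but is slow for big data; a physical copy is fast but is only consistent if mysqld is stopped (or you use filesystem snapshots or a hot-backup tool):

```shell
# 1) Logical backup: portable, human-readable, slower to restore.
mysqldump -u root -p --all-databases --single-transaction > all.sql

# 2) Physical copy of the data dir: fast, but only safe with the
#    server stopped (paths/service name are RHEL-style placeholders).
service mysqld stop \
  && tar czf /srv/backup/mysql-data.tgz /var/lib/mysql \
  && service mysqld start
```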