
How to copy tables from one website to another with PHP?

I have 2 websites, let's say example.com and example1.com.

example.com has a database named fruits, which contains a table named apple with 7000 records.

I exported apple and tried to import it into example1.com, but I keep getting a "MySQL server has gone away" error. I suspect this is due to some server-side restriction.

So, how can I copy the tables without having to contact the system admins? Is there a way to do this using PHP? I went through an example of copying tables, but that was within the same database.

Both example.com and example1.com are on the same server.


One possible approach:

  1. On the "source" server, create a PHP script (the "exporter") that outputs the contents of the table in an easy-to-parse format (XML comes to mind as easy to generate and consume, but alternatives like CSV would do).

  2. On the "destination" server, create an "importer" PHP script that requests the exporter via HTTP, parses the result, and uses that data to populate the table.

That's quite generic, but it should get you started. Here are some considerations (a rough sketch of both scripts follows the list):

  • http://ie2.php.net/manual/en/function.http-request.php is your friend
  • If the table contains sensitive data, there are many techniques to enhance security (http_request won't give you https:// support directly, but you can encrypt the data you export and decrypt it on import: look for "SSL" in the PHP manual for further details).
  • You should consider adding some redundancy (or even full-fledged encryption) to prevent the data from being tampered with while it sails the web between the servers.
  • You may use GET parameters to add flexibility (for example, passing the table name as a parameter would allow you to have a single script for all tables you may ever need to transfer).
  • With large tables, PHP timeouts may play against you. The ideal solution would be efficient code plus custom timeouts for the export and import scripts, but I'm not even sure whether that's possible at all. A fairly reliable workaround is to do the job in chunks (GET params come in handy here to tell the exporter which chunk you need, and a special entry in the output can be enough to tell the importer how much is left to import). Redirects help a lot with this approach (each redirect is a new request to the server, so timeouts get reset).
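
To make the exporter/importer idea concrete, here is a minimal sketch of both scripts. Treat every name in it as an assumption for illustration: the credentials are placeholders, the fruits/apple names come from the question, CSV is the transfer format, and the importer uses plain fopen() over HTTP (which requires allow_url_fopen) rather than the http_request function linked above.

// exporter.php -- on the source server; dumps the table as CSV
$link = mysql_connect('localhost', 'user', 'password'); /* placeholder credentials */
mysql_select_db('fruits', $link);
header('Content-Type: text/csv');
$res = mysql_query("SELECT * FROM `apple`", $link);
$out = fopen('php://output', 'w');
while ($row = mysql_fetch_assoc($res)) {
    fputcsv($out, $row); /* fputcsv handles quoting and escaping */
}
fclose($out);
mysql_free_result($res);

// importer.php -- on the destination server; fetches the CSV, inserts row by row
$link = mysql_connect('localhost', 'user', 'password'); /* placeholder credentials */
mysql_select_db('fruits', $link);
$src = fopen('http://example.com/exporter.php', 'r'); /* requires allow_url_fopen */
while (($row = fgetcsv($src)) !== false) {
    $values = array();
    foreach ($row as $field) {
        $values[] = "'" . mysql_real_escape_string($field, $link) . "'";
    }
    /* assumes both tables have identical column order */
    mysql_query("INSERT INTO `apple` VALUES (" . implode(', ', $values) . ")", $link);
}
fclose($src);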

Maybe I'm missing something, but I hope there is enough there to let you get your hands dirty on the job and come back with any specific issue I might have failed to foresee.

Hope this helps.


EDIT: Oops, I missed the detail that both DBs are on the same server. In that case, you can merge the import and export tasks into a single script. This means that:

  • You don't need a "transfer" format (such as XML or CSV): an in-memory representation within PHP is enough, since both tasks are now done by the same script.
  • The data doesn't ever leave your server, so the need for encryption is not so heavy. Just make sure no one else can run your script (via authentication or similar techniques) and you should be fine.
  • Timeouts are not so restrictive, since you don't waste a lot of time waiting for the response to arrive from the source to the destination server, but they still apply. Chunk processing, redirection, and GET parameters (passed within the Location for the redirect) are still a good solution, but you can squeeze much closer to the timeout since execution time metrics are far more reliable than cross-server data-transfer metrics.

Here is a very rough sketch of what you may have to code:

$link_src = mysql_connect('localhost', 'src_user', 'src_pass'); /* placeholder connection details */
$link_dst = mysql_connect('localhost', 'dst_user', 'dst_pass'); /* placeholder connection details */
mysql_select_db('source_db', $link_src);
mysql_select_db('destination_db', $link_dst);
/* You may want to TRUNCATE the table on destination before going on, to prevent data repetition */
$q = "INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUES ";
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
    /* escape and quote each value so strings can't break the query */
    $q .= sprintf("('%s', '%s', '%s'), ",
        mysql_real_escape_string($row['field1_name'], $link_dst),
        mysql_real_escape_string($row['field2_name'], $link_dst),
        mysql_real_escape_string($row['field3_name'], $link_dst));
}
mysql_free_result($res);
$q = rtrim($q, ', '); /* drop the trailing ", " left by the loop */
mysql_query($q, $link_dst);

You'll have to add the chunking logic in there (that part is too case- and setup-specific), and probably output some confirmation message (maybe a DESCRIBE and a COUNT of both source and destination tables, and a comparison between them?), but that's quite the core of the job. As an alternative you may run a separate INSERT per row (invoking the query within the loop), but I'm confident a single query would be faster (however, if PHP's memory limit is too small, this alternative also lets you get rid of the memory-hungry $q).
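
As an illustration of that chunking logic, here is one possible shape for a redirect-based version. This is only a sketch under assumptions: the credentials and database names are placeholders, the table is the question's apple, the script name copy_chunk.php is made up, and an id primary key is assumed so the chunk order is deterministic.

/* copy_chunk.php -- each request copies CHUNK rows, then redirects to
   itself with the next offset, so the PHP timeout resets every chunk */
define('CHUNK', 500); /* rows per request; tune to your timeout */
$offset = isset($_GET['offset']) ? (int)$_GET['offset'] : 0;
$link_src = mysql_connect('localhost', 'src_user', 'src_pass'); /* placeholders */
$link_dst = mysql_connect('localhost', 'dst_user', 'dst_pass'); /* placeholders */
mysql_select_db('source_db', $link_src);
mysql_select_db('destination_db', $link_dst);
$res = mysql_query("SELECT * FROM `apple` ORDER BY `id` LIMIT $offset, " . CHUNK, $link_src);
$copied = 0;
while ($row = mysql_fetch_assoc($res)) {
    $values = array();
    foreach ($row as $field) {
        $values[] = "'" . mysql_real_escape_string($field, $link_dst) . "'";
    }
    mysql_query("INSERT INTO `apple` VALUES (" . implode(', ', $values) . ")", $link_dst);
    $copied++;
}
mysql_free_result($res);
if ($copied == CHUNK) {
    /* a full chunk may mean more rows remain: hand off to a fresh request */
    header('Location: copy_chunk.php?offset=' . ($offset + CHUNK));
} else {
    echo 'Done: ' . ($offset + $copied) . ' rows copied in total.';
}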


Yet another edit:

From the documentation link posted by Roberto:

You can also get these errors if you send a query to the server that is incorrect or too large. If mysqld receives a packet that is too large or out of order, it assumes that something has gone wrong with the client and closes the connection. If you need big queries (for example, if you are working with big BLOB columns), you can increase the query limit by setting the server's max_allowed_packet variable, which has a default value of 1MB. You may also need to increase the maximum packet size on the client end. More information on setting the packet size is given in Section B.5.2.10, “Packet too large”.

An INSERT or REPLACE statement that inserts a great many rows can also cause these sorts of errors. Either one of these statements sends a single request to the server irrespective of the number of rows to be inserted; thus, you can often avoid the error by reducing the number of rows sent per INSERT or REPLACE.

If that's what's causing your issue (and by your question it seems very likely it is), then the approach of breaking the INSERT into one query per row will most probably solve it. In that case, the code sketch above becomes:

$link_src = mysql_connect('localhost', 'src_user', 'src_pass'); /* placeholder connection details */
$link_dst = mysql_connect('localhost', 'dst_user', 'dst_pass'); /* placeholder connection details */
mysql_select_db('source_db', $link_src);
mysql_select_db('destination_db', $link_dst);
/* You may want to TRUNCATE the table on destination before going on, to prevent data repetition */
$res = mysql_query("SELECT * FROM `table_name_here`", $link_src);
while ($row = mysql_fetch_assoc($res)) {
    /* one small INSERT per row keeps each packet well below max_allowed_packet */
    $q = sprintf("INSERT INTO `table_name_here` (`field1_name`, `field2_name`, `field3_name`) VALUES ('%s', '%s', '%s')",
        mysql_real_escape_string($row['field1_name'], $link_dst),
        mysql_real_escape_string($row['field2_name'], $link_dst),
        mysql_real_escape_string($row['field3_name'], $link_dst));
    mysql_query($q, $link_dst);
}
mysql_free_result($res);

The issue may also be triggered by the sheer volume of the initial SELECT; in that case you should combine this with chunking (either with multiple SELECT+while() blocks within one run, taking advantage of SQL's LIMIT clause, or via redirections), thus breaking the SELECT down into multiple, smaller queries; a sketch of the in-script variant follows. (Note that redirection-based chunking is only needed if you have timeout issues, or if your execution time gets close enough to the timeout to threaten issues as the table grows. It may be a good idea to implement it anyway, so that even if your table ever grows to an obscene size the script will still work unchanged.)
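
For completeness, here is what the in-script variant (multiple SELECT+while() blocks within one run) could look like. Same assumptions as the sketches above: placeholder credentials and database names, and a hypothetical id primary key so the slices don't overlap.

$link_src = mysql_connect('localhost', 'src_user', 'src_pass'); /* placeholders */
$link_dst = mysql_connect('localhost', 'dst_user', 'dst_pass'); /* placeholders */
mysql_select_db('source_db', $link_src);
mysql_select_db('destination_db', $link_dst);
$chunk = 500;
$offset = 0;
do {
    $res = mysql_query("SELECT * FROM `apple` ORDER BY `id` LIMIT $offset, $chunk", $link_src);
    $rows = 0;
    while ($row = mysql_fetch_assoc($res)) {
        $values = array();
        foreach ($row as $field) {
            $values[] = "'" . mysql_real_escape_string($field, $link_dst) . "'";
        }
        /* one small INSERT per row: safe for max_allowed_packet, too */
        mysql_query("INSERT INTO `apple` VALUES (" . implode(', ', $values) . ")", $link_dst);
        $rows++;
    }
    mysql_free_result($res);
    $offset += $chunk;
} while ($rows == $chunk); /* a short (or empty) slice means we're done */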


After struggling with this for a while, I came across BigDump. It worked like a charm! I was able to copy LARGE databases without a glitch.


Here are the most common causes of the "MySQL server has gone away" error in MySQL 5.0:

http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

You might want to have a look at it and use it as a checklist to see if you're doing something wrong.
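
If you want to inspect the usual suspects from PHP itself, a quick diagnostic could look like the sketch below (credentials are placeholders; the two variables shown are the ones the checklist mentions most often):

$link = mysql_connect('localhost', 'user', 'password'); /* placeholder credentials */
foreach (array('max_allowed_packet', 'wait_timeout') as $name) {
    /* SHOW VARIABLES LIKE returns one row with Variable_name and Value */
    $res = mysql_query("SHOW VARIABLES LIKE '$name'", $link);
    $row = mysql_fetch_assoc($res);
    echo $row['Variable_name'] . ' = ' . $row['Value'] . "\n";
    mysql_free_result($res);
}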
