curl_multi_exec stops if one url is 404, how can I change that?
Currently, my cURL multi exec stops if one url it connects to doesn't work, so a few questions:
1: Why does it stop? That doesn't make sense to me.
2: How can I make it continue?
EDIT: Here is my code:
$SQL = mysql_query("SELECT url FROM shells");
$mh = curl_multi_init();
$handles = array();

while ($resultSet = mysql_fetch_array($SQL)) {
    // Load the URLs and send the GET data
    $ch = curl_init($resultSet['url'] . $fullcurl);
    // Only wait a few seconds (long enough to send the data)
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Create a status variable so we know when exec is done.
$running = null;

// Execute the handles
do {
    // Call exec. This call is non-blocking, meaning it works in the background.
    curl_multi_exec($mh, $running);
    // Sleep while it's executing. You could do other work here, if you have any.
    sleep(2);
    // Keep going until it's done.
} while ($running > 0);

// Loop to remove (close) the regular handles.
foreach ($handles as $ch)
{
    // Remove the current handle from the multi handle.
    curl_multi_remove_handle($mh, $ch);
}

// Close the multi handle
curl_multi_close($mh);
Here you go:
$urls = array(
    0 => 'http://bing.com',
    1 => 'http://yahoo.com/thisfiledoesntexistsoitwill404.php', // 404
    2 => 'http://google.com',
);

$mh = curl_multi_init();
$handles = array();

foreach ($urls as $url)
{
    $handles[$url] = curl_init($url);

    curl_setopt($handles[$url], CURLOPT_TIMEOUT, 3);
    curl_setopt($handles[$url], CURLOPT_AUTOREFERER, true);
    curl_setopt($handles[$url], CURLOPT_FAILONERROR, true);
    curl_setopt($handles[$url], CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($handles[$url], CURLOPT_RETURNTRANSFER, true);
    curl_setopt($handles[$url], CURLOPT_SSL_VERIFYHOST, false);
    curl_setopt($handles[$url], CURLOPT_SSL_VERIFYPEER, false);

    curl_multi_add_handle($mh, $handles[$url]);
}

$running = null;

do {
    curl_multi_exec($mh, $running);
    usleep(200000);
} while ($running > 0);

foreach ($handles as $key => $value)
{
    $handles[$key] = false;

    // Only keep the content if the transfer finished without an error. With
    // CURLOPT_FAILONERROR set, a 404 counts as an error, so that entry stays false
    // while the other transfers are unaffected.
    if (curl_errno($value) === 0)
    {
        $handles[$key] = curl_multi_getcontent($value);
    }

    curl_multi_remove_handle($mh, $value);
    curl_close($value);
}

curl_multi_close($mh);

echo '<pre>';
print_r(array_map('htmlentities', $handles));
echo '</pre>';
Returns:
Array
(
[http://bing.com] => <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html...
[http://yahoo.com/thisfiledoesntexistsoitwill404.php] =>
[http://google.com] => <!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=ISO-8859-1"><title>Google</title>...
)
As you can see, all URLs are fetched, even Google.com, which comes after the 404 Yahoo page.
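If you want to know which URLs failed rather than just discarding them, a minimal variation (just a sketch: drop CURLOPT_FAILONERROR and replace the final cleanup loop above) could record the HTTP status of each handle with curl_getinfo():

// Sketch: keep every response and record its HTTP status code,
// so you can see which URLs returned 404 instead of getting false.
foreach ($handles as $key => $value)
{
    $status  = curl_getinfo($value, CURLINFO_HTTP_CODE); // e.g. 200 or 404
    $content = curl_multi_getcontent($value);

    $handles[$key] = array('status' => $status, 'content' => $content);

    curl_multi_remove_handle($mh, $value);
    curl_close($value);
}

curl_multi_close($mh);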
I don't have a platform to test this on, but most of the examples I've seen compare the constant returned by curl_multi_exec instead of checking the $running variable.
// Execute the handles
do {
    // Call exec. This call is non-blocking, meaning it works in the background.
    $mrc = curl_multi_exec($mh, $running);
    // Sleep while it's executing. You could do other work here, if you have any.
    sleep(2);
    // Keep going as long as curl asks to be called again.
} while ($mrc == CURLM_CALL_MULTI_PERFORM);
I hope this works for you.
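For completeness, a variant I'd expect to be more robust (a sketch only, not tested here) keeps both the $mrc and $running checks and uses curl_multi_select() instead of a fixed sleep, so the loop only blocks until there is activity on one of the handles:

// Sketch: loop while transfers are still running, letting curl signal
// when there is work to do instead of sleeping a fixed amount of time.
$running = null;
do {
    $mrc = curl_multi_exec($mh, $running);
} while ($mrc == CURLM_CALL_MULTI_PERFORM);

while ($running > 0 && $mrc == CURLM_OK) {
    // Wait (up to 1 second) for activity on any of the handles.
    if (curl_multi_select($mh, 1.0) === -1) {
        usleep(100000); // avoid busy-looping if select fails
    }
    // Process whatever is ready.
    do {
        $mrc = curl_multi_exec($mh, $running);
    } while ($mrc == CURLM_CALL_MULTI_PERFORM);
}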