how to get title and description of a website using php? [duplicate]
Possible Duplicate:
Help getting meta title and description
I've spent a full day on this. I searched the net and saw some similar questions on Stack Overflow as well, but all I got was disappointment.
I want some PHP code that outputs the title and a 4-5 line description of any website.
<?php
// Fetch the whole page.
$url = "http://www.drquincy.com/";
$fp = fopen($url, 'r');
$content = "";
while (!feof($fp)) {
    $buffer = trim(fgets($fp, 4096));
    $content .= $buffer;
}
fclose($fp);

// Extract the <title> text. Use ~ as the pattern delimiter so the
// slash in </title> needs no escaping; i = case-insensitive,
// s = let . match newlines.
$start = '<title>';
$end = '</title>';
preg_match("~$start(.*?)$end~si", $content, $match);
$title = isset($match[1]) ? $match[1] : '';

// get_meta_tags() fetches the URL and parses its <meta> tags.
$metatagarray = get_meta_tags($url);
$keywords = isset($metatagarray["keywords"]) ? $metatagarray["keywords"] : '';
$description = isset($metatagarray["description"]) ? $metatagarray["description"] : '';

echo "<div><strong>URL: </strong>$url</div>\n";
echo "<div><strong>Title: </strong>$title</div>\n";
echo "<div><strong>Description: </strong>$description</div>\n";
echo "<div><strong>Keywords: </strong>$keywords</div>\n";
Just change the url:)
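A more robust alternative is a sketch using PHP's built-in DOMDocument, which tolerates real-world (non-XHTML) markup; this assumes allow_url_fopen is enabled, and the URL is just an example:

```php
<?php
// Fetch the page in one call instead of an fopen/fgets loop.
$url = "http://www.drquincy.com/";
$html = file_get_contents($url);

// Suppress warnings about malformed HTML before parsing.
libxml_use_internal_errors(true);
$doc = new DOMDocument();
$doc->loadHTML($html);
libxml_clear_errors();

// <title> text.
$titles = $doc->getElementsByTagName('title');
$title = $titles->length > 0 ? trim($titles->item(0)->textContent) : '';

// <meta name="description"> and <meta name="keywords">.
$description = $keywords = '';
foreach ($doc->getElementsByTagName('meta') as $meta) {
    $name = strtolower($meta->getAttribute('name'));
    if ($name === 'description') {
        $description = $meta->getAttribute('content');
    } elseif ($name === 'keywords') {
        $keywords = $meta->getAttribute('content');
    }
}

echo "<div><strong>Title: </strong>$title</div>\n";
echo "<div><strong>Description: </strong>$description</div>\n";
echo "<div><strong>Keywords: </strong>$keywords</div>\n";
```

Unlike the regex approach, this still works when the tag is written as `<TITLE>` or split across lines.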
There are many ways to parse HTML. First, fetch the content itself:
$res = file_get_contents("http://www.google.com");
That assumes file_get_contents() is allowed to access URLs (allow_url_fopen must be enabled). To extract the title, you could use a regex:
preg_match("~<title>(.*?)</title>~", $res, $match);
$title = $match[1];
But it would be better to use a DOM parser (see http://php.net/manual/en/book.xml.php), though a strict XML parser may be a problem if the target content is not valid XML.
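As a minimal sketch of the DOM route on an inline snippet: DOMDocument::loadHTML copes with sloppy, non-XML markup where a strict XML parser like SimpleXML would fail, and textContent decodes entities for you:

```php
<?php
// Deliberately sloppy markup: unclosed <p>, entity in the title.
$res = '<html><head><title>Example &amp; Demo</title></head><body><p>Unclosed paragraph</body></html>';

libxml_use_internal_errors(true); // silence warnings for bad markup
$doc = new DOMDocument();
$doc->loadHTML($res);
libxml_clear_errors();

$title = $doc->getElementsByTagName('title')->item(0)->textContent;
echo $title; // prints "Example & Demo"
```

Swap the inline string for the file_get_contents() result above to run it against a live page.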