Get title of website via link
Notice how Google News has sources at the bottom of each article excerpt.
The Guardian - ABC News - Reuters - Bloomberg
I'm trying to imitate that.
For example, upon submitting the URL http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/
I want to return The Washington Times
How is this possible with PHP?
My answer expands on @AI W's suggestion of using the title of the page. Below is the code to accomplish that.
<?php
function get_title($url) {
    $str = file_get_contents($url);
    if (strlen($str) > 0) {
        $str = trim(preg_replace('/\s+/', ' ', $str)); // supports line breaks inside <title>
        preg_match('/\<title\>(.*)\<\/title\>/i', $str, $title); // ignore case
        return $title[1];
    }
}

// Example:
echo get_title("http://www.washingtontimes.com/");
?>
OUTPUT
Washington Times - Politics, Breaking News, US and World News
As you can see, it is not exactly what Google is using, so this leads me to believe that they get a URL's hostname and match it to their own list.
http://www.washingtontimes.com/ => The Washington Times
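A minimal sketch of that lookup-table idea, assuming a hand-maintained map from hostname to display name (the entries below are illustrative, not Google's actual list):

```php
<?php
// Hypothetical mapping of hostnames to the display names Google News shows.
function source_name($url) {
    $sources = array(
        'www.washingtontimes.com' => 'The Washington Times',
        'www.theguardian.com'     => 'The Guardian',
        'www.reuters.com'         => 'Reuters',
    );
    $host = parse_url($url, PHP_URL_HOST);
    // Fall back to the raw hostname when the source is unknown.
    return isset($sources[$host]) ? $sources[$host] : $host;
}

echo source_name('http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/');
// The Washington Times
```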
$doc = new DOMDocument();
@$doc->loadHTMLFile('http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/');
$xpath = new DOMXPath($doc);
echo $xpath->query('//title')->item(0)->nodeValue."\n";
Output:
Debt commission falls short on test vote - Washington Times
Obviously you should also implement basic error handling.
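For example (one way, not the only one), you could replace the @ suppression with libxml's error collection and check that a <title> node actually exists before reading it:

```php
<?php
// Sketch: parse HTML defensively and return null when no <title> is found.
function get_title_safe($html) {
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // collect parse warnings instead of printing them
    $loaded = $doc->loadHTML($html);
    libxml_clear_errors();
    if (!$loaded) {
        return null;
    }
    $nodes = $doc->getElementsByTagName('title');
    return $nodes->length > 0 ? trim($nodes->item(0)->nodeValue) : null;
}
```

You would still want to check that the fetch itself (file_get_contents() or curl_exec()) did not return false before parsing.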
Using get_meta_tags() on the domain's home page brings back something which might need truncating but could be useful.
$b = "http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/";
$url = parse_url($b);
$tags = get_meta_tags($url['scheme'] . '://' . $url['host']);
var_dump($tags);
includes the description 'The Washington Times delivers breaking news and commentary on the issues that affect the future of our nation.'
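If you do truncate it, a small helper along these lines (illustrative, cutting at a word boundary) would do:

```php
<?php
// Illustrative helper: shorten a meta description at a word boundary.
function truncate_text($text, $max = 80) {
    if (strlen($text) <= $max) {
        return $text;
    }
    $cut = substr($text, 0, $max);
    $space = strrpos($cut, ' ');    // back up to the last complete word
    if ($space !== false) {
        $cut = substr($cut, 0, $space);
    }
    return rtrim($cut) . '...';
}

echo truncate_text('The Washington Times delivers breaking news and commentary on the issues that affect the future of our nation.');
```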
You could fetch the contents of the URL and do a regular expression search for the content of the title element.
<?php
$urlContents = file_get_contents("http://example.com/");
preg_match("/<title>(.*)<\/title>/i", $urlContents, $matches);
print($matches[1] . "\n"); // "Example Web Page"
?>
Or, if you don't want to use a regular expression (to match something very near the top of the document), you could use a DOMDocument object:
<?php
$urlContents = file_get_contents("http://example.com/");
$dom = new DOMDocument();
@$dom->loadHTML($urlContents);
$title = $dom->getElementsByTagName('title');
print($title->item(0)->nodeValue . "\n"); // "Example Web Page"
?>
I leave it up to you to decide which method you like best.
PHP manual on cURL
<?php
$ch = curl_init("http://www.example.com/");
$fp = fopen("example_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
PHP manual on Perl regex matching
<?php
$subject = "abcdef";
$pattern = '/^def/';
preg_match($pattern, $subject, $matches, PREG_OFFSET_CAPTURE, 3);
print_r($matches);
?>
And putting those two together:
<?php
// create curl resource
$ch = curl_init();
// set url
curl_setopt($ch, CURLOPT_URL, "example.com");
//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// $output contains the output string
$output = curl_exec($ch);
$pattern = '/[<]title[>]([^<]*)[<][\/]titl/i';
preg_match($pattern, $output, $matches);
print_r($matches);
// close curl resource to free up system resources
curl_close($ch);
?>
I can't promise this example will work since I don't have PHP here, but it should help you get started.
I try to avoid regular expressions when they aren't necessary; below is a function I made to get the website title with cURL and DOMDocument.
function website_title($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Some websites, like Facebook, need a user agent to be set.
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36');
    $html = curl_exec($ch);
    curl_close($ch);

    $dom = new DOMDocument;
    @$dom->loadHTML($html);
    $title = $dom->getElementsByTagName('title')->item(0)->nodeValue;
    return $title;
}

echo website_title('https://www.facebook.com/');
The above returns the following: Welcome to Facebook - Log In, Sign Up or Learn More
Alternatively you can use Simple Html Dom Parser:
<?php
require_once('simple_html_dom.php');
$html = file_get_html('http://www.washingtontimes.com/news/2010/dec/3/debt-panel-fails-test-vote/');
echo $html->find('title', 0)->innertext . "<br>\n";
echo $html->find('div[class=entry-content]', 0)->innertext;
I wrote a function to handle it:
function getURLTitle($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $content = curl_exec($ch);
    $contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
    curl_close($ch);

    $charset = '';
    if ($contentType && preg_match('/\bcharset=(.+)\b/i', $contentType, $matches)) {
        $charset = $matches[1];
    }

    if (strlen($content) > 0 && preg_match('/\<title\b.*\>(.*)\<\/title\>/i', $content, $matches)) {
        $title = $matches[1];
        if (!$charset && preg_match_all('/\<meta\b[^>]*\>/i', $content, $matches)) {
            // Priority order:
            //   1. HTTP header Content-Type (handled above)
            //   2. <meta http-equiv="Content-Type" ... charset=...>
            //   3. <meta charset=...>
            foreach ($matches[0] as $match) {
                $match = strtolower($match);
                if (strpos($match, 'content-type') !== false && preg_match('/\bcharset=([\w-]+)/', $match, $ms)) {
                    $charset = $ms[1];
                    break;
                }
            }
            if (!$charset) {
                // <meta charset=utf-8>
                // <meta charset='utf-8'>
                foreach ($matches[0] as $match) {
                    $match = strtolower($match);
                    if (preg_match('/\bcharset=[\'"]?([\w-]+)/', $match, $ms)) {
                        $charset = $ms[1];
                        break;
                    }
                }
            }
        }
        return $charset ? iconv($charset, 'utf-8', $title) : $title;
    }
    return $url;
}
It fetches the webpage content and tries to determine the document's character encoding, from highest priority to lowest:
- an HTTP "charset" parameter in a "Content-Type" field;
- a META declaration with "http-equiv" set to "Content-Type" and a value set for "charset";
- the charset attribute set on an element that designates an external resource.
(see http://www.w3.org/TR/html4/charset.html)
It then uses iconv to convert the title to UTF-8 encoding.
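As a round-trip illustration of the iconv step (the title string here is made up):

```php
<?php
// Simulate a title received in ISO-8859-1 from a non-UTF-8 page,
// then convert it to UTF-8 the same way the function does.
$latin1 = iconv('UTF-8', 'ISO-8859-1', 'Débat sur la dette');
$utf8   = iconv('ISO-8859-1', 'UTF-8', $latin1);
echo $utf8; // Débat sur la dette
```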
Get title of website via link and convert title to utf-8 character encoding:
https://gist.github.com/kisexu/b64bc6ab787f302ae838
function getTitle($url)
{
    // get html via url
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_AUTOREFERER, true);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $html = curl_exec($ch);
    curl_close($ch);

    // get title
    preg_match('/(?<=<title>).+(?=<\/title>)/iU', $html, $match);
    $title = empty($match[0]) ? 'Untitled' : $match[0];
    $title = trim($title);

    // convert title to utf-8 character encoding
    if ($title != 'Untitled') {
        preg_match('/(?<=charset\=).+(?=\")/iU', $html, $match);
        if (!empty($match[0])) {
            $charset = str_replace('"', '', $match[0]);
            $charset = str_replace("'", '', $charset);
            $charset = strtolower(trim($charset));
            if ($charset != 'utf-8') {
                $title = iconv($charset, 'utf-8', $title);
            }
        }
    }
    return $title;
}
Simple but it takes some time:
$tags = get_meta_tags('https://google.com');
if (array_key_exists('title', $tags)) {
# Do something with it
echo nl2br("Page Title: $tags[title]\n");
}
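If that latency matters, one option (a sketch using a hypothetical file-based cache) is to remember titles you have already fetched so repeated lookups skip the network:

```php
<?php
// Hypothetical file-based cache in front of get_meta_tags().
function cached_title($url, $ttl = 3600) {
    $file = sys_get_temp_dir() . '/title_' . md5($url);
    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return file_get_contents($file);    // fresh enough: reuse cached title
    }
    $tags = get_meta_tags($url);            // slow network call
    $title = isset($tags['title']) ? $tags['title'] : '';
    file_put_contents($file, $title);
    return $title;
}
```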
I haven't compared the other answers proposed here for performance, but you should.