
Getting a "summary" of a webpage

I have something of a hairy problem: I'd like to generate a couple of paragraphs of "description" for a given URL, normally the start of an article. The meta description field is one way to go, but it isn't always good or set properly.

It's fair to say it's a bit problematic to accomplish this from the screen-scraped HTML. My general idea was to scan the HTML for the first "appropriate" segment, but it's hard to say what that is; perhaps something like the first paragraph containing a certain amount of text...

Anyone have any good ideas? :) It doesn't have to be foolproof


So, you wanna become a new Google, heh? :-)

Many sites are "SEO friendly" these days. That lets you go for the headings and then look for the paragraphs below them.

Also, look for lists. A lot of content sits in tab-like interfaces (tabs, accordions, ...) that are built from ordered or unordered lists.

If that fails, maybe look for a div with class "content" or "main" (or a combination) and start from there.
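The "content/main div plus a minimum paragraph length" part of these heuristics could be sketched roughly like this (a Python sketch, even though the thread leans .NET; the class names, the 40-character threshold, and the `summarize` helper are all made-up choices, not anything from the answer):

```python
from html.parser import HTMLParser

class SummaryExtractor(HTMLParser):
    """Collects the text of <p> elements, flagging those nested inside a
    div whose class attribute mentions 'content' or 'main'."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.content_depth = 0   # nesting level inside a content/main div
        self.buf = []
        self.paragraphs = []     # list of (inside_content_div, text)

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            cls = dict(attrs).get("class") or ""
            # once inside a content div, count every nested div too
            if self.content_depth or any(k in cls for k in ("content", "main")):
                self.content_depth += 1
        elif tag == "p":
            self.in_p, self.buf = True, []

    def handle_endtag(self, tag):
        if tag == "div" and self.content_depth:
            self.content_depth -= 1
        elif tag == "p" and self.in_p:
            self.in_p = False
            text = "".join(self.buf).strip()
            if text:
                self.paragraphs.append((self.content_depth > 0, text))

    def handle_data(self, data):
        if self.in_p:
            self.buf.append(data)

def summarize(html, min_len=40):
    """Prefer a long-enough <p> inside a content/main div, else any <p>."""
    ex = SummaryExtractor()
    ex.feed(html)
    candidates = [t for inside, t in ex.paragraphs if inside and len(t) >= min_len]
    if not candidates:
        candidates = [t for _, t in ex.paragraphs if len(t) >= min_len]
    return candidates[0] if candidates else ""
```

The length threshold is doing the real work here: it's what skips short navigation and teaser paragraphs before the article body.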

If you use different approaches, make sure you keep statistics of what worked and what didn't (maybe even save a full page), so you can review and tweak your parsing and searching methods.

As a side note, I've used HtmlAgilityPack to parse and search through HTML with success. Well, at least it beats parsing with regex :-)


Perhaps look for the div element that contains the most p elements, and then grab the first p child. If no div, get the first p from the body element.
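A rough sketch of that "densest div" idea (again in Python rather than the thread's .NET; the class and function names are invented for illustration). Each open div tracks how many p children it has seen, and the winner's first paragraph is returned, falling back to the first p anywhere:

```python
from html.parser import HTMLParser

class DivDensity(HTMLParser):
    """For each open <div>, counts its <p> children and remembers the
    text of the first one; keeps the densest div seen so far."""
    def __init__(self):
        super().__init__()
        self.div_stack = []      # one [p_count, first_p_text] per open div
        self.in_p = False
        self.buf = []
        self.best = (0, "")      # (max p count, first <p> text of that div)
        self.first_p_in_body = ""

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.div_stack.append([0, ""])
        elif tag == "p":
            self.in_p, self.buf = True, []

    def handle_endtag(self, tag):
        if tag == "p" and self.in_p:
            self.in_p = False
            text = "".join(self.buf).strip()
            if not self.first_p_in_body:
                self.first_p_in_body = text
            if self.div_stack:           # credit the innermost open div
                top = self.div_stack[-1]
                top[0] += 1
                if not top[1]:
                    top[1] = text
        elif tag == "div" and self.div_stack:
            count, first = self.div_stack.pop()
            if count > self.best[0]:
                self.best = (count, first)

    def handle_data(self, data):
        if self.in_p:
            self.buf.append(data)

def densest_div_summary(html):
    d = DivDensity()
    d.feed(html)
    return d.best[1] if d.best[0] else d.first_p_in_body
```

Counting only direct-ish children (paragraphs credited to their innermost div) keeps a page-wrapping outer div from always winning.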

This will always have its problems.


You can strip the HTML tags using this regular expression:

// requires: using System.Text.RegularExpressions;
string stripped = Regex.Replace(textBox1.Text, @"<(.|\n)*?>", string.Empty);

You will then get the content text you can use to generate your paragraphs.
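The same idea in Python, extended with a whitespace-collapse step so the stripped text reads as one run (the `first_paragraphs` helper and its 300-character cutoff are my own additions, and as noted above, regex-stripping is fragile on real-world HTML with comments, scripts, or attributes containing `>`):

```python
import re

def strip_tags(html):
    # Equivalent of the C# snippet: remove anything between < and >.
    return re.sub(r"<(.|\n)*?>", "", html)

def first_paragraphs(html, max_chars=300):
    """Strip tags, collapse runs of whitespace, and keep roughly the
    first couple of sentences as a crude description."""
    text = re.sub(r"\s+", " ", strip_tags(html)).strip()
    return text[:max_chars]
```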
