
Scraping a messy HTML website with PHP

I am in the following situation: I am trying to convert messy scraped HTML into a clean, neat XML structure.

A partial HTML code of the scraped website:

<p><span class='one'>week number</span></p>

<p><span class='two'>day of the week</span></p>
<table class='spreadsheet'>
table data
</table>

<p><span class='two'>another day of the week</span></p>
<table class='spreadsheet'>
table data
</table>

<p><span class='one'>another week number</span></p>

ETC

Now I want to create the following xml structure with php:

<week number='week number'>
 <day name='day of the week'>
  <data id='table data'>table data</data>
 </day>

 <day name='another day of the week'>
  <data id='table data'>table data</data>
 </day>
</week>
<week number='another week number'>
 ETC
</week>

I have been trying the Simple HTML DOM approach, but I have no idea how to get the next sibling and check whether it is a new day of the week, new table data, a new week, etc.

I am, of course, also open to other solutions.

Thanks.

Cheers, Dandoen


There is no silver bullet. A typical way to handle this would be to first filter the HTML through htmltidy to get a somewhat predictable tag soup, then feed it to a parser (such as DOMDocument). Then use DOMXPath to select the nodes you need, assemble an intermediate structure of associative arrays, and finally transform this into an output XML document.

Hint: Use firebug's "Copy XPath" feature to grab the xpath expression for a node.
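The pipeline above can be sketched roughly like this. This is an untested outline, not a drop-in solution: the class names ('one', 'two', 'spreadsheet') come from the question, while 'scraped.html' and the wrapping <weeks> root element (needed so the output is a single well-formed document) are my own assumptions.

```php
<?php
// 1. Repair the tag soup with the Tidy extension.
$html = file_get_contents('scraped.html'); // placeholder input file
$tidy = tidy_parse_string($html, array('output-xhtml' => true), 'utf8');
$tidy->cleanRepair();

// 2. Parse with DOMDocument; @ suppresses warnings about leftover quirks.
$doc = new DOMDocument();
@$doc->loadHTML(tidy_get_output($tidy));
$xpath = new DOMXPath($doc);

// 3. Walk the markers in document order and build an intermediate array.
$weeks = array();
$week = $day = null;
$query = "//p/span[@class='one' or @class='two'] | //table[@class='spreadsheet']";
foreach ($xpath->query($query) as $node) {
    if ($node->nodeName === 'span' && $node->getAttribute('class') === 'one') {
        $week = trim($node->textContent);               // new week heading
        $weeks[$week] = array();
    } elseif ($node->nodeName === 'span') {
        $day = trim($node->textContent);                // new day heading
        $weeks[$week][$day] = '';
    } else {
        $weeks[$week][$day] = trim($node->textContent); // table under last day
    }
}

// 4. Emit the target XML.
$out = new DOMDocument('1.0', 'UTF-8');
$root = $out->appendChild($out->createElement('weeks'));
foreach ($weeks as $number => $days) {
    $weekEl = $root->appendChild($out->createElement('week'));
    $weekEl->setAttribute('number', $number);
    foreach ($days as $name => $data) {
        $dayEl = $weekEl->appendChild($out->createElement('day'));
        $dayEl->setAttribute('name', $name);
        $dataEl = $dayEl->appendChild($out->createElement('data'));
        $dataEl->setAttribute('id', $data);
        $dataEl->appendChild($out->createTextNode($data));
    }
}
echo $out->saveXML();
```

Because the XPath union returns nodes in document order, each table is attributed to the most recently seen day, and each day to the most recently seen week, which matches the flat layout of the scraped page.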


A good option is the Tidy (aka HTML Tidy) PHP extension.

http://php.net/tidy

However, if you are using a web hosting service, it might not be enabled or you might need to ask for it explicitly.
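For example, a minimal sketch (assuming the extension is loaded) that repairs an unclosed tag:

```php
<?php
// Minimal Tidy usage: parse broken markup, then clean and repair it.
$dirty = "<p><span class='one'>week number</span>"; // unclosed <p>
$config = array('output-xhtml' => true, 'show-body-only' => true);
$tidy = tidy_parse_string($dirty, $config, 'utf8');
$tidy->cleanRepair();
echo tidy_get_output($tidy); // repaired, well-formed markup
```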

Edit:

Another option, which should not require any additional PHP modules, is a project like this:

http://www.bioinformatics.org/phplabware/internal_utilities/htmLawed/index.php


You could use XSL transformations (XSLT) for this.

http://en.wikipedia.org/wiki/XSLT
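In PHP that would mean the XSL extension's XSLTProcessor. A hedged sketch, where 'transform.xsl' is a hypothetical stylesheet you would still have to write to map the scraped markup to the desired <week>/<day>/<data> structure, and 'scraped.html' is a placeholder input file:

```php
<?php
// Apply an XSLT stylesheet to the (ideally tidied) HTML input.
$xml = new DOMDocument();
@$xml->loadHTMLFile('scraped.html'); // placeholder input file

$xsl = new DOMDocument();
$xsl->load('transform.xsl');         // hypothetical stylesheet

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
echo $proc->transformToXML($xml);
```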


The most "error prone" method IMHO is to scrape using a real browser, which is pretty easy if using Selenium RC for remote browser control. See my sample code to scrape Google using jQuery : HERE.

Most of the contents can be extracted with just a few lines of codes.
