Scrape and convert website into HTML? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.

We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.

Closed 2 years ago.

I haven't done this in 3 or 4 years, but a client wants to downgrade their dynamic website into static HTML.

Are there any free tools out there to crawl a domain and generate working HTML files to make this quick and painless?

Edit: it is a ColdFusion website, if that matters.


Getleft is a nice Windows client that can do this. It is very configurable and reliable.

Wget can, too, with the --mirror option.
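As a sketch of what a wget mirror run might look like (the domain below is a placeholder, not from the question; a few extra flags beyond --mirror are usually needed to get a copy that actually works offline):

```shell
#!/bin/sh
# Hedged sketch: turning a dynamic site into browsable static HTML with wget.
# https://example.com/ is a placeholder for the client's real domain.
#
#   --mirror            recursive download with timestamping (-r -N -l inf)
#   --convert-links     rewrite links so the local copy works offline
#   --adjust-extension  save text/html pages (e.g. .cfm URLs) with a .html suffix
#   --page-requisites   also fetch the CSS, JS, and images each page needs
wget --mirror --convert-links --adjust-extension --page-requisites \
     https://example.com/
```

The --adjust-extension flag matters for a ColdFusion site in particular, since it renames pages served as text/html (such as .cfm URLs) to .html files.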


Try using httrack (or webhttrack/winhttrack, if you want a GUI) to spider the web site. It's free, fast, and reliable. It's also much more powerful than primitive downloaders like wget; httrack is designed for mirroring web sites.

Be aware that converting a dynamic site to static will lose you a lot of functionality (forms, search, anything driven by user input). It's also not always possible: a dynamic site can generate an effectively unlimited number of distinct pages, so a crawler can only capture the pages it can reach by following links.
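A minimal httrack invocation might look like this (the domain and output directory are placeholders, not from the question):

```shell
#!/bin/sh
# Hedged sketch: mirroring a site with httrack's command-line client.
# https://example.com/ is a placeholder domain; -O sets the output directory,
# and the "+..." pattern scopes the crawl to that domain so the spider
# doesn't wander off onto external sites.
httrack "https://example.com/" \
        -O ./example-mirror \
        "+*.example.com/*"
```

The webhttrack and winhttrack front-ends expose the same options through a GUI, which may be easier if the client needs to re-run the mirror themselves.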


It's been a long time since I used it, but webzip was quite good.

It is not free, but for $35.00, I think your client won't go broke.

A quick Google search for offline browsers turned up this and this, which look good.
