
Read HTML source to string

I hope you don't frown on me too much, but this should be answerable by someone fairly easily. I want to read a file on a website into a string, so I can extract information from it.

I just want a simple way to get the HTML source read into a string. After looking around for hours I see all these libraries and curl and stuff. All I need is the raw HTML data. I don't even need a definite answer. Just something that will help me refine my search.

Just to be clear, I want the raw code in a string I can manipulate; I don't need any parsing etc.


You need an HTTP client library; libcurl is one of many. You would then issue a GET request to a URL and read the response back however your chosen library provides it.

Here is an example to get you started. It is C, but I am sure you can work it out.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  CURLcode res;

  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");

    /* perform the request; the body goes to stdout by default */
    res = curl_easy_perform(curl);
    if(res != CURLE_OK)
      fprintf(stderr, "curl_easy_perform() failed: %s\n",
              curl_easy_strerror(res));

    /* always cleanup */
    curl_easy_cleanup(curl);
  }
  return 0;
}
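
By default libcurl writes the body it receives to standard output. To get it into a string, as the question asks, you register a write callback with CURLOPT_WRITEFUNCTION and hand it your string through CURLOPT_WRITEDATA. A minimal sketch of that, written in C++ since that is how the question is tagged (error handling trimmed):

#include <string>
#include <iostream>
#include <curl/curl.h>

/* libcurl hands the body to this callback one chunk at a time;
   we append each chunk to the std::string passed via CURLOPT_WRITEDATA. */
static size_t write_cb(char *data, size_t size, size_t nmemb, void *userp)
{
  std::string *out = static_cast<std::string *>(userp);
  out->append(data, size * nmemb);
  return size * nmemb;  /* tell libcurl we consumed everything */
}

int main()
{
  std::string html;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);
    if(curl_easy_perform(curl) == CURLE_OK)
      std::cout << html << std::endl;  /* the raw source, now in a string */
    curl_easy_cleanup(curl);
  }
  return 0;
}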

But you tagged this C++, so if you want a C++ wrapper for libcurl, use curlpp:

#include <iostream>
#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>

using namespace curlpp::options;

int main(int, char **)
{
  try
  {
    // That's all that is needed to do cleanup of used resources
    curlpp::Cleanup myCleanup;

    // Our request to be sent.
    curlpp::Easy myRequest;

    // Set the URL.
    myRequest.setOpt<Url>("http://example.com");

    // Send request and get a result.
    // By default the result goes to standard output.
    myRequest.perform();
  }

  catch(curlpp::RuntimeError & e)
  {
    std::cout << e.what() << std::endl;
  }

  catch(curlpp::LogicError & e)
  {
    std::cout << e.what() << std::endl;
  }

  return 0;
}
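
If you want the result in a string rather than on standard output, curlpp can redirect the response into any std::ostream. A sketch using std::ostringstream, assuming the WriteStream option works as curlpp's own examples show:

#include <sstream>
#include <iostream>
#include <curlpp/cURLpp.hpp>
#include <curlpp/Easy.hpp>
#include <curlpp/Options.hpp>

int main()
{
  try
  {
    curlpp::Cleanup myCleanup;
    curlpp::Easy myRequest;
    myRequest.setOpt<curlpp::options::Url>("http://example.com");

    // Send the response body to a string stream instead of stdout.
    std::ostringstream os;
    myRequest.setOpt(curlpp::options::WriteStream(&os));
    myRequest.perform();

    std::string html = os.str();  // the raw HTML, ready to manipulate
    std::cout << html.size() << " bytes received" << std::endl;
  }
  catch(curlpp::RuntimeError & e)
  {
    std::cerr << e.what() << std::endl;
  }
  catch(curlpp::LogicError & e)
  {
    std::cerr << e.what() << std::endl;
  }
  return 0;
}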


HTTP is built on top of TCP. If you know socket programming, you can write a simple networking application that opens a socket to the desired server and issues an HTTP GET command. Whatever the server responds with, you'll have to remove the HTTP headers that precede the actual document you want.
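
For the curious, here is a bare-bones sketch of that approach with POSIX sockets, assuming a plain http:// URL (no TLS). It speaks HTTP/1.0 to sidestep chunked transfer encoding, and splits headers from body at the first blank line:

#include <string>
#include <iostream>
#include <cstring>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main()
{
  // Resolve the host and open a TCP connection to port 80.
  addrinfo hints{}, *addrs;
  hints.ai_family = AF_UNSPEC;
  hints.ai_socktype = SOCK_STREAM;
  if(getaddrinfo("example.com", "80", &hints, &addrs) != 0) return 1;
  int sock = socket(addrs->ai_family, addrs->ai_socktype, addrs->ai_protocol);
  if(sock < 0 || connect(sock, addrs->ai_addr, addrs->ai_addrlen) < 0) return 1;
  freeaddrinfo(addrs);

  // Issue the GET; "Connection: close" makes the server end the stream.
  const char *req = "GET / HTTP/1.0\r\n"
                    "Host: example.com\r\n"
                    "Connection: close\r\n\r\n";
  send(sock, req, std::strlen(req), 0);

  // Read everything the server sends back.
  std::string response;
  char buf[4096];
  ssize_t n;
  while((n = read(sock, buf, sizeof buf)) > 0)
    response.append(buf, n);
  close(sock);

  // Headers end at the first blank line; the HTML body follows it.
  std::string::size_type pos = response.find("\r\n\r\n");
  std::cout << (pos == std::string::npos ? response : response.substr(pos + 4));
  return 0;
}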

If that sounds complicated, then just stick with libcurl.


If it is a hack, then just grab the source from the browser's "view source" and save it as a .txt file. Then you can open it with a normal file I/O stream.

  • all those pesky libraries are a hint that it is a common and non-trivial exercise to do it right... :)


If all you want to do is grab the entire HTML code without any kind of parsing or external libraries, my suggestion would be to copy the code into a string with an IO stream.

It is the simplest way I can think of, but be aware that it isn't the most efficient.
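
For completeness, reading the saved file back into a string is only a few lines of standard C++; page.html below is just a placeholder name for whatever you saved:

#include <fstream>
#include <sstream>
#include <string>
#include <iostream>

int main()
{
  std::ifstream file("page.html");  // the source you saved from the browser
  if(!file) return 1;

  std::ostringstream buffer;
  buffer << file.rdbuf();           // slurp the whole file at once
  std::string html = buffer.str();  // now manipulate it however you like

  std::cout << html.size() << " characters read" << std::endl;
  return 0;
}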
