
Getting URLs while crawling whose Content-Type is not text/html

I can get all the URLs whose Content-Type is text/html, but how can I check for URLs whose Content-Type is *not* text/html? For strings we can use the contains method, but there is nothing like notContains. Any suggestions are appreciated. Also:

The key variable contains:

Content-Type=[text/html; charset=ISO-8859-1]

Below is the code that checks for text/html. I also tried checking for content types that are not text/html, but that branch also prints the URLs whose Content-Type is text/html.

    try {
            URL url1 = new URL(url);
            System.out.println("URL:- " +url1);
            URLConnection connection = url1.openConnection();

            Map responseMap = connection.getHeaderFields();
            Iterator iterator = responseMap.entrySet().iterator();
            while (iterator.hasNext())
            {
                // Each entry stringifies as "Header-Name=[value1, value2]"
                String key = iterator.next().toString();

                if (key.contains("text/html") || key.contains("text/xhtml"))
                {
                    System.out.println(key);
                    // Content-Type=[text/html; charset=ISO-8859-1]
                    if (filters.matcher(key).find()){ // matcher() never returns null; test the match itself
                        System.out.println(url1);
                        try {
                            final File parentDir = new File("crawl_html");
                            parentDir.mkdir();
                            final String hash = MD5Util.md5Hex(url1.toString());
                            final String fileName = hash + ".txt";
                            final File file = new File(parentDir, fileName);
                            // Create the file if it does not exist
                            boolean success = file.createNewFile(); // creates crawl_html/<hash>.txt

                            System.out.println("hash:- " + hash);
                            System.out.println(file);
                                FileOutputStream fos = new FileOutputStream(file, true);
                                PrintWriter out = new PrintWriter(fos);
                                // Extract the text content with Apache Tika and write it to the file
                                Tika t = new Tika();
                                String content = t.parseToString(url1);
                                out.println("===============================================================");
                                out.println(url1);
                                out.println(key);
                                out.println(success);
                                out.println(content);

                                out.println("===============================================================");
                                out.close(); // also flushes and closes the underlying FileOutputStream
                        } catch (FileNotFoundException e) {
                            // TODO Auto-generated catch block
                            e.printStackTrace();
                        } catch (IOException e) {
                            // TODO Auto-generated catch block

                            e.printStackTrace();
                        } catch (TikaException e) {
                            // TODO Auto-generated catch block
                            e.printStackTrace();
                        }


                    }
                }
                else if (!connection.getContentType().startsWith("text/html")) // problem: this also prints records for text/html urls
                //else if (!key.contains("text/html"))
                {
                    if (filters.matcher(key).find()){ // matcher() never returns null; test the match itself
                     try {
                        final File parentDir = new File("crawl_media");
                        parentDir.mkdir();
                        final String hash = MD5Util.md5Hex(url1.toString());
                        final String fileName = hash + ".txt";
                        final File file = new File(parentDir, fileName);
                        // Create the file if it does not exist
                        boolean success = file.createNewFile(); // creates crawl_media/<hash>.txt

                        System.out.println("hash:- " + hash);
                        // Extract the text content with Apache Tika
                        Tika t = new Tika();
                        String content_media = t.parseToString(url1);

                        FileOutputStream fos = new FileOutputStream(file, true);
                        PrintWriter out = new PrintWriter(fos);

                        // Write text to file
                             out.println("===============================================================");
                             out.println(url1);
                             out.println(key);
                             out.println(success);
                             out.println(content_media);
                        //out.println("===============================================================");
                        out.close(); // also flushes and closes the underlying FileOutputStream
                     } catch (FileNotFoundException e) {
                         // TODO Auto-generated catch block
                         e.printStackTrace();
                     } catch (IOException e) {
                         // TODO Auto-generated catch block

                         e.printStackTrace();
                     } catch (TikaException e) {
                        // TODO Auto-generated catch block
                        e.printStackTrace();
                    }
                    }

                }



            }
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }



        System.out.println("=============");
    }   
}

One method is to check individually for each content type; for PDF, for example, it is application/pdf:

    if (key.contains("application/pdf"))

and in the same way for XML. But is there any method other than this?
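If you only have the stringified header entry (the shape the question's key variable shows), another option is to pull the bare media type out of that string once and compare it however you like, including negatively. A minimal sketch, assuming key always has the `Content-Type=[type; params]` shape shown above (mediaType is a hypothetical helper name):

```java
public class MediaTypeExtract {
    // Pulls the bare media type out of a header entry shaped like
    // "Content-Type=[text/html; charset=ISO-8859-1]".
    // Returns null if the string does not have that shape.
    static String mediaType(String key) {
        int open = key.indexOf('[');
        if (open < 0) return null;
        int end = key.indexOf(';', open);
        if (end < 0) end = key.indexOf(']', open);
        if (end < 0) return null;
        return key.substring(open + 1, end).trim();
    }

    public static void main(String[] args) {
        String key = "Content-Type=[text/html; charset=ISO-8859-1]";
        String type = mediaType(key);
        System.out.println(type);                      // text/html
        System.out.println(!"text/html".equals(type)); // false -> it IS html
    }
}
```

With the type isolated, "not text/html" is just a negated equals or startsWith, with no need for a notContains method.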


Would this help?

    if (!connection.getContentType().startsWith("text/html"))
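Probably, with two caveats: getContentType() can return null when the server sends no Content-Type header, and the value usually carries a charset suffix, which is why startsWith (not equals) is the right comparison. A minimal sketch of the null-safe negative check (isNonHtml is an illustrative name):

```java
public class ContentTypeCheck {
    // True when the raw Content-Type value is NOT HTML.
    // Treats a missing (null) header as non-HTML.
    static boolean isNonHtml(String contentType) {
        return contentType == null || !contentType.startsWith("text/html");
    }

    public static void main(String[] args) {
        System.out.println(isNonHtml("text/html; charset=ISO-8859-1")); // false
        System.out.println(isNonHtml("application/pdf"));               // true
        System.out.println(isNonHtml(null));                            // true
    }
}
```

In the crawler you would pass connection.getContentType() to such a check instead of string-matching the whole stringified header entry.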


What is wrong with using:

if (key.contains("text/html") || key.contains("text/xhtml")) {
  //Do stuff
} else if (key.contains("application/pdf")) {
  //Do other stuff
} else {
  //All other cases
}

Since the content type varies between formats, you probably need an explicit case for each content type you care about. When a generic content type is encountered, the generic method (the else branch) should be sufficient, no? The Strategy Pattern may be of use to you here.
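For instance, the per-type strategies could live in a map keyed by media type, with a fallback for everything else. A rough sketch, with illustrative handler names and placeholder URLs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ContentTypeDispatch {
    public static void main(String[] args) {
        // One handler per media type, plus a default for everything else.
        Map<String, Consumer<String>> handlers = new HashMap<>();
        handlers.put("text/html", u -> System.out.println("html: " + u));
        handlers.put("application/pdf", u -> System.out.println("pdf: " + u));
        Consumer<String> fallback = u -> System.out.println("other: " + u);

        // Placeholder (type, url) pairs standing in for crawled responses.
        String[][] pages = {
            {"text/html", "http://a.example"},
            {"application/pdf", "http://b.example"},
            {"image/png", "http://c.example"},
        };
        for (String[] p : pages) {
            handlers.getOrDefault(p[0], fallback).accept(p[1]);
        }
    }
}
```

This replaces the growing if/else-if chain with one lookup, and adding support for a new content type becomes a single handlers.put call.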

My apologies if I misunderstood your issue. Can you provide an example printout of the different values key takes during a test run (the line in your code where key is assigned)?

