Search a list of terms from this website, and do not stop even if some of the terms are missing

I am trying to use the RCurl package to get data from the GeneCards database:

http://www-bimas.cit.nih.gov/cards//

I read a wonderful solution in a previously posted question:

How can I use R (Rcurl/XML packages ?!) to scrape this webpage?

However, my problem is different enough that I need further support from experts. Instead of extracting all the links from the webpage, I have a list of ~1000 genes in mind. They are in the form of gene symbols (some of the gene symbols can be found on the webpage, some of them are new to the database). Here is part of my list of genes:

TP53 SOD1 EGFR C2d AKT2 NFKB1

C2d is not in the database, so when I do the search manually I see: "Sorry, there is no GeneCard for C2d".

When I apply the solution posted in the previous question to my analysis:

How can I use R (Rcurl/XML packages ?!) to scrape this webpage?

(1) I first read in the list.

(2) I then use the get_structs function from the previous solution to substitute each gene symbol in the list into the following URL: http://www-bimas.cit.nih.gov/cgi-bin/cards/carddisp.pl?gene=genesymbol (see the sketch after these steps).

(3) Scrape the information I need for each gene in the list, using the get_data_url function from the previous message.
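
For illustration, here is a minimal sketch of step (2); the helper name card_url is my own and only shows the URL substitution, the actual scraping is done by the get_structs/get_data_url functions from the linked question:

card_url <- function(gene) {
  # substitute one gene symbol into the GeneCards query URL
  paste0("http://www-bimas.cit.nih.gov/cgi-bin/cards/carddisp.pl?gene=", gene)
}

genes <- c("TP53", "SOD1", "EGFR", "C2d", "AKT2", "NFKB1")
urls  <- sapply(genes, card_url)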

It works for TP53, SOD1, and EGFR, but when the search comes to C2d, the process stops.

As I have ~1000 genes, I am sure some of them are missing from the webpage.

How can I automatically get a modified gene list telling me which of the ~1000 genes are missing, so that I can use the same approach as in the previous question to get all the data I need, based on a new gene list containing only the genes that EXIST on the webpage?

Or is there a method to tell R to skip the missing items and continue scraping until the end of the list, while marking the missing items in the final results?

To facilitate the discussion, I have made a pseudo input file using the scripts from the previous question, for the same webpage they used:

u <- c("Aero_pern", "Ppate", "didnotexist", "Sbico")

library(RCurl)

base_url  <- "http://gtrnadb.ucsc.edu/"
base_html <- getURLContent(base_url)[[1]]
links     <- strsplit(base_html, "a href=")[[1]]

# Scrape the structure page for one id 'u'
# (parse_genomes is defined in the solution to the linked question)
get_structs <- function(u) {
  struct_url <- paste(base_url, u, "/", u, "-structs.html", sep = "")
  raw_data   <- getURLContent(struct_url)   # errors out if the page does not exist
  s_split1   <- strsplit(raw_data, "<PRE>")[[1]]
  all_data   <- s_split1[seq(3, length(s_split1))]
  data_list  <- lapply(all_data, parse_genomes)
  # tag every record with the id it came from
  for (d in 1:length(data_list)) {
    data_list[[d]] <- append(data_list[[d]], u)
  }
  return(data_list)
}

I guess the problem can be solved by modifying the get_structs script above, or perhaps the ifelse function may help, but I cannot figure out how to modify it further. Please comment.


You can enclose your function call inside try() so that the process won't break if you get errors. Usually this will let you loop over the problematic cases, returning an error message instead of breaking your process, e.g.:

dat <- list()
for (i in 1:length(u)) {
  # try() catches the error for missing pages instead of stopping the loop
  dat[[i]] <- try(get_structs(u[i]))
}
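
Afterwards you can check which elements failed, e.g. by testing for the "try-error" class; a minimal sketch, assuming the u vector above (the names failed, missing, and results are just for illustration):

# flag the elements where try() caught an error (e.g. gene/page not found)
failed  <- sapply(dat, inherits, what = "try-error")
missing <- u[failed]            # items with no page, to report separately
results <- dat[!failed]         # successfully scraped entries
names(results) <- u[!failed]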
