Pulling historic analyst opinions from yahoo finance in R

Yahoo Finance has data on historic analyst opinions for stocks. I'm interested in pulling this data into R for analysis, and here is what I have so far:

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    # read every HTML table on the quote page; the analyst-opinion
    # table is currently the 11th one
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""), stringsAsFactors = FALSE)
    Data <- tables[[11]]
    # parse dates of the form "%d-%b-%y" and index the remaining columns by them
    Data$Date <- as.Date(Data$Date, '%d-%b-%y')
    Data <- xts(Data[, -1], order.by = Data[, 1])
    Data
}

getOpinions('AAPL')

I'm worried that this code will break if the position of the table (currently 11) changes, but I can't think of an elegant way to detect which table has the data I want. I tried the solution posted here, but it doesn't seem to work for this problem.

Is there a better way to scrape this data that is less likely to break if yahoo re-arranges their site?

Edit: it looks like there's already a package (fImport) that does this:

library(fImport)
yahooBriefing("AAPL")

Here is their solution, which doesn't return an xts object, and will probably break if the page layout changes (the yahooKeystats function in fImport is already broken):

function (query, file = "tempfile", source = NULL, save = FALSE, 
    try = TRUE) 
{
    if (is.null(source)) 
        source = "http://finance.yahoo.com/q/ud?s="
    if (try) {
        z = try(yahooBriefing(query, file, source, save, try = FALSE))
        if (class(z) == "try-error" || class(z) == "Error") {
            return("No Internet Access")
        }
        else {
            return(z)
        }
    }
    else {
        url = paste(source, query, sep = "")
        download.file(url = url, destfile = file)
        x = scan(file, what = "", sep = "\n")
        x = x[grep("Briefing.com", x)]
        x = gsub("</", "<", x, perl = TRUE)
        x = gsub("/", " / ", x, perl = TRUE)
        x = gsub(" class=.yfnc_tabledata1.", "", x, perl = TRUE)
        x = gsub(" align=.center.", "", x, perl = TRUE)
        x = gsub(" cell.......=...", "", x, perl = TRUE)
        x = gsub(" border=...", "", x, perl = TRUE)
        x = gsub(" color=.red.", "", x, perl = TRUE)
        x = gsub(" color=.green.", "", x, perl = TRUE)
        x = gsub("<.>", "", x, perl = TRUE)
        x = gsub("<td>", "@", x, perl = TRUE)
        x = gsub("<..>", "", x, perl = TRUE)
        x = gsub("<...>", "", x, perl = TRUE)
        x = gsub("<....>", "", x, perl = TRUE)
        x = gsub("<table>", "", x, perl = TRUE)
        x = gsub("<td nowrap", "", x, perl = TRUE)
        x = gsub("<td height=....", "", x, perl = TRUE)
        x = gsub("&amp;", "&", x, perl = TRUE)
        x = unlist(strsplit(x, ">"))
        x = x[grep("-...-[90]", x, perl = TRUE)]
        nX = length(x)
        x[nX] = gsub("@$", "", x[nX], perl = TRUE)
        x = unlist(strsplit(x, "@"))
        x[x == ""] = "NA"
        x = matrix(x, byrow = TRUE, ncol = 9)[, -c(2, 4, 6, 8)]
        x[, 1] = as.character(strptime(x[, 1], format = "%d-%b-%y"))
        colnames(x) = c("Date", "ResearchFirm", "Action", "From", 
            "To")
        x = x[nrow(x):1, ]
        X = as.data.frame(x)
    }
    X
}
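If you do want an xts object anyway, the data.frame it returns can be converted after the fact. A minimal sketch, assuming the column layout shown in the source above (a character Date column in "%Y-%m-%d" form) still holds:

library(fImport)
library(xts)

briefing <- yahooBriefing("AAPL")
# the Date column comes back as character "YYYY-MM-DD", so as.Date() parses it directly
briefing.xts <- xts(briefing[, -1], order.by = as.Date(briefing$Date))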


Here is a hack you can use. Inside your function, add the following:

# GET THE POSITION OF TABLE WITH MAX. ROWS
position = which.max(sapply(tables, NROW))
Data     = tables[[position]]

This will work as long as the longest table on the page is what you seek.
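Dropped into your function, it looks like this (the same code as in the question, with the hard-coded tables[[11]] replaced by the position lookup):

getOpinions <- function(symbol) {
    require(XML)
    require(xts)
    yahoo.URL <- "http://finance.yahoo.com/q/ud?"
    tables <- readHTMLTable(paste(yahoo.URL, "s=", symbol, sep = ""),
                            stringsAsFactors = FALSE)
    # GET THE POSITION OF TABLE WITH MAX. ROWS
    position <- which.max(sapply(tables, NROW))
    Data <- tables[[position]]
    Data$Date <- as.Date(Data$Date, '%d-%b-%y')
    xts(Data[, -1], order.by = Data[, 1])
}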

If you want to make it a little more robust, here is another approach:

# GET POSITION OF TABLE WHOSE HEADER NAMES THE RESEARCH FIRM COLUMN
position = which(sapply(tables, function(tab) 'Research Firm' %in% names(tab)))
Data     = tables[[position[1]]]
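And if you want belt and braces, check the column names first and fall back to the longest table. A sketch; findOpinionTable is just an illustrative helper name, not part of any package:

findOpinionTable <- function(tables) {
    # PREFER THE TABLE WHOSE HEADER NAMES THE 'Research Firm' COLUMN
    hit <- which(sapply(tables, function(tab) 'Research Firm' %in% names(tab)))
    # OTHERWISE FALL BACK TO THE LONGEST TABLE ON THE PAGE
    if (length(hit) == 0) hit <- which.max(sapply(tables, NROW))
    tables[[hit[1]]]
}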
