
Subsetting a data frame to the top n rows for each group, ordered by a variable

I would like to subset a data frame, keeping the top n rows of each group defined by one variable, sorted in descending order by another variable. An example will make this clearer:

    d1 <- data.frame(Gender = c("M", "M", "F", "F", "M", "M", "F", "F"),
                     Age = c(15, 38, 17, 35, 26, 24, 20, 26))

I would like to get 2 rows for each Gender, sorted descending on Age. The desired output is:

Gender  Age  
F   35  
F   26  
M   38  
M   26  

I looked at order(), sort(), and other solutions here, but could not find an appropriate one for this problem. I appreciate your help.


One solution using ddply() from plyr

require(plyr)
ddply(d1, "Gender", function(x) head(x[order(x$Age, decreasing = TRUE) , ], 2))


With the data.table package:

require(data.table)
dt1 <- data.table(d1)  # to speed things up you can add setkey(dt1, Gender)
dt1[, .SD[order(Age, decreasing = TRUE)[1:2]], by = Gender]
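
An equivalent formulation, if you prefer taking head(.SD) after ordering the whole table (a sketch of the same idea, not a change to the answer above):

require(data.table)
dt1 <- data.table(d1)
# Reorder by Age descending first, then keep the first 2 rows of each Gender group
dt1[order(Age, decreasing = TRUE), head(.SD, 2), by = Gender]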


I'm sure there is a better answer, but here is one way:

require(plyr)
ddply(d1, c("Gender", "-Age"))[c(1:2, 5:6),-1]

If you have a larger data frame than the one you provided here and don't want to inspect visually which rows to select, just use this:

new.d1 <- ddply(d1, c("Gender", "-Age"))[, -1]
pos <- match("M", new.d1$Gender)  # pos will give the index of the first "M" entry
new.d1[c(1:2, pos:(pos + 1)), ]
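
If there may be more groups than just "F" and "M", a base-R split/apply/combine sketch avoids hard-coding row positions altogether (an alternative to the match() approach above, not part of it):

# Sort once by Age descending, then take the first 2 rows from each Gender group
sorted <- d1[order(d1$Age, decreasing = TRUE), ]
do.call(rbind, lapply(split(sorted, sorted$Gender), head, n = 2))
# Row names may carry a group prefix (e.g. "F.4"); reset them if needed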


It is even easier than that if you just want to do the sorting:

d1 <- transform(d1[order(d1$Age, decreasing=TRUE), ], Gender=as.factor(Gender))

you can then call:

require(plyr)
d1 <- ddply(d1, .(Gender), head, n=2)

to subset the top two of each Gender subgroup.
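
On the sample data, the two steps above should produce (row names may differ):

d1
#   Gender Age
# 1      F  35
# 2      F  26
# 3      M  38
# 4      M  26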


I have a suggestion if you need, for example, the first 2 females and the first 3 males:

library(plyr)
m <- d1[order(d1$Age, decreasing = TRUE), ]
h <- mapply(function(x, y) head(x, y), split(m$Age, m$Gender), y = c(2, 3))
ldply(h, data.frame)

You just need to change the column names of the final data frame; a sketch of the renaming is shown below.
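
Here ldply() generates its own column names, so the replacement names are up to you (h is the list computed above):

out <- ldply(h, data.frame)       # h as computed above
names(out) <- c("Gender", "Age")  # replace the auto-generated names
out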


# Sort by Gender, then by Age descending
d1 <- d1[order(d1$Gender, -d1$Age), ]
# ave() with seq_along ranks each row within its Gender group (1 = highest Age
# after the sort above); keeping ranks <= 2 retains the top two rows per group
d1 <- d1[ave(d1$Age, d1$Gender, FUN = seq_along) <= 2, ]

I had a similar problem and found this method really fast when used on a data frame with 1.5 million records.
