
Expand Categorical Column in a Time Series to Multiple Per Second Count Columns

What is the best way to make the following transformation? There are two parts to this conversion. The first is to convert the speed to a per second mean. The second is to take the categorical column and transform that into multiple columns -- one column per categorical value where the value is the count of occurrences per second. For example:

Input (xts A):

Time(PosixCT), Observed Letter, Speed
2011/01/11 12:12:01.100,A,1
2011/01/11 12:12:01.200,A,2
2011/01/11 12:12:01.400,B,3
2011/01/11 12:12:01.800,C,4
2011/01/11 12:12:02.200,D,2
2011/01/11 12:12:02.200,A,7

Output: (xts B)

Time, A_Per_Second, B_Per_Second, C_Per_Second, D_Per_Second, Aggregate_Speed
2011/01/11 12:12:01,2,1,1,0,2.5
2011/01/11 12:12:02,1,0,0,1,4.5

I am looking to do this in such a way that I don't need to know what all the categories are. Basically I am trying to collapse the time to per-second resolution without losing any of my categorical data, while summarizing the numeric data as a per-second mean.


I don't often use data in time series format (i.e. xts), so I provided a solution using data in data.frame format.

(Note also that I have changed the column names of this data frame to single words to make them easier to work with. I post the structure of my data frame at the end of this answer.)

I make use of two packages:

  1. Hmisc for its trunc methods for POSIXt classes
  2. plyr for some magic to split, apply and combine data

The code:

# work with A as a plain data.frame
A <- as.data.frame(A)

library(Hmisc)
# truncate the timestamps to whole seconds
A$Date <- trunc(A$Date, units="secs")
A

library(plyr)
# mean speed per second and observed letter
ddply(A, .(Date, Observed), summarise, Speed=mean(Speed))

The results are in a slightly different format than you specified, but it should be easy to reshape them into the wide format you asked for; see the sketch after the output below.

                 Date Observed Speed
1 2011-01-11 12:12:01        A   1.5
2 2011-01-11 12:12:01        B   3.0
3 2011-01-11 12:12:01        C   4.0
4 2011-01-11 12:12:02        A   7.0
5 2011-01-11 12:12:02        D   2.0
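For completeness, here is one possible way to get from this long format to the wide per-second count layout in the question. This sketch is not part of the original answer: it assumes the truncated data frame A from above, uses the reshape2 package, and the intermediate names (long, wide, speed) are just illustrative.

library(plyr)
library(reshape2)

# counts and mean speed per (second, letter) in long format
long <- ddply(A, .(Date, Observed), summarise,
              Count = length(Speed), Mean_Speed = mean(Speed))
long$Date <- as.POSIXct(long$Date)   # POSIXlt -> POSIXct so dcast/merge behave

# one count column per observed letter; missing combinations become 0
wide <- dcast(long, Date ~ Observed, value.var = "Count", fill = 0)
names(wide)[-1] <- paste0(names(wide)[-1], "_Per_Second")

# overall per-second mean speed, joined back on
speed <- ddply(A, .(Date), summarise, Aggregate_Speed = mean(Speed))
speed$Date <- as.POSIXct(speed$Date)
merge(wide, speed, by = "Date")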

Here is the dput result of A:

A <- structure(list(Date = structure(list(sec = c(1, 1, 1, 1, 2, 2
), min = c(12L, 12L, 12L, 12L, 12L, 12L), hour = c(12L, 12L, 
12L, 12L, 12L, 12L), mday = c(11L, 11L, 11L, 11L, 11L, 11L), 
    mon = c(0L, 0L, 0L, 0L, 0L, 0L), year = c(111L, 111L, 111L, 
    111L, 111L, 111L), wday = c(2L, 2L, 2L, 2L, 2L, 2L), yday = c(10L, 
    10L, 10L, 10L, 10L, 10L), isdst = c(0L, 0L, 0L, 0L, 0L, 0L
    )), .Names = c("sec", "min", "hour", "mday", "mon", "year", 
"wday", "yday", "isdst"), class = c("POSIXlt", "POSIXt"), tzone = c("", 
"GMT", "BST")), Observed = structure(c(1L, 1L, 2L, 3L, 4L, 1L
), .Label = c("A", "B", "C", "D"), class = "factor"), Speed = c(1L, 
2L, 3L, 4L, 2L, 7L)), .Names = c("Date", "Observed", "Speed"), row.names = c(NA, 
-6L), class = "data.frame")


Here is a zoo solution. First we read in the data splitting it on column 2. Then we truncate the times to seconds and calculate the counts and sums. Finally we put it all together.

Lines <- "Time(PosixCT), Observed Letter, Speed
2011/01/11 12:12:01.100,A,1
2011/01/11 12:12:01.200,A,2
2011/01/11 12:12:01.400,B,3
2011/01/11 12:12:01.800,C,4
2011/01/11 12:12:02.200,D,2
2011/01/11 12:12:02.200,A,7"

library(zoo)
z <- read.zoo(textConnection(Lines), header = TRUE, sep = ",", split = 2, tz = "")

tt <- as.POSIXct(trunc(time(z), "sec"))
z.knt <- aggregate(z, tt, function(x) sum(!is.na(x)))
z.sum <- aggregate(z, tt, sum, na.rm = TRUE)

cbind(z.knt, Speed = rowSums(z.sum) / rowSums(z.knt))

The result looks like this:

                    A B C D Speed
2011-01-11 12:12:01 2 1 1 0   2.5
2011-01-11 12:12:02 1 0 0 1   4.5
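Since the question asked for an xts object, the combined zoo result could be converted at the end. This step is not in the original answer, just a possible addition:

library(xts)
# convert the combined zoo object to xts
B <- as.xts(cbind(z.knt, Speed = rowSums(z.sum) / rowSums(z.knt)))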


Here's the structure I'm using for A. Note that the "numbers" are really character, since you can't mix types in a matrix.

A <- structure(c("A", "A", "B", "C", "D", "A", "1", "2", "3", "4", 
"2", "7"), .Dim = c(6L, 2L), .Dimnames = list(NULL, c("Observed_Letter", 
"Speed")), index = structure(c(1294769521.1, 1294769521.2, 1294769521.4, 
1294769521.8, 1294769522.2, 1294769522.2), tzone = "", tclass = c("POSIXct", 
"POSIXt")), .indexCLASS = c("POSIXct", "POSIXt"), .indexTZ = "",
class = c("xts", "zoo"))

This function will clean up each of the categories.

clean <- function(x) {
  # construct xts object with only Speed and convert it to numeric
  out <- xts(as.numeric(x$Speed),index(x))
  # add column names
  colnames(out) <- paste(x$Observed_Letter[1],"_Per_Second",sep="")
  out  # return object
}

Here's the guts of what you need. Note the need to explicitly call split.default, since there is a split method for xts objects that splits by time. You don't strictly need align.time either, but it rounds each period up to the whole second; otherwise your index will be the last actual timestamp within each second.

# split by Observed_Letter, apply clean() to each list element, and merge results
combA <- do.call(merge, lapply(split.default(A, A$Observed_Letter), clean))
alignA <- align.time(combA,1)
# get the last obs for each 1-second period (for period.apply)
EPalignA <- endpoints(combA, "seconds")
# count the number of non-NA observations by column for each 1-second period
counts <- period.apply(alignA, EPalignA, function(x) colSums(!is.na(x)))
# sum the non-NA observations for each column and 1-second period
values <- period.apply(alignA, EPalignA, colSums, na.rm=TRUE)
# calculate aggregate speed
B <- counts
B$Aggregate_Speed <- rowSums(values)/rowSums(counts)
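Optionally, if you prefer the index to mark the beginning of each second (as in the example output) rather than the rounded-up timestamp produced by align.time, one simple adjustment (not part of the answer above) is to shift the index back by one second:

# shift the rounded-up index back so 12:12:02 labels the 12:12:01 second, etc.
index(B) <- index(B) - 1
B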
