Scaling/shifting experimental data vectors using Mathematica
I have some vectors of experimental data that I need to massage, for example:
{
{0, 61237, 131895, 194760, 249935},
{0, 61939, 133775, 197516, 251018},
{0, 60919, 131391, 194112, 231930},
{0, 60735, 131015, 193584, 249607},
{0, 61919, 133631, 197186, 250526},
{0, 61557, 132847, 196143, 258687},
{0, 61643, 133011, 196516, 249891},
{0, 62137, 133947, 197848, 251106}
}
Each vector is the result of one run and consists of five numbers, which are times at which an object passes each of five sensors. Over the measurement interval the object's speed is constant (the sensor-to-sensor intervals are different because the sensor spacings are not uniform). From one run to the next the sensors' spacing remains the same, but the object's speed will vary a bit from one run to the next.
If the sensors were perfect, each vector ought to simply be a scalar multiple of any other vector (in proportion to the ratio of their speeds). But in reality each sensor will have some "jitter" and trigger early or late by some small random amount. I am trying to analyze how good the sensors themselves are, i.e. how much "jitter" there is in the measurements they give me.
So I think I need to do the following. Each vector must be scaled, and then shifted (adding or subtracting a fixed amount to each of its five elements). The StandardDeviation
of each column will then describe the amount of "noise" or "jitter" in that sensor. The amount each vector is scaled, and the amount each vector is shifted, have to be chosen to minimize the standard deviations of the columns.
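In Mathematica terms, I imagine something like the following sketch (my own guess, not from any library): take the Mean of the runs as a reference profile, fit a per-run scale and shift against it by least squares with LeastSquares, then read off the column standard deviations of the adjusted runs. The helper name adjust is just something I made up:

dat = {{0, 61237, 131895, 194760, 249935}, {0, 61939, 133775, 197516, 251018},
       {0, 60919, 131391, 194112, 231930}, {0, 60735, 131015, 193584, 249607},
       {0, 61919, 133631, 197186, 250526}, {0, 61557, 132847, 196143, 258687},
       {0, 61643, 133011, 196516, 249891}, {0, 62137, 133947, 197848, 251106}};
ref = Mean[dat];  (* reference profile to fit each run against *)
adjust[v_] := Module[{m, ab},
  m = Transpose[{v, ConstantArray[1, Length[v]]}];  (* design matrix for a*v + b *)
  ab = LeastSquares[m, ref];                        (* best-fit {scale, shift} *)
  m . ab];                                          (* the adjusted run *)
adjusted = adjust /@ dat;
N[StandardDeviation /@ Transpose[adjusted]]  (* per-sensor spread after adjustment *)

I don't know whether this is the standard way to pose the problem, though.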
It seemed to me that Mathematica probably has a good toolkit for getting this done; in fact I thought I might have found the answer with Standardize[],
but it seems to be oriented towards processing a list of scalars, not a list of lists like I have (or at least I can't figure out how to apply it to my case here).
So I am looking for some hints toward which library function(s) I might use to solve this problem, or perhaps the hint I need to craft the algorithm myself. Perhaps part of my problem is that I can't figure out where to look -- is what I have here a "signal processing" problem, a data manipulation or data mining problem, a minimization problem, or maybe a relatively standard statistical function that I simply haven't heard of before?
(As a bonus I would like to be able to control the weighting function used to optimize this scaling/shifting; e.g. in my data above I suspect that sensor#5 is having problems so I would like to fit the data to only consider the SDs of sensors 1-4 when doing the scaling/shifting)
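For example, I imagine the weighting could work by fitting the scale/shift on sensors 1-4 only but still applying it to all five columns; this hypothetical variant of the sketch above (the name adjust4 is made up) is roughly what I have in mind:

adjust4[v_] := Module[{m, ab},
  m = Transpose[{v, ConstantArray[1, Length[v]]}];
  ab = LeastSquares[m[[;; 4]], Mean[dat][[;; 4]]];  (* fit on sensors 1-4 only *)
  m . ab];                                          (* apply to all five sensors *)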
I can't comment much on your algorithm itself, as data analysis is not my forte. However, from what I understand, you're trying to characterize the timing variations in each sensor. Since the data from a single sensor is in a single column of your matrix, I'd suggest transposing it and mapping Standardize
onto each set of data. In other words,
dat = (* your data *)
Standardize /@ Transpose[dat]
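so that each sensor's column is centered and rescaled to unit variance; transposing again recovers the original row-per-run layout:

Transpose[Standardize /@ Transpose[dat]]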
To put it back in columnar form, Transpose
the result. To exclude your last sensor from this process, simply use Part
([[ ]]) and Span
(;;)
Standardize /@ Transpose[dat][[ ;; -2 ]]
Or, Most
Standardize /@ Most[Transpose[dat]]
Thinking about it, I think you're going to have a hard time separating out the timing jitter from variation in velocity. Can you intentionally vary the velocity?