
Time Series Prediction via Neural Networks

I have been working with neural networks for various purposes lately. I have had great success with digit recognition, XOR, and various other easy, hello-world-ish applications.

I would like to tackle the domain of time series estimation. I do not have a university account at the moment to read all the IEEE/ACM papers on the topic (for free), nor can I find many resources detailing the use of ANNs for time series forecasting.

I would like to know if anyone has any suggestions or can recommend any resources on using ANNs for forecasting from time series data.

I would assume that to train the NN, you would feed in a few immediately preceding time steps as inputs, with the expected output being the next time step (for example, inputs of the values at t-5, t-4, t-3, t-2, and t-1 should produce the value at time t as the output), then slide the window down some number of time steps and do it all again.

Can anyone confirm this or comment on it? I would appreciate it!


I think you've got the basic idea: a "sliding window" approach, where a network is trained to use the last k values of a series (T(n-k), ..., T(n-1)) to predict the current value (T(n)).
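For concreteness, here is a minimal sketch of that windowing step in Python (NumPy only; the make_windows name and the toy sine-wave series are my own illustrations, not anything from the question):

```python
import numpy as np

def make_windows(series, k):
    """Slice a 1-D series into (X, y) training pairs: each row of X
    holds k consecutive values, and y is the value that follows them."""
    X = np.array([series[i:i + k] for i in range(len(series) - k)])
    y = np.array(series[k:])
    return X, y

# Toy data: a noisy sine wave, windows of length 5.
t = np.linspace(0, 20, 500)
series = np.sin(t) + 0.1 * np.random.randn(500)
X, y = make_windows(series, 5)
print(X.shape, y.shape)  # (495, 5) (495,)
```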

There are a lot of ways you can do this, however. For example:

  • How big should that window be?
  • Should the data be preprocessed in any way (e.g. to remove outliers)?
  • What network configuration (e.g. # of hidden nodes, # of layers) and algorithm should be used?

Often people end up figuring out the best way to learn from their particular data by trial and error.
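As a rough sketch of what that trial-and-error loop can look like (assuming scikit-learn's MLPRegressor, plus the make_windows helper and toy series from the sketch above; the candidate window sizes and layer widths are arbitrary):

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Hold out the final 20% of each windowed dataset for evaluation;
# time series splits should respect ordering, so no shuffling.
best = None
for k in (3, 5, 10, 20):
    X, y = make_windows(series, k)
    split = int(0.8 * len(X))
    for hidden in (5, 10, 20):
        net = MLPRegressor(hidden_layer_sizes=(hidden,),
                           max_iter=2000, random_state=0)
        net.fit(X[:split], y[:split])
        mse = mean_squared_error(y[split:], net.predict(X[split:]))
        if best is None or mse < best[0]:
            best = (mse, k, hidden)

print("best MSE %.4f with window=%d, hidden=%d" % best)
```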

There are a fair number of publicly accessible papers out there about this stuff. Start with these, then look at their citations and the papers that cite them via Google Scholar, and you should have plenty to read:

  • Frank, R. J., Davey, N., and Hunt, S. P. Time Series Prediction and Neural Networks. Journal of Intelligent and Robotic Systems, 2001, Volume 31, Issue 1, pp. 91-103.
  • Connor, J. T., Martin, R. D., and Atlas, L. E. Recurrent Neural Networks and Robust Time Series Prediction. IEEE Transactions on Neural Networks, March 1994, Volume 5, Issue 2, pp. 240-254.


There is a kind of neural network called a recurrent neural network (RNN). One advantage of these models is that you do not have to define a sliding window for the input examples. A variant of RNNs known as Long Short-Term Memory (LSTM) can potentially take into account many instances from previous time steps, and a "forget gate" is used to allow or disallow remembering results from earlier time steps.
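A minimal sketch of such a model, assuming Keras (the layer sizes and training settings are arbitrary, and it reuses make_windows and the toy series from the earlier sketches; here the window is only a batching convenience, not a hand-picked memory limit):

```python
import numpy as np
from tensorflow import keras

X, y = make_windows(series, 20)
X = X[..., np.newaxis]  # LSTM expects (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(20, 1)),  # recurrent layer with forget gates
    keras.layers.Dense(1),                       # one-step-ahead prediction
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
```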


Technically this is the same as your digit recognition: the network recognizes something and returns what it was.

Well, now your inputs are the previous steps (T-5, ..., T-1), and your output or outputs are the predicted steps (T0, T1, ...).

The mechanics inside the ANN are the same: you train each layer to detect features, correcting its reconstruction so that the output looks like what is actually going to happen.

(For more on what I mean, see this tech talk.)
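If you want to predict several steps ahead with a one-step model, one common trick is iterated forecasting: feed each prediction back in as the newest input. A sketch, assuming a model with a scikit-learn-style predict() trained on windows of length k (the forecast name is my own):

```python
import numpy as np

def forecast(net, history, k, steps):
    """Iterated one-step forecasting: predict the next value, slide it
    into the window, and repeat to reach further into the future."""
    window = list(history[-k:])
    predictions = []
    for _ in range(steps):
        nxt = float(net.predict(np.array(window).reshape(1, -1))[0])
        predictions.append(nxt)
        window = window[1:] + [nxt]
    return predictions

# e.g. forecast(net, series, k, steps=10) for a net trained on windows of length k
```

Note that errors compound as predictions are fed back in, so iterated forecasts degrade as the horizon grows.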
