Java + Hadoop + NoSQL (what combinations to use)
I am new to this, and my requirement is as follows:
I want to process a huge dataset of movie ratings (in text format), store it in some NoSQL database, and then do some processing to recommend movies given one particular movie. So I need speed. I think Hadoop would help me with this, and keeping the data in a NoSQL database would further improve speed. I would like to know if some other approach is well known, and what combinations are used with Java.
Thank you
How huge is huge? You might want to check Apache Mahout. It has very efficient data structures for exactly this purpose: storing and processing sparse data for collaborative-filtering algorithms. It will handle a dataset of, say, 10 million ratings on a moderately sized machine, and if your dataset starts to grow beyond one machine, it supports splitting the processing with Hadoop.
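To give a feel for it, here is a minimal, non-distributed sketch using Mahout's Taste recommender API for item-to-item recommendations. The file name ratings.csv, its userID,itemID,rating CSV format, and the item ID 42 are assumptions for illustration, not anything from your question:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class MovieSimilarityExample {
    public static void main(String[] args) throws IOException, TasteException {
        // Assumed input: a CSV file with lines of the form userID,itemID,rating
        DataModel model = new FileDataModel(new File("ratings.csv"));

        // Item-item similarity computed from the rating data
        ItemSimilarity similarity = new PearsonCorrelationSimilarity(model);

        // Item-based recommender: "movies similar to this movie"
        GenericItemBasedRecommender recommender =
                new GenericItemBasedRecommender(model, similarity);

        // Find the 10 movies most similar to the (hypothetical) movie with ID 42
        List<RecommendedItem> similarMovies = recommender.mostSimilarItems(42L, 10);
        for (RecommendedItem item : similarMovies) {
            System.out.println(item.getItemID() + " : " + item.getValue());
        }
    }
}
```

This runs on a single machine; once the data outgrows that, the same kind of item-similarity computation can be moved to Mahout's Hadoop-based jobs.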
If you're wondering about which NoSQL data store to use, this post might help.