
Hadoop Map/Reduce - simple use example to do the following

I have a MySQL database where I store a BLOB (which contains a JSON object) and an ID (for that JSON object). The JSON object contains a lot of different information, say "city:Los Angeles" and "state:California".

There are about 500k such records for now, but the number is growing, and each JSON object is quite big.

My goal is to do real-time searches in the MySQL database. Say, I want to find all JSON objects which have "state" set to "California" and "city" set to "San Francisco".

I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, checks them against the given search criteria, and returns the IDs of those which qualify.
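For concreteness, here is a rough sketch of the kind of filtering mapper I have in mind, assuming the rows have first been exported from MySQL to HDFS as one id<TAB>json line per record; the field names and the naive substring check are only illustrative:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Minimal filtering mapper: each input line is assumed to be "id<TAB>json",
 * exported from MySQL beforehand. Emits the ID of every record whose JSON
 * matches the (hard-coded, illustrative) search criteria.
 */
public class JsonFilterMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        String[] parts = line.toString().split("\t", 2);
        if (parts.length != 2) {
            return; // skip malformed lines
        }
        String id = parts[0];
        String json = parts[1];

        // Naive criteria check; a real job would parse the JSON with a proper
        // library (e.g. Jackson) instead of substring matching.
        if (json.contains("\"state\":\"California\"")
                && json.contains("\"city\":\"San Francisco\"")) {
            context.write(new Text(id), NullWritable.get());
        }
    }
}
```

Running a job like this with zero reducers would simply write the matching IDs back to HDFS.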

Pros/cons? I understand that one might think I should just use plain SQL for this, but the JSON object structure is pretty "heavy": if I map it onto SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) Even then, every SQL query has to be analyzed to make sure it actually uses the indexes; otherwise a full scan is literally a pain. And with such a structure the only way "up" is vertical scaling. I am not sure that is the best option for me, as I can see the JSON objects (the data structure) growing, and the number of them growing too. :-)

Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important?

Thank you.


A few pointers to consider:

  • Hadoop (HDFS specifically) distributes data around a cluster of machines. Using MapReduce to analyze/process this data requires that the data be stored on HDFS to make use of the parallel processing power Hadoop offers.

  • Hadoop/MapReduce is nowhere near real-time. Even when running on small amounts of data, the time Hadoop takes to set up a job can be 30+ seconds. This overhead cannot be avoided.

Maybe something to look into would be using Lucene to index your JSON objects as documents. You could store the index in Solr and easily query on anything you want.
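A rough sketch of what that could look like using the SolrJ client (SolrJ 6+-style API); the URL, the core name json_objects, and the field names are assumptions for illustration, and state/city would need to be defined as indexed fields in the core's schema:

```java
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class SolrSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical Solr URL and core name.
        SolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/json_objects").build();

        // Index one record: the MySQL row ID plus the fields pulled out of the JSON.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "12345");
        doc.addField("state", "California");
        doc.addField("city", "San Francisco");
        solr.add(doc);
        solr.commit();

        // Query for matching IDs in (near) real time.
        SolrQuery query = new SolrQuery("state:California AND city:\"San Francisco\"");
        query.setFields("id");
        QueryResponse response = solr.query(query);
        for (SolrDocument d : response.getResults()) {
            System.out.println(d.getFieldValue("id"));
        }
        solr.close();
    }
}
```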


In fact you are: searching a single huge text field will take much more time than indexing the database and searching the proper SQL way. The database was built to be used with SQL and indexes; it has no capability to parse and index JSON, so whatever way you find to search inside the JSON (probably just hacky string matching) will be much slower. 500k rows is not that much for MySQL to handle. You don't really need Hadoop, just a good normalized schema, the right indices, and optimized queries.
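A minimal sketch of the kind of normalization this means: pull the searchable fields out of the JSON into an indexed side table at write time, and search only those columns. The table, column, and connection details below are invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class NormalizedSearchSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection settings.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");

        // Keep the full JSON BLOB in the original table, but copy the
        // searchable fields into indexed columns when the record is written.
        try (Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS json_search (" +
                       "  object_id BIGINT PRIMARY KEY, " +
                       "  state VARCHAR(64), " +
                       "  city  VARCHAR(64), " +
                       "  KEY idx_state_city (state, city))");
        }

        // Search by the indexed columns only; fetch the BLOB separately by ID.
        String sql = "SELECT object_id FROM json_search WHERE state = ? AND city = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "California");
            ps.setString(2, "San Francisco");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("object_id"));
                }
            }
        }
        conn.close();
    }
}
```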


Sounds like you are trying to recreate CouchDB. CouchDB is built with a map-reduce framework and is made to work specifically with JSON objects.

