HadoopFS (HDFS) as distributed file storage

I'm considering using HDFS as a horizontally scaling file storage system for our client video hosting service. My main concern is that HDFS wasn't developed for these needs; it's more "an open source system currently being used in situations where massive amounts of data need to be processed". We don't want to process the data, just store it, and build on top of HDFS something like a small internal Amazon S3 analog.

Probably an important point is that the stored files will be quite big, from 100 MB to 10 GB.

Has anyone used HDFS for such purposes?


If you are using an S3 equivalent, then it should already provide a distributed, mountable file system, no? Perhaps you can check out OpenStack at http://openstack.org/projects/storage/.


The main disadvantage would be the lack of POSIX semantics: you can't mount HDFS as a drive, and you need special APIs to read and write from it. The Java API is the main one. There is a project called libhdfs that provides a C API over JNI, but I've never used it. Thriftfs is another option.
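To give a concrete sense of what "special APIs" means in practice, here is a minimal sketch using the Hadoop Java client to write and read a file. The NameNode address and file path are hypothetical placeholders, not anything from the question:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsStoreExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hypothetical NameNode address -- replace with your cluster's.
            conf.set("fs.defaultFS", "hdfs://namenode:9000");
            FileSystem fs = FileSystem.get(conf);

            // There is no POSIX mount: every write goes through the client API.
            Path path = new Path("/videos/clip-0001.mp4");
            try (FSDataOutputStream out = fs.create(path)) {
                out.write("video bytes...".getBytes());
            }

            // Reads work the same way, as a stream from the client API.
            try (FSDataInputStream in = fs.open(path)) {
                byte[] buf = new byte[16];
                int n = in.read(buf);
                System.out.println("read " + n + " bytes");
            }
            fs.close();
        }
    }

One point in HDFS's favor for this use case: files in the 100 MB to 10 GB range fit its block model comfortably; it is large numbers of small files that strain the NameNode.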

I'm also not sure about the read performance compared to other alternatives. Maybe someone else knows. Have you checked out other distributed filesystems like Lustre?


You may want to consider MongoDB for this. It has GridFS, which lets you use it as file storage. You can then scale your storage horizontally through shards and get fault tolerance from replication (a short sketch follows the links below).

  • http://docs.mongodb.org/manual/core/gridfs/
  • http://docs.mongodb.org/manual/replication/
  • http://docs.mongodb.org/manual/sharding/
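
As a rough illustration, here is what storing and fetching a file through GridFS looks like with the MongoDB Java driver; the connection string, database name, and file name are assumptions made up for the example:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.gridfs.GridFSBucket;
    import com.mongodb.client.gridfs.GridFSBuckets;
    import org.bson.types.ObjectId;

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;

    public class GridFsStoreExample {
        public static void main(String[] args) {
            // Assumed connection string -- point it at your mongos router
            // or replica set to get sharding and replication.
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoDatabase db = client.getDatabase("videos");
                GridFSBucket bucket = GridFSBuckets.create(db);

                // GridFS splits the file into chunk documents (255 KB by default),
                // so large videos are stored as ordinary, shardable collections.
                ObjectId id = bucket.uploadFromStream("clip-0001.mp4",
                        new ByteArrayInputStream("video bytes...".getBytes()));

                ByteArrayOutputStream out = new ByteArrayOutputStream();
                bucket.downloadToStream(id, out);
                System.out.println("read back " + out.size() + " bytes");
            }
        }
    }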