In Hadoop, how do I initialize a DistributedFileSystem object via its initialize method?
It takes two arguments: a URI and a Configuration. I assume the JobConf object that the client is set up with will work as the Configuration, but what about the URI?
Here is the code I have for the driver:
JobClient client = new JobClient();
JobConf conf = new JobConf(ClickViewSessions.class);
conf.setJobName("ClickViewSessions");
conf.setOutputKeyClass(LongWritable.class);
conf.setOutputValueClass(MinMaxWritable.class);
FileInputFormat.addInputPath(conf, new Path("input"));
FileOutputFormat.setOutputPath(conf, new Path("output"));
conf.setMapperClass(ClickViewSessionsMapper.class);
conf.setReducerClass(ClickViewSessionsReducer.class);
client.setConf(conf);
DistributedFileSystem dfs = new DistributedFileSystem();
try {
    dfs.initialize(new URI("blah") /* what goes here??? */, conf);
} catch (Exception e) {
    throw new RuntimeException(e.toString());
}
How do I get the URI to supply to the call to initialize above?
You could also initialize a FileSystem as shown below (wrapped in a small driver class here so it compiles on its own):
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsDir {

    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            // Point the default filesystem at your running HDFS instance
            conf.set("fs.defaultFS", "hdfs://localhost:54310/user/hadoop/");

            FileSystem fs = FileSystem.get(conf);

            // List the contents of the working directory on HDFS
            FileStatus[] status = fs.listStatus(new Path("."));
            for (int i = 0; i < status.length; i++) {
                System.out.println(status[i].getPath());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
The URI is the location of the HDFS instance you are running. The default filesystem name should be set in conf/core-site.xml; the value of 'fs.default.name' ('fs.defaultFS' in newer releases) is the URI you connect to.
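If you want to stay with DistributedFileSystem.initialize as in the question, a minimal sketch might look like the following (the class name is made up, and it assumes the default filesystem in your core-site.xml points at your NameNode, e.g. hdfs://localhost:54310):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class InitDfsExample {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Resolve the default filesystem URI (fs.default.name / fs.defaultFS)
        // from the configuration instead of hard-coding it.
        URI dfsUri = FileSystem.getDefaultUri(conf);

        DistributedFileSystem dfs = new DistributedFileSystem();
        dfs.initialize(dfsUri, conf);   // the call from the question, with the URI taken from conf
        System.out.println("Connected to " + dfs.getUri());
    }
}

In practice, FileSystem.get(conf) (as in the listing above) does this resolution for you and hands back the DistributedFileSystem instance directly when the default URI uses the hdfs:// scheme.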
If you haven't yet looked at the tutorial on how to set up a simple single-node system, I would highly recommend it:
http://hadoop.apache.org/common/docs/current/single_node_setup.html