
No NameNode error in pseudo-distributed mode

I'm new to Hadoop and still in the learning phase. Following the Hadoop Definitive Guide, I set up Hadoop in pseudo-distributed mode and everything was working fine; yesterday I was even able to run all the examples from chapter 3. Today, after rebooting my machine, I ran start-dfs.sh and tried to open localhost:50070, but it shows an error, and when I try to stop DFS (stop-dfs.sh) it says "no namenode to stop". I have been googling the issue with no result. When I format my namenode again, everything starts working: I can connect to localhost:50070 and even replicate files and directories in HDFS, but as soon as I restart Linux and try to connect to HDFS the same problem comes up.

Below is the error log:

************************************************************/
2011-06-22 15:45:55,249 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.203.0
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
************************************************************/
2011-06-22 15:45:56,383 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2011-06-22 15:45:56,455 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2011-06-22 15:45:56,494 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2011-06-22 15:45:57,007 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2011-06-22 15:45:57,031 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2011-06-22 15:45:57,059 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2011-06-22 15:45:57,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 32-bit
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^22 = 4194304 entries
2011-06-22 15:45:57,374 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=anshu
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2011-06-22 15:45:57,854 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2011-06-22 15:45:57,868 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2011-06-22 15:45:57,869 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2011-06-22 15:45:58,769 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2011-06-22 15:45:58,809 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2011-06-22 15:45:58,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anshu/dfs/name does not exist.
2011-06-22 15:45:58,827 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
2011-06-22 15:45:58,828 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)

2011-06-22 15:45:58,829 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/

Any help is appreciated. Thank you!


Here is the kicker:

org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anshu/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

I'd been having similar issues. I used stop-all.sh to shut down Hadoop, assuming it would properly save the data in my HDFS.

But as far as I can tell from the relevant chunk of the hadoop-daemon.sh script, that is not the case: it just kills the processes:

(stop)

    if [ -f $pid ]; then
      if kill -0 `cat $pid` > /dev/null 2>&1; then
        echo stopping $command
        kill `cat $pid`
      else
        echo no $command to stop
      fi
    else
      echo no $command to stop
    fi

Did you look to see whether the directory it's complaining about exists? I checked, and mine did not, although there was an (empty!) data folder in there where I imagine data might once have lived.

So my guess is that we need to configure Hadoop so the namenode and datanode data are NOT stored in a tmp directory. Most likely the OS is doing maintenance on /tmp and deleting those files on reboot; or perhaps Hadoop figures you don't care about them anymore, since you wouldn't have left them in a tmp directory if you did, and you wouldn't be restarting your machine in the middle of a MapReduce job. I don't really think it should work this way (that's not how I would design things), but it seemed like a good guess.

So, based on this site http://wiki.datameer.com/display/DAS11/Hadoop+configuration+file+templates, I edited my conf/hdfs-site.xml file to point to the following paths (obviously, create your own directories as you see fit):

<property>
  <name>dfs.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoopstorage/data/</value>
</property>
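
For completeness, here is roughly the sequence after editing the config; this is only a sketch using the /hadoopstorage example paths above, and it assumes you run the Hadoop daemons as your own user (adjust the paths and user to your setup):

    # create the new storage locations referenced by dfs.name.dir and dfs.data.dir
    sudo mkdir -p /hadoopstorage/name /hadoopstorage/data
    # hand them to the user that starts the Hadoop daemons (your own user, in this sketch)
    sudo chown -R $USER:$USER /hadoopstorage
    # reformat so the namenode initializes its metadata in the new location
    hadoop namenode -format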

I did this, formatted the new namenode (sadly, data loss seems inevitable in this situation), stopped and started Hadoop with the shell scripts, restarted the machine, and my files were still there...

YMMV... hope this works for you! I'm on OS X, but I don't think your results should differ much.

J


If you don't care about losing data, just execute the command:

./hadoop namenode -format


I had a similar issue and this helped:

chown -R hdfs:hadoop /path/to/namenode/data/dir
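
A quick way to confirm the ownership change took effect (same example path as above):

    # the directory should now be owned by the hdfs user and hadoop group
    ls -ld /path/to/namenode/data/dir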


Setting these properties in the conf/hdfs-site.xml file worked for me!

Thanks jsh

<property>
  <name>dfs.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>/hadoopstorage/data/</value>
</property>

Don't forget to set proper permissions on those directories.
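
For example, something along these lines (a sketch assuming the /hadoopstorage paths above and that the daemons run as your own user; the 755 mode is only an example, since some versions enforce dfs.datanode.data.dir.perm, commonly 700):

    # the daemon user needs read/write/execute on both storage directories
    sudo chown -R $USER:$USER /hadoopstorage/name /hadoopstorage/data
    sudo chmod -R 755 /hadoopstorage/name /hadoopstorage/data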


JSH's answer is correct.

Just a couple of changes I had to make for Hadoop 2.6:

<property>
  <name>dfs.namenode.name.dir</name>
  <value>/hadoopstorage/name/</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/hadoopstorage/data/</value>
</property>
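
If you're not sure which property names your version actually reads, you can query the resolved configuration; this assumes the Hadoop 2.x hdfs command is on your PATH:

    # print the storage locations as Hadoop 2.x resolves them
    hdfs getconf -confKey dfs.namenode.name.dir
    hdfs getconf -confKey dfs.datanode.data.dir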


If you have not resolved the problem, try this: assign the dfs.name.dir directory to the hadoop user group and give the group write permission.


Check your core-site.xml in the Hadoop config directory:

  1. Go to the config directory
  2. vi core-site.xml and hdfs-site.xml
  3. Make sure your port numbers and paths are correct (a minimal example follows)
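
For reference, a minimal pseudo-distributed core-site.xml entry looks roughly like this; hdfs://localhost:9000 is only the usual example URI (older versions use fs.default.name, newer ones fs.defaultFS), so match it to whatever port your setup uses:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>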


I had a similar problem, but slightly different.

Running start-all.sh seemed to go fine, but jps showed there was no NameNode, and I could not list anything when I ran hdfs dfs -ls /.

My first attempt was to run hadoop namenode -format; then the NameNode appeared but the DataNode disappeared.

After googling for a solution, I ran rm -rf /usr/local/hadoop_store/hdfs/datanode/* and restarted Hadoop; jps then showed:

    12912 ResourceManager  
    13391 FsShell  
    13420 Jps  
    13038 NodeManager  
    12733 SecondaryNameNode  
    12432 NameNode  
    12556 DataNode  

Now I can use hadoop commands as usual.
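
In case it helps, the rough sequence was as follows (the /usr/local/hadoop_store path is specific to my setup, and I'm assuming the Hadoop scripts are on the PATH):

    # stop everything, reformat the namenode, clear the now-stale datanode storage, restart
    stop-all.sh
    hadoop namenode -format
    rm -rf /usr/local/hadoop_store/hdfs/datanode/*
    start-all.sh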

HTH!
