Temporary failure in name resolution while running Hadoop/bin/start-all.sh

I ran into "Temporary failure in name resolution" while running Hadoop/bin/start-all.sh on my SUSE Linux machine. I have searched many websites for this problem, but could not find an effective answer. I look forward to your help, thank you!

It is deployed on a single machine, so both the masters and slaves files contain only one line: localhost (as shown below).
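Roughly like this (a sketch, assuming the default conf/ directory layout of the Hadoop 0.20 tarball):

solom@linux87:~/hadoop> cat conf/masters
localhost
solom@linux87:~/hadoop> cat conf/slaves
localhost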

solom@linux87:~/hadoop> bin/hadoop namenode -format
11/07/12 17:43:10 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = linux87/10.18.6.87
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/solom/tmp/hadoop/dfs/name ? (Y or N) 
Y
11/07/12 17:43:13 INFO namenode.FSNamesystem: fsOwner=solom,solom,dialout,video
11/07/12 17:43:13 INFO namenode.FSNamesystem: supergroup=supergroup
11/07/12 17:43:13 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/07/12 17:43:13 INFO common.Storage: Image file of size 95 saved in 0 seconds.
11/07/12 17:43:14 INFO common.Storage: Storage directory /home/solom/tmp/hadoop/dfs/name has been successfully formatted.
11/07/12 17:43:14 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at linux87/10.18.6.87
************************************************************/
solom@linux87:~/hadoop> bin/start-all.sh
starting namenode, logging to /home/solom/hadoop/bin/../logs/hadoop-solom-namenode-linux87.out

: Temporary failure in name resolution
: Temporary failure in name resolution
starting jobtracker, logging to /home/solom/hadoop/bin/../logs/hadoop-solom-jobtracker-linux87.out
: Temporary failure in name resolution
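What stands out is that there is nothing in front of the colon, where the failing hostname would normally be printed. One way to check whether the hostnames read from conf/masters and conf/slaves contain an invisible character (a sketch of a check, not something shown in the original output):

solom@linux87:~/hadoop> cat -A conf/masters conf/slaves
(a trailing ^M on a line would mean Windows-style line endings)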
Additional information:
solom@linux87:~> cat /etc/hosts
#
# hosts         This file describes a number of hostname-to-address
#               mappings for the TCP/IP subsystem.  It is mostly
#               used at boot time, when no name servers are running.
#               On small systems, this file can be used instead of a
#               "named" name server.
# Syntax:
#    
# IP-Address  Full-Qualified-Hostname  Short-Hostname
#

127.0.0.1       localhost

# special IPv6 addresses
::1             localhost ipv6-localhost ipv6-loopback

fe00::0         ipv6-localnet

ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts
10.18.6.87      linux87.site linux87
solom@linux87:~> 
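As a quick sanity check on this file, the two names can be queried directly (a suggested check, assuming the glibc getent tool that SUSE ships; output not captured here):

solom@linux87:~> getent hosts localhost
solom@linux87:~> getent hosts linux87

Each command should echo the matching line from /etc/hosts; if it prints nothing, the resolver is not consulting this file at all.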


I also opened the slaves file in binary mode to look for hidden characters (see the note below):

$ vi -b conf/slaves
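If vi -b shows a ^M at the end of the localhost line, the file has Windows-style line endings, and a stray carriage return would be consistent with the blank hostname in the errors above. A possible fix (a sketch; it assumes GNU sed, which SUSE provides):

solom@linux87:~/hadoop> sed -i 's/\r$//' conf/masters conf/slaves
solom@linux87:~/hadoop> bin/stop-all.sh
solom@linux87:~/hadoop> bin/start-all.sh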

Maybe you can tell what happened!


I had the same problem and I resolved it by adding the following line to the /etc/hosts file: 192.168.56.101 localhost hadoop, where you must change the IP to your own and replace hadoop with your own hostname.
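Adapted to the machine in the question (IP 10.18.6.87, hostname linux87), that entry would look something like the line below; the values are taken from the output above and must be adjusted for your own box:

10.18.6.87      localhost       linux87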
