I am setting up a Hadoop distributed compute cluster on my network, and my cluster nodes are having trouble communicating with the master server.
Right now I am working on two computers, CLIENT and SERVER.
On SERVER:
$ nmap SERVER -p 9000
Starting Nmap 5.21 ( http://nmap.org ) at 2012-05-29 13:16 PDT
Nmap scan report for ncoiasi1 (127.0.0.1)
Host is up (0.000032s latency).
Hostname ncoiasi1 resolves to 2 IPs. Only scanned 127.0.0.1
rDNS record for 127.0.0.1: localhost
PORT STATE SERVICE
9000/tcp open cslistener
On CLIENT:
$ nmap SERVER -p 9000
Starting Nmap 5.21 ( http://nmap.org ) at 2012-05-29 13:16 PDT
Nmap scan report for ncoiasi1 (10.23.95.197)
Host is up (0.00020s latency).
rDNS record for 10.23.95.197: NCOIASI1
PORT STATE SERVICE
9000/tcp closed cslistener
I have done the following things:
- Made sure both machines have an entry in /etc/hosts, and added ALL: ALL to /etc/hosts.allow on both machines
- Disabled the firewall on both machines (safe to do since I am behind a stringent corporate firewall)
- Used lsof to verify that the correct process is listening on port 9000
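The two nmap runs above (port 9000 open from 127.0.0.1, closed from 10.23.95.197) come down to which address the daemon is bound to, which lsof/netstat will show in their address column. A minimal sketch of that distinction; the helper function name is my own, and the addresses are taken from the nmap output above:

```shell
# Hypothetical helper: classify a listen address as loopback-only or not.
# In practice you'd feed it the address column of:
#   netstat -an | grep ':9000.*LISTEN'
is_loopback_only() {
  case "$1" in
    127.*|::1) echo yes ;;  # loopback: unreachable from other hosts
    *)         echo no  ;;  # NIC address: reachable from CLIENT
  esac
}

is_loopback_only 127.0.0.1      # -> yes: only SERVER itself can connect
is_loopback_only 10.23.95.197   # -> no: CLIENT could connect
```

If netstat shows the socket bound to 127.0.0.1:9000 rather than 10.23.95.197:9000 (or 0.0.0.0:9000), that explains exactly the open/closed split seen above.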
Any help would be appreciated; I know it's just some configuration I've forgotten somewhere, but I can't find where.
Best Answer
Your server is listening only on its loopback interface, not on the NIC exposed to the client. Run `netstat -an | grep :9000` on SERVER and you will likely see only 127.0.0.1:9000. Edit the Hadoop server's configuration so that it binds to the NIC's address (or a hostname that resolves to it). This post covers what needs to be changed: https://stackoverflow.com/questions/4855808/hadoop-job-tracker-only-accessible-from-localhost
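For Hadoop of that era (1.x), the address in question typically comes from fs.default.name in conf/core-site.xml; if its value names localhost, the NameNode binds only to loopback. A hedged sketch using the hostname from the nmap output above (the file path and surrounding config are assumptions about this particular setup):

```xml
<!-- conf/core-site.xml (sketch; fs.default.name is the Hadoop 1.x-era key) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Was likely hdfs://localhost:9000, which binds only to 127.0.0.1.
         Use the name that resolves to the NIC address (10.23.95.197). -->
    <value>hdfs://ncoiasi1:9000</value>
  </property>
</configuration>
```

Note also that the server-side nmap output ("Hostname ncoiasi1 resolves to 2 IPs. Only scanned 127.0.0.1") suggests /etc/hosts on SERVER maps ncoiasi1 to 127.0.0.1; making that entry point at 10.23.95.197 instead may be part of the fix.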