Vanilla Plugin with default configuration throws "Connection Refused"
Hi,
My setup is:
Ubuntu 12.04
OpenStack Havana with Vanilla Plugin
I have deployed a cluster with the following node groups:
1 x master:
-Uses 1 cinder volume : 2TB
-namenode
-secondarynamenode
-oozie
-datanode
-jobtracker
-tasktracker
2 x slaves:
-Uses 1 cinder volume: 2TB
-datanode
-tasktracker
Both node groups used the following flavor:
VCPUs: 32
RAM: 250000 MB
Root disk: 300GB
Ephemeral: 300GB
Swap: 0
They also use the default Ubuntu Hadoop Vanilla image downloadable from https:/
The /etc/hosts file in all nodes is:
127.0.0.1 localhost
10.0.0.2 test-master2T-
10.0.0.3 test-slave2T-
10.0.0.4 test-slave2T-
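Since Hadoop daemons connect to each other by hostname, a "Connection refused" is often a resolution or binding problem rather than a Hadoop one. A minimal sketch for checking that every node can resolve its peers (the hostnames below are hypothetical placeholders; substitute the real, untruncated names from /etc/hosts):

```python
# Hostname-resolution sanity check for the cluster nodes.
# "localhost" is just a demonstration entry; extend the list with the
# real node hostnames from /etc/hosts (truncated in the report above).
import socket

def resolve(host):
    """Return the IPv4 address for host, or None if it cannot be resolved."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

for host in ["localhost"]:
    print(host, "->", resolve(host))
```

Run this on each node: every hostname should map to its fixed 10.0.0.x address, and a None result usually explains the refused connections.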
Without changing any of the default configuration, the cluster boots correctly.
The problem is that, when running a job (for example, a 100 GB teragen), the map tasks fail repeatedly and have to be retried, which increases the job time. The failures appear random, occurring on one slave or the other depending on the run.
Checking the logs of the datanodes in the slaves, I can see this error:
WARN org.apache.
Full error: http://
The logs of the datanode on the master give this error:
WARN org.apache.
java.net.
Full error: http://
I have tried changing hadoop.tmp.dir to point to the 2TB cinder volume /volumes/
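For anyone reproducing this, hadoop.tmp.dir is set in core-site.xml on each node; a sketch of the override, assuming a hypothetical mount point for the Cinder volume (the actual path in the report is truncated):

```xml
<!-- core-site.xml: redirect Hadoop's scratch/data root to the
     Cinder volume. /path/to/cinder/volume is a placeholder for the
     real mount point, and the directory must be writable by the
     user running the Hadoop daemons. -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/path/to/cinder/volume/hadoop-tmp</value>
</property>
```

The daemons must be restarted after the change for it to take effect.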
Thank you in advance.
Question information
- Language: English
- Status: Answered
- For: Sahara
- Asked by: Marc Solanas