I have a hadoop environment with three docker containers:
- hadoop namenode/datanode
- spark master
- spark worker
The environment is set up properly: I can run any Spark job on it. On top of that, when I execute the gitlab-runner command locally, it ends with the expected result (job success). However, on the official GitLab CI site (gitlab.com) the same job ends with an error. The error is:
Call From hadoop/172.19.0.2 to hadoop:8020 failed on connection exception: java.net.ConnectException: Connection refused
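For what it's worth, this "Connection refused" means the name hadoop resolved fine (to 172.19.0.2) and the container was reachable, but nothing was listening on port 8020 (the NameNode RPC port) at that moment. A quick way to probe whether the port is open from inside the hadoop container is the sketch below; it only assumes bash is present, since /dev/tcp is a bash feature, and the function name probe_port is mine, not part of the job:

```shell
#!/usr/bin/env bash
# Probe a TCP port: prints "open" if something accepts the connection,
# "closed" otherwise. In the CI job this could be run as, e.g.:
#   docker exec -i hadoop bash -c '... probe_port hadoop 8020 ...'
probe_port() {
  local host="$1" port="$2"
  # /dev/tcp/HOST/PORT is a bash built-in pseudo-device: the redirection
  # succeeds only if the TCP connection is accepted.
  if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}
```

If this prints "closed" right after docker-compose up but "open" a little later, the problem is a startup race rather than networking or DNS.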
The latest simplified version of the file looks like this:
hadoop_and_spark_build_job:
  image: docker:git
  stage: build
  services:
    - docker:dind
  before_script:
    - cd environment
    - apk add bash py-pip
    - pip install docker-compose
    - docker login --username=$CI_GIT_USER --password=$CI_GIT_TOKEN registry.gitlab.com
  script:
    # Pulls and runs the hadoop and master containers, opening the proper
    # ports and launching the Spark and Hadoop scripts
    - docker-compose up -d sbd-worker
    - docker exec -i hadoop bash -c "pwd && ping -c 2 hadoop && ping -c 2 master && ping -c 2 sbd-worker"
    - docker exec -i hadoop bash -c "hadoop dfsadmin -report"   # <<<<<<<<< Fails here <<<<<
    - docker exec -i master bash -c "hadoop dfsadmin -report && mkdir -p /tmp"
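One plausible cause on gitlab.com is a startup race: docker-compose up -d returns as soon as the containers are created, and on a slower shared runner the NameNode may still be initializing when the first dfsadmin call arrives, which would produce exactly this "Connection refused" on 8020 while succeeding on a faster local machine. A minimal mitigation (a sketch against the job above, container name unchanged; the 30×2 s budget is an arbitrary choice of mine) is to retry the failing command for a while before giving up:

```yaml
  script:
    - docker-compose up -d sbd-worker
    # Give the NameNode up to ~60 s to start listening on 8020; fail the
    # step only if the report never succeeds.
    - |
      ok=0
      for i in $(seq 1 30); do
        if docker exec -i hadoop bash -c "hadoop dfsadmin -report"; then ok=1; break; fi
        echo "NameNode not ready (attempt $i), retrying..."
        sleep 2
      done
      [ "$ok" -eq 1 ]
    - docker exec -i master bash -c "hadoop dfsadmin -report && mkdir -p /tmp"
```

The trailing test command makes the multi-line step exit non-zero, and therefore fail the job, if the NameNode never came up within the budget.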
I guess that the failure is related to some configuration on gitlab.com, but I am really lost here… any assistance would be appreciated. The command that fails is the second to last one (it has the comment <<<<<<<<< Fails here <<<<<).
The command I use to check it locally is:
gitlab-runner exec docker --docker-privileged hadoop_and_spark_build_job
Thank you in advance.