Docker Compose exit code 141 on shared runners

Hi,

I’m having difficulty using docker-compose to build containers on shared runners, and I hope someone has seen a similar issue before or can help me debug it.

My goal is to have GitLab CI automatically build, test, and publish containers from my docker-compose.yml, with a manual step to deploy the published containers onto a server. To make docker-compose available on the runner, I use an extension of the official Docker image with Docker Compose installed.
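The image essentially just layers Compose on top of the official Docker image; the equivalent setup done directly in before_script would look something like the following (the package names are my assumption for the Alpine base, not the image’s actual build steps):

# install docker-compose on top of the official docker image at job time
apk add --no-cache bash py-pip
pip install docker-compose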

My .gitlab-ci.yml is as follows (only one job included, for brevity):

image: djbingham/docker-compose

services:
  - docker:dind

before_script:
  - apk update
  - apk add bash
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com

stages:
  - build
  - test
  - publish
  - deploy

...

build:data_api:
  stage: build
  script:
    - services/data/script/prepare # runs composer install using official Docker image
    - docker-compose build data_api
    - docker-compose push data_api

...

When this is run through the GitLab CI pipeline, the build:data_api step produces the expected output for composer install, followed by:

$ docker-compose build data_api
Building data_api
ERROR: Job failed: exit code 141

I’ve been searching for a while and found only two suggestions:

  1. The presence of a ‘.swp’ file can cause this error.
  2. docker-compose may have run out of memory.

I tried adding ls -la just before the docker-compose build command and confirmed that no .swp file (or anything similar) exists on the runner at that point.

With regard to memory, I don’t know how much is typically available on a shared runner. The job appears to fail at the first step of the image build, which copies in the vendor folder generated by the preceding composer install. Running docker-compose build on my development machine works fine and yields a final image of 381 MB.
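If it helps anyone reproduce this, the runner’s resources can be printed from inside the job just before the build; a quick check along these lines (assuming the usual Alpine/busybox tools are available on the image):

# print the runner's memory and disk before the build, to help confirm
# or rule out an out-of-memory failure
head -n 3 /proc/meminfo       # MemTotal / MemFree / MemAvailable
df -h                         # disk space visible to the job
docker info | grep -i memory  # memory visible to the Docker (dind) daemon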

Has there been any update on this? I’m running into the same issue.

Hi @nesl247,

I was previously just experimenting with GitLab CI in my spare time, so I ignored the issue for a while and worked on other things. I’ve recently picked up my old project and started using GitLab CI again, and so far the issue hasn’t reappeared. I don’t know much about the internals of GitLab CI or its runners, so I can’t offer any real insight for debugging, but I always suspected it’s an issue with specific runners rather than with GitLab itself. If you have any budget for a small cloud server, you could try setting up a private runner and see whether you get the same error; at least then you’d be able to inspect the logs for more information.
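For reference, registering a private runner is quick once gitlab-runner is installed on the server. A minimal sketch, where <REGISTRATION_TOKEN> is a placeholder for the token from the project’s Settings > CI/CD > Runners page:

# register a project-specific runner using the Docker executor
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<REGISTRATION_TOKEN>" \
  --executor docker \
  --docker-image docker:stable \
  --docker-privileged \
  --description "private debug runner"
# --docker-privileged is required for the docker:dind service to work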

The issue actually magically went away for me overnight (literally).

Is there any solution or explanation for this? I’m still having this problem and it hasn’t magically gone away!

I found that I was automatically tagging the Docker images from branch names, and some branch names contained characters that are invalid in image tags; that resulted in exit code 141.
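In case it saves someone else the search: GitLab exposes CI_COMMIT_REF_SLUG, a tag-safe version of the branch name (lowercased, with everything outside 0-9 and a-z replaced by -), so the fix can be as simple as swapping it in for the raw ref name. The registry path below is a placeholder:

# CI_COMMIT_REF_NAME may contain characters such as "/" that are invalid in
# a Docker tag; CI_COMMIT_REF_SLUG is the sanitised variant GitLab provides
docker build -t "registry.gitlab.com/<group>/<project>:$CI_COMMIT_REF_SLUG" .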

I spent a long time on this before discovering that a bash script was returning the 141 status code. It was only reproducible in the docker:stable container; running the same command in Ubuntu did not produce the 141 status code. I fixed it with trap '' PIPE, from https://stackoverflow.com/questions/22464786/ignoring-bash-pipefail-for-error-code-141

When attempting to reproduce this, it’s helpful to check the return code with echo "$?", as it may not be obvious that a script which appears to work is returning a non-zero status code.
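To make this concrete: 141 is 128 + 13, i.e. the shell reporting that a process was killed by SIGPIPE. A minimal reproduction, along with the workaround mentioned above:

# with pipefail enabled, the pipeline takes the non-zero status of `yes`,
# which is killed by SIGPIPE (128 + 13 = 141) once `head` closes the pipe
set -o pipefail
yes | head -n 1
echo "$?"   # prints 141

# the fix from the Stack Overflow answer: ignore SIGPIPE in the shell;
# child processes inherit the ignored signal and get a plain write error
# (EPIPE) instead of being killed with status 141
trap '' PIPE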

I cannot explain why it only happens sometimes, for me and for others in this thread. Perhaps a particular system or image version matters here.