Speeding up builds with Yarn and the Docker runner


I’m trying to figure out what exactly persists between builds when using the docker-runner.

Is the Docker image always clean when a new build starts?

Yarn is an alternative to npm that makes it safe to use a cache directory when pulling dependencies. Since pulling all dependencies takes up most of my CI runs (5 out of 6 minutes), I would love to use a global cache for that.

It would even be fine for different repos to share the same runner’s global cache.

What I tried was to use volumes for that, but that does not work for some reason… I was also wondering whether there is a cleverer way to do it (i.e. one that does not depend on special runner configuration). If I am not mistaken, it is possible to link one Docker container to another. Maybe a dedicated yarn-cache container would be possible?

That’s my current config:

concurrent = 1
check_interval = 0

[[runners]]
  name = "sven.macbook"
  url = "https://git.XXX/"
  token = "XXX"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "ruby:2.1"
    privileged = false
    disable_cache = false
    volumes = [
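For comparison, the host-volume approach I tried would go in the [runners.docker] section of config.toml, roughly like this (a sketch: the host path /srv/yarn-cache and the container mount point /yarn-cache are hypothetical examples, and this does depend on special runner configuration):

```toml
[runners.docker]
  # Mount a directory from the runner host into every build container,
  # so the Yarn cache survives across builds (and across repos on this runner).
  volumes = ["/srv/yarn-cache:/yarn-cache"]
```

The job would then have to point Yarn at the mount, e.g. yarn install --cache-folder /yarn-cache.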


I’m still interested in this :x

I found the solution in a Stack Overflow question: http://stackoverflow.com/questions/41483926/gitlab-ci-not-caching

image: steveedson/ci

stages:
  - build

cache:
  untracked: true
  key: "$CI_PROJECT_ID"
  paths:
    - node_modules/
    - _site/vendor/
    - .bundled/
    - .yarn

build:
  stage: build
  script:
    - ls -l
    - yarn config set cache-folder .yarn
    - yarn install
    - ...


This can be made even shorter with a one-liner:

yarn install --pure-lockfile --cache-folder .yarn

The --pure-lockfile flag is needed to avoid updating the yarn.lock file.
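With the one-liner, the job section of the .gitlab-ci.yml above reduces to something like this (a sketch; the job name and stage are assumed from the earlier config):

```yaml
build:
  stage: build
  script:
    # --cache-folder keeps Yarn's cache inside the project tree so the
    # cache: paths: entry (.yarn) can archive and restore it between builds;
    # --pure-lockfile skips rewriting yarn.lock.
    - yarn install --pure-lockfile --cache-folder .yarn
```

The separate yarn config set cache-folder step is then no longer needed.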