Pass ssh key to build container

I’ve read several posts on here and elsewhere, as well as the docs, regarding passing an SSH key to a running build container in order for it to - typically - transfer output files to the target webserver. The suggested approach (injecting it via secret variables and ssh-agent) seems at odds with my setup, and I’m wondering if I’m failing to grasp something, or approaching it wrong. Here’s my setup:

We’ve got a single box running two docker containers: one is Gitlab itself, the other is a gitlab-runner instance. The SSH key I want to use is stored on the host box. It is this key’s public half that we’ve added to the relevant server’s authorized_keys file.

Here’s an example CI yaml (stripped down a bit for brevity):

variables:
  SERVER_IP_DEV: "***.***.***.***"
  SERVER_IP_PROD: "***.***.***.***"

stages:
  - build
  - deploy

.deploy: &import_deploy
  stage: deploy
  variables:
    DEPLOY_SRC_DIR: "./"
    DEPLOY_TARGET_DIR: "/srv/mysite/"
  script:
    - rsync -avzh $DEPLOY_SRC_DIR deployer@$SERVER:$DEPLOY_TARGET_DIR

deploy production:
  <<: *import_deploy
  variables:
    SERVER: $SERVER_IP_PROD
  only:
    - production
  environment:
    name: production

The simplest approach, to me, would be to define a volume in the CI yaml that maps the container’s .ssh/ dir onto the corresponding dir on the host box.

Nothing I’ve seen, though, suggests that it’s possible to do that. Instead, the suggestion seems to be to manually duplicate the private key from the host box - which doesn’t seem like a terribly clever move, security-wise - into a secret variable (just how “secret” is it? Where does GitLab store it?) and jump through a few hoops in order to dynamically inject it at runtime.
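For reference, the ssh-agent approach I keep seeing looks roughly like this when folded into my .deploy anchor (a sketch; SSH_PRIVATE_KEY is my own name for the secret variable holding the key, and the keyscan step is one of several ways to handle host verification):

```yaml
.deploy: &import_deploy
  stage: deploy
  variables:
    DEPLOY_SRC_DIR: "./"
    DEPLOY_TARGET_DIR: "/srv/mysite/"
  before_script:
    # Start an agent and load the key from the secret variable.
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # Trust the target host so rsync's ssh connection doesn't prompt.
    - mkdir -p ~/.ssh && chmod 700 ~/.ssh
    - ssh-keyscan "$SERVER" >> ~/.ssh/known_hosts
  script:
    - rsync -avzh $DEPLOY_SRC_DIR deployer@$SERVER:$DEPLOY_TARGET_DIR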

Either I’m missing a trick, or that’s daft. Can’t I just map a volume? If so, how? If not, why?

In a docker environment hosts can be very ephemeral. And containers can move to new hosts all the time. So mapping a host drive might not be the most practical thing to do.

What I do is this (all in AWS):

  1. Set up a docker container to host files that need to be downloaded into docker builds. [This is a simple Tomcat server serving up one directory. The directory is mapped to an EFS volume, and I place my files, such as SSH keys, in that EFS volume.]
  2. I add a line in my Dockerfile that does a simple wget to copy the file, then restarts the SSH service.
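As a rough sketch, that Dockerfile line might look like the following (the hostname and paths are placeholders for my internal file server, not anything you can copy verbatim):

```dockerfile
# Fetch the key from the internal file server (placeholder URL),
# lock down its permissions, and restart sshd to pick up the change.
RUN wget -q http://files.internal.example:8080/keys/id_rsa -O /root/.ssh/id_rsa \
    && chmod 600 /root/.ssh/id_rsa \
    && service ssh restart
```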

All of this runs internally on a private network and is not available to the public.

Perhaps this will work for you?

Hello Benc,

If you are in charge of the node running the gitlab-runner, you may define a volume mount in its configuration.
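For example, in the runner’s config.toml on the host, assuming the docker executor (the paths and names here are illustrative):

```toml
[[runners]]
  name = "my-runner"
  executor = "docker"
  [runners.docker]
    image = "alpine:latest"
    # Mount the host's .ssh dir read-only into every build container.
    volumes = ["/cache", "/home/gitlab-runner/.ssh:/root/.ssh:ro"]
```

Every job that runner picks up will then see the host’s keys at /root/.ssh inside the build container, with no secret variables involved.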