I’ve read several posts here and elsewhere, as well as the docs, about passing an SSH key to a running build container so that it can (typically) transfer output files to the target webserver. The suggested approach (injecting the key via secret variables and ssh-agent) seems at odds with my setup, and I’m wondering if I’m failing to grasp something, or approaching it wrong. Here’s my setup:
We’ve got a single box running two Docker containers: one is GitLab itself, the other is a gitlab-runner instance. The SSH key pair I want to use is stored on the host box. It’s the public half of this pair that we’ve added to the relevant server’s authorized_keys file.
Here’s an example CI yaml (stripped down a bit for brevity):
```yaml
variables:
  SERVER_IP_DEV: "***.***.***.***"
  SERVER_IP_PROD: "***.***.***.***"

stages:
  - build
  - deploy

.deploy: &import_deploy
  stage: deploy
  variables:
    DEPLOY_SRC_DIR: "./"
    DEPLOY_TARGET_DIR: "/srv/mysite/"
  script:
    - rsync -avzh $DEPLOY_SRC_DIR deployer@$SERVER:$DEPLOY_TARGET_DIR

deploy production:
  <<: *import_deploy
  variables:
    SERVER: $SERVER_IP_PROD
  only:
    - production
  environment:
    name: production
```
The simplest approach, to me, would be to define a volume that maps the relevant .ssh/ dir of the container onto the corresponding dir of the host box.
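For illustration, what I have in mind would look something like the following in the runner’s config.toml (not the CI yaml). The paths here are assumptions based on my setup, and I’m not certain this is the sanctioned place to do it:

```toml
# Hypothetical snippet for the gitlab-runner container's config.toml.
# Maps the host's .ssh dir into every build container, read-only.
# Paths are assumptions; adjust to wherever the host key actually lives.
[[runners]]
  [runners.docker]
    # "host path:container path:mode"
    volumes = ["/root/.ssh:/root/.ssh:ro", "/cache"]
```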
Nothing I’ve seen, though, suggests that this is possible. Instead, the suggestion seems to be to manually copy the private key from the host box into a secret variable (which doesn’t seem like a terribly clever move, security-wise; just how “secret” is it, and where does GitLab store it?) and then jump through a few hoops to inject it dynamically at runtime.
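For reference, the hoop-jumping I mean looks roughly like this (a sketch adapted from the documented approach, assuming a CI/CD secret variable named SSH_PRIVATE_KEY holding the pasted private key):

```yaml
before_script:
  # Start an agent inside the build container and load the key
  # from the SSH_PRIVATE_KEY secret variable
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | ssh-add -
  # Pre-trust the target host so rsync-over-ssh doesn't prompt
  - mkdir -p ~/.ssh && chmod 700 ~/.ssh
  - ssh-keyscan $SERVER >> ~/.ssh/known_hosts
```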
Either I’m missing a trick, or that’s daft. Can’t I just map a volume? If so, how? If not, why?