GL Runner: Best way to Deploy Artifacts on Host System

Hello,
I'm currently hosting GitLab EE and the project-specific Runner on a virtual machine.
I'm using GitLab to develop and publish projects using the above-mentioned Runner.

Context

My current project involves Storybook.
I'm running npm run build-storybook to create a static version of the Storybook page in storybook-static. I'm serving this folder with NGINX (which also acts as reverse proxy for GitLab).

GitLab Runner is set up with the Docker executor.

Every project has its own Runner on its own VM! (I'm not the only one who uses the instance.)

Main Question

Now the main question: GitLab Runner is installed on the same machine as the webserver. How can I easily copy artifacts created by the Runner to the webroot (/var/www/...)?

Current pipeline: Test -> Coverage -> Build -> Deploy.
Can I access the filesystem directly from the CI/CD pipeline, or do I have to copy it using scp, tftp, or some other tool?
As far as I understand, I cannot access the filesystem directly because everything is built inside a Docker container. Right?

.gitlab-ci.yml:

stages:
  - test
  - coverage
  - build
  - deploy

Component Tests:
  image: node:latest
  stage: test
  when: manual
  script:
    - npm ci
    - npm run prettier
    - npm run test-components
  allow_failure: false
  only:
    - main
    - production
    - '*/production-deployment'
    - merge_requests

Code Coverage:
  image: node:latest
  stage: coverage
  script:
    - npm ci
    - npm run prettier
    - npm run coverage
  coverage: /All files[^|]*\|[^|]*\s+([\d\.]+)/
  artifacts:
    reports:
      junit: junit.xml
  only:
    - production
    - merge_requests

Build Storybook Production:
  image: node:latest
  stage: build
  dependencies:
    - Component Tests
    - Code Coverage
  script:
    - npm ci
    - npm run prettier
    - npm run build-storybook
  artifacts:
    paths:
      - storybook-static
    expire_in: 30min
  only:
    - main
    - production
    - '*/production-deployment'
    - merge_requests

Deploy Production:
  stage: deploy
  image: ubuntu:20.04
  dependencies:
    - Component Tests
    - Build Storybook Production
  script:
    - apt-get -y update
    - apt-get -y upgrade
    - apt-get install -y sshpass
    - rm -f ~/.ssh/known_hosts
    - sshpass -p "$CI_DEPLOYMENT_PASSWORD" scp -rv ./storybook-static/* deployment@<removed>:/var/www/<removed>-storybook
  only:
    - production
    - '*/production-deployment'

Anyone know a good way to do that?

Hi @AcrimEx

In my opinion the proper way is to publish your artifacts to GitLab itself, with proper versioning and all, so you can roll back to a previous version if required.
Have a deploy step similar to the one you have, but instead of scp, execute commands on the server that download the desired version with curl or wget to the location you need. There is also a trick to avoid quoting issues and such, where you encapsulate your commands using Base64:

job:
  script:
    # commands to execute on the webserver; the curl flags and paths are placeholders
    - MYCOMMANDS="curl -and -some -parameters https://gitlab/path/to/artifact; unzip this.zip; mv this there;"
    # base64-encode and strip newlines so quoting survives the SSH invocation
    - B64COMMAND=$(echo "$MYCOMMANDS" | base64 | sed ':a;N;$!ba;s/\n//g')
    # decode and run the commands on the remote host
    - ssh "${SERVER_IP}" "echo ${B64COMMAND} | base64 -d | bash"

This way you need to have a “deploy” user on the webserver with write permissions to /var/www/...
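
For the publish half of that approach, here is a minimal sketch of a job that pushes the Storybook build to the project's generic package registry; the package name, the use of the commit SHA as the version, and the zip step are assumptions you would adapt to your own versioning scheme:

Publish Storybook:
  stage: deploy
  image: alpine:latest
  dependencies:
    - Build Storybook Production
  script:
    # install tooling, then bundle the build output (package name/version below are placeholders)
    - apk add --no-cache curl zip
    - zip -r storybook-static.zip storybook-static
    # upload as a versioned generic package; the server can later fetch exactly this version with curl/wget
    - 'curl --header "JOB-TOKEN: $CI_JOB_TOKEN" --upload-file storybook-static.zip "$CI_API_V4_URL/projects/$CI_PROJECT_ID/packages/generic/storybook/$CI_COMMIT_SHORT_SHA/storybook-static.zip"'

The deploy step can then download that exact version (or an older one, if you need to roll back) from the same URL.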

The dirty way:

You could use the volumes option of the Docker executor and bind-mount /var/www/... from the host into each job running on that machine. This way every job gets access to the host's local filesystem at the path you specify. You need to make sure the local filesystem permissions are right, and depending on your setup you may even need the privileged = true option.
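
For reference, a rough sketch of how that could look in the Runner's config.toml (name, URL, token, and host path are placeholders):

[[runners]]
  name = "storybook-project-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "docker"
  [runners.docker]
    image = "node:latest"
    privileged = false
    # bind-mount the webroot from the host into every job container
    volumes = ["/cache", "/var/www/example-storybook:/var/www/example-storybook:rw"]

With that in place, the deploy job could simply run cp -r storybook-static/* /var/www/example-storybook/ instead of scp. The downside is that every job on that Runner can then write to the webroot, which is why it's the dirty way.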