Backing up a GitLab chart: how to configure S3 buckets?

Hello,
I’m trying to back up (manually, for the time being) my GitLab chart running on Kubernetes (Azure). I want to store the backup in a bucket on AWS S3. Here is the relevant section of my configuration:

gitlab:
  task-runner:
    backups:
      objectStorage:
        config:
          key: config
          secret: s3cmd-config
global:
  appConfig:
    backups:
      bucket: some-bucket
      tmpBucket: some-bucket-tmp

When I run backup-utility, dumping the database and the repositories works fine, but then:

Bucket not found: registry. Skipping backup of registry ...
Bucket not found: gitlab-uploads. Skipping backup of uploads ...
Bucket not found: gitlab-artifacts. Skipping backup of artifacts ...
Bucket not found: git-lfs. Skipping backup of lfs ...
Bucket not found: gitlab-packages. Skipping backup of packages ...
WARNING: This version of GitLab depends on gitlab-shell 9.0.0, but you're running Unknown. Please update gitlab-shell.
Packing up backup tar
ERROR: [Errno -2] Name or service not known
ERROR: Connection Error: Error resolving a server hostname.
Please check the servers address specified in 'host_base', 'host_bucket', 'cloudfront_host', 'website_endpoint'
command terminated with exit code 74

Do I have to configure additional buckets for registry, gitlab-uploads etc.? Where is this documented?
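From those messages my guess is that backup-utility expects one bucket per object type, with the names taken from global.appConfig (and global.registry for the registry). If I read the chart defaults correctly, that corresponds to something like the following, and each of these buckets would also have to actually exist on S3:

global:
  registry:
    bucket: registry
  appConfig:
    artifacts:
      bucket: gitlab-artifacts
    lfs:
      bucket: git-lfs
    uploads:
      bucket: gitlab-uploads
    packages:
      bucket: gitlab-packages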

I have the following s3cmd config:

[default]
access_key = ...
secret_key = ...
bucket_location = us-east-1

And I create a secret from this file with:

kubectl create secret generic s3cmd-config --from-file=config=s3cmd.config
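To double-check that the secret really contains the file:

kubectl get secret s3cmd-config -o jsonpath='{.data.config}' | base64 -d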

I also tried adding

host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
cloudfront_host = cloudfront.amazonaws.com
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/

to the s3cmd config, but it made no difference.

I can list the bucket using s3cmd just fine, so I’m probably missing some configuration.
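For reference, that check is just the following, with the same config file:

s3cmd -c s3cmd.config ls s3://some-bucket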
Any idea what I’m doing wrong?
Thanks.

Hi.
Has there been a solution to this? We are struggling with the same problem, but with the MinIO that ships with the standard GitLab chart.

I’m using the Helm chart; it comes with MinIO and that works fine. But I want to back up to AWS S3 without breaking the MinIO setup.

ages later… if that helps:

---
# chart values
gitlab:
  toolbox:
    backups:
      cron:
        extraArgs: "--s3config /s3/external"
      objectStorage:
        config:
          secret: s3-secrets  # note: Kubernetes object names can't contain underscores
          key: local
    extraVolumes: |
      - name: s3
        secret:
          secretName: s3-secrets
    extraVolumeMounts: |
      - name: s3
        mountPath: /s3/external
        subPath: external
        readOnly: true
---
apiVersion: v1
kind: Secret
metadata:
  name: s3-secrets
  namespace: gitlab
type: Opaque
stringData:
  # stringData takes plain text; with data: these values would have to be base64-encoded
  local: |
    [default]
    access_key = ABC
    secret_key = DEF
    host_base = your.local.s3
  external: |
    [default]
    access_key = ABC
    secret_key = DEF
    host_base = your.external.s3
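
With that in place, a manual backup against the external S3 can be started the same way the cron job runs it (the toolbox deployment name depends on your release name, so adjust it):

kubectl exec -it deploy/gitlab-toolbox -- backup-utility --s3config /s3/external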

see issue