Configuring Gitaly for server-side repository backups to S3-compatible storage

We want to (try to) use the server-side repository backups, but are having some problems.

I have put

gitaly['env'] = {
  'AWS_ACCESS_KEY_ID' => 'QR792JIYAS6VFEVFVNIZ',
  'AWS_SECRET_ACCESS_KEY' => '<not-gonna-tell-you-that>'
}

and

gitaly['configuration'] = {
  backup: {
    go_cloud_url: 's3://oc-sys-infra-gitaly-gitlab4-repo-backup',
    endpoint: 'https://s3.<company.domain.name>',
    s3ForcePathStyle: 'true',
    region: 'us-east-1'
  }
}

(The “gitlab4” in the bucket name is because this is a test setup called gitlab4 - the “4” has historical reasons.)

But that makes

sudo gitlab-backup create STRATEGY=copy REPOSITORIES_SERVER_SIDE=true

output a lot of messages like:

{"command.name":"create","error":"server-side create: rpc error: code = Internal desc = backup repository: manager: stream refs: close writer: blob (key \"@hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b/1753449376_2025_07_25_17.11.5/001.refs\") (code=Unknown): operation error S3: PutObject, resolve auth scheme: resolve endpoint: endpoint rule error, Invalid region: region was not a valid DNS name.","gl_project_path":"<user>/<project>","level":"error","msg":"create failed","pid":3775,"relative_path":"@hashed/6b/86/6b86b273ff34fce19d6b804eff5a3f5747ada4eaa22f1d49c01e52ddb7875b4b.git","storage_name":"default","time":"2025-07-25T13:16:16.607Z"}

(That’s rather long; the relevant part is probably Invalid region: region was not a valid DNS name.)

Can anyone tell me what is wrong?

It could well be something related to this: Amazon Simple Storage Service endpoints and quotas - AWS General Reference

It could well be that you have to provide the full region name, e.g. region: "s3.us-east-1.amazonaws.com"

Incidentally, I have this same problem with minio (as I don’t have access to AWS), but with minio you cannot provide a DNS-style region name like you can with AWS, nor can you disable it and skip the region altogether. Hence the longer DNS-style entry above should solve it for you.

I just thought I’d update this from my testing with S3-compatible storage, as the GitLab docs are a bit hit-and-miss. For AWS, and your situation, a configuration with the region name in full DNS format should work.
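A sketch of what that might look like in gitlab.rb, assuming the separate-field layout from the earlier post (the bucket name is a placeholder, and the full-DNS region value follows the suggestion above; I haven’t verified this exact form against AWS):

```ruby
gitaly['configuration'] = {
  backup: {
    # go_cloud_url names the target bucket
    go_cloud_url: 's3://gitlab-backup',
    # region given in full DNS format instead of the short "us-east-1"
    region: 's3.us-east-1.amazonaws.com'
  }
}
```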

In a worst-case situation, where the region cannot be specified in full DNS format (as with minio), you can use a config like the one below:

gitaly['configuration'] = {
  backup: {
    go_cloud_url: "s3://gitlab-backup?region=us-east-1&endpoint=https://minio.example.com&s3ForcePathStyle=true"
  }
}

If I attempt to fill in the fields for endpoint, region, etc. separately from the go_cloud_url, it won’t work with S3-compatible storage like minio. Hence I combine everything into the URL as above, with the shortened region name; gitlab-backup is the bucket name.

The above adapted to your AWS details:

go_cloud_url: "s3://oc-sys-infra-gitaly-gitlab4-repo-backup?region=us-east-1&endpoint=https://s3.<company.domain.name>&s3ForcePathStyle=true"
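Dropped into the same stanza as the minio example, that would look like the sketch below (the endpoint hostname stays the anonymized placeholder from your post):

```ruby
gitaly['configuration'] = {
  backup: {
    # everything combined into the go_cloud_url query string;
    # no separate endpoint/region/s3ForcePathStyle keys
    go_cloud_url: "s3://oc-sys-infra-gitaly-gitlab4-repo-backup?region=us-east-1&endpoint=https://s3.<company.domain.name>&s3ForcePathStyle=true"
  }
}
```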

“us-east-1” is mostly a string we provide to please things that validate the configuration: as the endpoint pointing at our company domain shows, we run our own S3-compatible storage. So putting amazonaws.com in anywhere would clearly be wrong - and the region setting probably isn’t used anywhere. (And luckily we’re not in the US.)

I’ll try to combine the various parts.


I’ve put the parameters in a different order (go_cloud_url: 's3://oc-sys-infra-gitaly-gitlab4-repo-backup?endpoint=https://s3.<company name>&s3ForcePathStyle=true&region=us-east-1'), but combining everything into a single string seems to work. It seems a bit silly to me that this works, but it does, and now I can evaluate whether it is usable in our case.


Yep, I also assumed that by placing the entries separately, as suggested by the docs, Gitaly would concatenate the go_cloud_url from that data - only for it to then complain about the region not being a DNS entry.

It was the only way I found to get around it and make it work with S3-compatible storage. I suppose it could be worth opening an issue with GitLab so they resolve their configuration method properly, without having to do it this way. But at least we have a workable solution 🙂 and I’m glad it worked for you too.

I made an issue
