Are the ARM images down right now? (Solved, you have to change the way you specify the image)

I am asking because all my ARM builders for all my repos were working perfectly fine two days ago but no longer do, yet I didn’t modify anything there.

If this is actually my fault and needs troubleshooting, I’ll provide what you ask me to.

By the way, the images are arm64v8/debian:bookworm and arm64v8/gcc:latest, depending on the repo. The runner can no longer pull them.

These are not GitLab images, so this seems to be a Docker Hub problem. Maybe you should ask the maintainers of the image?

Did you try pulling them manually from a machine with docker installed? That would be a good way to debug a situation like this.

I cannot do any tests. All I have for development is a tablet with Termux. I have to rely on GitLab to build my apps for releases.

So are you using gitlab.com, or your own GitLab server/runners?

I’m using the gitlab.com website

I don’t see anything mentioned about them here: https://status.gitlab.com/ so I would expect the runners are working.

If they worked previously, it’s possible GitLab does have an issue that they will become aware of soon, and the status page should then update to reflect it.

Alright, I guess I’ll be waiting for it then.

As an aside, I don’t use ARM, but I can do this to pull an image on an x86_64 system:

root@os-docker:~# docker pull arm64v8/debian:bookworm --platform=linux/arm64
bookworm: Pulling from arm64v8/debian
1a3f1864ec54: Pull complete 
Digest: sha256:5795f37ddefb542232b211e833f8199b7fe2105f95ed653f16c3d0dfe7ece2d9
Status: Downloaded newer image for arm64v8/debian:bookworm
docker.io/arm64v8/debian:bookworm

which confirms the image and Docker Hub are OK. I would therefore suggest the problem is with GitLab's ARM runners.

This might be a coincidence, but the image was updated two days ago.
Maybe this has something to do with the problem?
Also, here’s the simplest .gitlab-ci.yml I have. Its job is to run two scripts that build packages for my personal use:

stages:
    - build

build-arm64:
    stage: build
    rules:
    - if: $WE_BUILDING_ARM == '1'
    image: arm64v8/debian:bullseye
    script:
    - chmod +x setup_linux
    - chmod +x package_arm64
    - ./setup_linux
    - ./package_arm64
    artifacts:
        untracked: false
        when: on_success
        access: all
        expire_in: 1 month
        paths:
        - "sdl3-all-arm64-3.0.0.deb"


build-amd64:
    stage: build
    rules:
    - if: $WE_BUILDING_AMD == '1'
    image: debian:bullseye
    script:
    - chmod +x setup_linux
    - chmod +x package_amd64
    - ./setup_linux
    - ./package_amd64
    artifacts:
        untracked: false
        when: on_success
        access: all
        expire_in: 1 month
        paths:
        - "sdl3-all-amd64-3.0.0.deb"

Yes, this one technically uses bullseye, but it doesn’t work either.
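Since the image tag was re-pushed two days ago, one way to rule the image update in or out as the cause (a general sketch, not the fix found later in this thread) is to pin the image by digest instead of by tag. The digest below is the one reported by the docker pull output earlier in the thread, which corresponds to bookworm rather than the bullseye tag this job uses:

```yaml
# Hypothetical variant of the arm64 job: pinning by digest makes the runner
# pull the exact image contents even if the tag is later re-pushed.
build-arm64-pinned:
    stage: build
    image: arm64v8/debian@sha256:5795f37ddefb542232b211e833f8199b7fe2105f95ed653f16c3d0dfe7ece2d9
    script:
    - echo "running on a digest-pinned image"
```

If a digest-pinned job pulls fine while the tag-based job fails, the image update is implicated; if both fail, the problem is on the runner side.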

Edit: I’m not sure why someone flagged this as spam. I confirmed I could reproduce the problem the OP reported.

I can confirm this affected i386-based images as well. The i386 CI job on my self-hosted instance worked fine last week but is broken this week. The only thing that changed was software updates via apt.

So I opened a ticket about this issue: GitLab CI unable to pull non amd64 docker images (e.g. i386, arm) (#504433) · Issues · GitLab.org / GitLab

If anyone knows how to get GitLab CI to use the --platform flag when it’s pulling images, and to control the value that is passed in, we should be able to work around this issue. That would also allow the severity and priority of this bug to be lowered.
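For what it’s worth, GitLab’s CI YAML has an `image:docker:platform` keyword that controls the platform the runner requests when pulling the image, much like `docker pull --platform`. This is a sketch of a possible workaround, assuming a Docker-executor runner new enough to support that keyword (I haven’t confirmed the runner versions involved here support it):

```yaml
# Possible workaround sketch: request the arm64 variant explicitly via
# image:docker:platform, instead of relying on the runner's auto-detection.
build-arm64:
    stage: build
    image:
        name: arm64v8/debian:bookworm
        docker:
            platform: linux/arm64
    script:
    - uname -m
```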


@negative_1 Were you able to find a workaround to this?

I’ve reproduced the issue on my local GitLab instance & CI runner, but the same CI job ran fine on GitLab.com’s CI. Links to the logs for the specific CI jobs are in the ticket I opened.

I’m hoping there’s some configuration that I can change to make my CI infrastructure compatible with the new behavior of GitLab Runner.

I looked in your ticket and reproduced the way you specified the image; it works now.
