I somewhat resolved my issue with docker-machine proxy settings simply by making the squid proxy transparent. However, I am now getting a new error when it tries to pull a container image:
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry-1.docker.io/v2/: http: server gave HTTP response to HTTPS client (manager.go:203:0s)
ERROR: Job failed: failed to pull image "alpine:latest" with specified policies [always]: Error response from daemon: Get https://registry-1.docker.io/v2/: http: server gave HTTP response to HTTPS client (manager.go:203:0s)
Anyone know what might cause this? I am able to pull container images on other systems through the transparent proxy without issue.
I am not sure what context/URL you are asking for. I simply spun up a docker+machine executor on vSphere, and when I used that runner for a CI job it failed because the VM had no internet access and I was unable to configure it to point to the proxy server. My workaround for that is simply to add iptables rules redirecting ports 80 and 443 to the squid proxy IP and port.
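The redirect rules are roughly the following (the squid host address 192.168.0.1 and intercept port 3128 match the squid.conf further down; adjust for your network):

```shell
# On the gateway: redirect outbound HTTP/HTTPS to the squid intercept port
iptables -t nat -A PREROUTING -p tcp --dport 80  -j DNAT --to-destination 192.168.0.1:3128
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.0.1:3128
```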
The Docker version is whatever ships in the latest boot2docker.iso release, v19.03.12 according to the release notes.
Google searches on the error all return results about private insecure registries, whereas in this case it is a public, secure registry.
To test, I simply ran docker pull alpine:latest. On my gitlab-runner host with Docker version 20.10.17-3 and with no proxy environment variables set, I was able to pull down the alpine:latest container. HOWEVER, I now realise I did so with the proxy environment still set for Docker, as /etc/systemd/system/docker.service.d/http-proxy.conf contains the proxy settings. I renamed the file, reloaded the daemon, and restarted the Docker service, and now I get the same error.
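For reference, the drop-in contained the usual systemd environment settings, along these lines (the proxy address shown is an example):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.0.1:3128"
Environment="HTTPS_PROXY=http://192.168.0.1:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```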
Clearly Docker does not like the transparent proxying. Ideally I would like to set docker+machine up with a proxy config, but that does not seem possible.
squid.conf:
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
#acl SSL_ports port 443
acl SSL_ports port 443 563 1863 5190 5222 5050 6667
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
#http_port 3128
#http_port 192.168.0.1:3128 intercept
http_port 3128 intercept
#visible_hostname proxy.anfieldroad.int
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid
# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
I'm abandoning this approach; setting up a transparent proxy that supports HTTPS is very different from setting up one where the applications are configured with a proxy, which works fine for me.
The issue here is clearly GitLab's failure to implement a mechanism for providing proxy configuration to docker+machine hosts.
I honestly should not need to pull apart and reconfigure my squid proxy as a kludge workaround for such a clearly narrow-minded vision in the implementation of the docker+machine executor.
Thanks for the context. The initial question seemed to refer to an earlier problem you had encountered; its original state, the changes made, and the fixes attempted can help analyze the current problem faster. I'm trying to understand the transparent proxy setup and its network flow, to help identify the cause of the http-to-https error. I haven't used this scenario myself, so as many details as possible will help me understand while I research possible ways to mitigate the problem.
While searching for docker-machine GitLab proxy settings, I found this earlier forum topic: Docker-machine proxy settings
I set up docker-machine using the boot2docker ISO; it all runs and deploys a VM, but that's where the joy ends. Lo and behold, there are no proxy settings, so Docker can't pull images.
I haven't used squid as a proxy for Docker before, so I was curious to try the setup. The ubuntu/squid image worked without any configuration modifications, which may help you analyze your own squid configuration.
Docker CLI and squid proxy worked in my tests, I’ve also written a short blog post in
For your setup, I'd suggest investigating whether the proxy logs provide more insight when the docker pull command is executed. Once the manual tests work with the CLI, the next step can be looking into docker-machine environment variables for the HTTP proxy settings, see below.
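A manual test of those variables could look like this (the proxy URL is an example, and proxy-test is just a throwaway machine name):

```shell
docker-machine create -d virtualbox \
  --engine-env HTTP_PROXY=http://192.168.0.1:3128 \
  --engine-env HTTPS_PROXY=http://192.168.0.1:3128 \
  --engine-env NO_PROXY=localhost,127.0.0.1 \
  proxy-test

# Then verify a pull goes through the proxy
docker-machine ssh proxy-test docker pull alpine:latest
```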
Note: Replace -d virtualbox with the machine driver you are using.
From there, the VM should use the HTTP proxy for pulling container images.
I have not tested the approach but it might be helpful for others finding this topic.
docker-machine for auto-scaling
docker-machine is unfortunately deprecated upstream by Docker; GitLab maintains a fork and is looking for alternatives and better auto-scaling implementations. A blueprint RFC is available at Next Runner Auto-scaling Architecture | GitLab
Thanks for your response. I am already using GitLab's fork of docker-machine, which provides the binary for creating Docker hosts but does not include a boot image, hence my use of the boot2docker ISO. In fact, the documentation does not mention boot images at all outside of the documented Google solution. In my opinion the docker-machine documentation is incomplete and minimal at best: it covers how to install it and how to autoscale it, but omits the crucial information, since you can't boot and autoscale anything if you have no boot image.
It covers no driver solutions beyond Google and VirtualBox, and there is absolutely no reference for all the machine options in those drivers. In fact, I had to google blogs and forum posts to find out how to configure the VMware driver, and that was how I learned about the existence of boot2docker.iso.
So if there is a GitLab-provided ISO, I have been unable to find it, and right now I don't believe one exists.
I will try your suggestion, but I have never understood the point of creating the docker-machine VM manually. It lets you test the VM creation, but in the end the combination of the GitLab docker+machine executor and the contents of config.toml determines what gets spun up, and from what I have read in my searches, those engine-env variables do not work via the TOML file. I have tried it, and it failed. While the docker-machine executable has --engine-env flags, they do not seem to be catered for in config.toml.
This is shown in:
“We’re also experiencing the same issue; the HTTP_PROXY variables being passed in --engine-env options do not appear to be getting used”
Looking at that issue, it never progressed to being solved and is marked as "Awaiting further demand". That was back in 2015.
If you know of a sure way this can be done, then please share, because I was tearing my hair out trying all variations of providing env vars to the docker machine, with nothing gained.
I stand corrected. Adding the engine-env settings, this time to both the initial command to test docker-machine and to the machine options, worked!
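For anyone finding this later, the relevant part of my config.toml now looks along these lines (the proxy address is specific to my network, and the other driver options are trimmed for brevity):

```toml
[runners.machine]
  MachineDriver = "vmwarevsphere"
  MachineOptions = [
    "engine-env=HTTP_PROXY=http://192.168.0.1:3128",
    "engine-env=HTTPS_PROXY=http://192.168.0.1:3128",
    "engine-env=NO_PROXY=localhost,127.0.0.1",
  ]
```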
I did run into a new problem though which is coming from the container on the docker-machine:
fatal: unable to access 'https://gitlab.anfieldroad.int/anfieldroad/movies-dev_kubernetes.git/': SSL certificate problem: unable to get local issuer certificate
So I hazard a guess that I need to create a registry with my own container images containing the full-chain certificate for my network.
Long term, I will be creating my own runner images which will have the CA certificate chain installed.
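A minimal sketch of such an image (the certificate filename is hypothetical, and Alpine is just an example base):

```dockerfile
FROM alpine:latest
# Copy the internal CA chain and register it with the system trust store
COPY internal-ca-chain.crt /usr/local/share/ca-certificates/internal-ca-chain.crt
RUN apk add --no-cache ca-certificates && update-ca-certificates
```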
In the meantime I will call this a success, as my build succeeded, albeit using a default .gitlab-ci.yml pipeline, since I don't have an actual container with tools set up yet.
Thanks, I wasn't sure which docker-machine driver you were using. Happy to see that you've come back, tried again, and found a solution for your environment.
I’ll try to summarize the setup, problem and solutions for everyone finding this topic later.
GitLab Runner with docker-machine for autoscaling
The docker-machine driver needs a boot image for the VM, hence the boot2docker ISO is used together with the GitLab-forked docker-machine binary. My guess was that docker-machine somehow brings a default image already; this topic proved that boot2docker also works.
docker-machine is running on a host that only has HTTP/HTTPS access via a squid proxy in front of it.
How to pass the HTTP_PROXY and HTTPS_PROXY environment variables to the docker-machine VM was not clear, but it worked with both the docker-machine create command and MachineOptions in the runner settings.
Adding the engine env settings this time to both the initial command to test docker-machine and to the machine-options worked!