Docker-machine docker error

I somewhat resolved my issue with docker-machine proxy settings simply by making the squid proxy transparent. However, I am now getting a new error when the runner tries to pull a container image:

Running with gitlab-runner 15.0.0 (febb2a09)
  on autoscale-runner fHoSVvg-
Preparing the "docker+machine" executor (01:35)
Using Docker executor with image alpine:latest ...
Pulling docker image alpine:latest ...
WARNING: Failed to pull image with policy "always": Error response from daemon: Get https://registry-1.docker.io/v2/: http: server gave HTTP response to HTTPS client (manager.go:203:0s)
ERROR: Job failed: failed to pull image "alpine:latest" with specified policies [always]: Error response from daemon: Get https://registry-1.docker.io/v2/: http: server gave HTTP response to HTTPS client (manager.go:203:0s)

Does anyone know what might cause this? I am able to pull container images on other systems through the transparent proxy without issue.

Can you share the context / URL of the previous problem, and also how you configured the proxy?

Somewhere along the path, the proxy is returning a plain HTTP response to a client that requested HTTPS.

http: server gave HTTP response to HTTPS client (manager.go:203:0s)

Please share the command you are using to test pulling the images, and the Docker version.

Hi Michi,

I am not sure what you are asking for, i.e. context/URL. I simply spun up a docker+machine executor on vSphere, and when I used that runner for a CI job it failed because it had no internet access and I was unable to configure it to point to the proxy server. My workaround was to add iptables rules redirecting ports 80 and 443 to the squid proxy IP and port.

The Docker version is whatever is provided in the last release of boot2docker.iso, v19.03.12 according to the release notes.

Google searches on the error all return results about private, insecure registries, whereas in this case it is a public, secure registry.

To test I simply ran docker pull alpine:latest. On my gitlab-runner host, with Docker version 20.10.17-3 and no proxy environment variables set, I was able to pull the alpine:latest image. However, I now realise I did so with the proxy environment still set for Docker, since /etc/systemd/system/docker.service.d/http-proxy.conf contains the proxy settings. I renamed the file, reloaded the daemon and restarted the docker service, and now I get the same error.
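Roughly the rename/reload steps described above, for anyone who wants to reproduce the test (the .disabled suffix is arbitrary; systemd only reads *.conf drop-ins):

sudo mv /etc/systemd/system/docker.service.d/http-proxy.conf /etc/systemd/system/docker.service.d/http-proxy.conf.disabled
sudo systemctl daemon-reload
sudo systemctl restart docker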

docker pull alpine:latest

Error response from daemon: Get "https://registry-1.docker.io/v2/": http: server gave HTTP response to HTTPS client

Clearly Docker does not like the transparent proxying. Ideally I would like to set up docker+machine with a proxy configuration, but that does not seem possible.

squid.conf:
#
# Recommended minimum configuration:
#
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7       # RFC 4193 local private network range
acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

#acl SSL_ports port 443
acl SSL_ports port 443 563 1863 5190 5222 5050 6667
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
#http_port 3128
#http_port 192.168.0.1:3128 intercept
http_port 3128 intercept
#visible_hostname proxy.anfieldroad.int

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

Commands to implement transparent proxy:

iptables -t nat -A PREROUTING -i bond1.101 -p tcp --destination 0.0.0.0/0 --dport 80 -j DNAT --to 192.168.0.1:3128
iptables -t nat -A PREROUTING -i bond1.101 -p tcp --destination 0.0.0.0/0 --dport 443 -j DNAT --to 192.168.0.1:3128
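This is most likely why the pull fails: redirecting raw TLS traffic on port 443 into a plain http_port (even one marked intercept) means Squid answers the TLS handshake with an HTTP error page, which matches the "server gave HTTP response to HTTPS client" message above. Transparently intercepting HTTPS would need an ssl_bump listener instead. A rough sketch only: Squid must be built with SSL support, and the CA certificate and helper paths below are placeholders, not values from this setup.

# keep plain-HTTP interception on 3128
http_port 3128 intercept
# HTTPS interception needs its own port with ssl-bump and a local CA certificate
https_port 3129 intercept ssl-bump cert=/etc/squid/squid-ca.pem generate-host-certificates=on
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB
# peek at the TLS client hello, then splice (pass through without decrypting)
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice all

and the port-443 DNAT rule would then need to point at 3129 instead of 3128:

iptables -t nat -A PREROUTING -i bond1.101 -p tcp --dport 443 -j DNAT --to 192.168.0.1:3129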

I'm abandoning this approach; setting up a transparent proxy to support HTTPS is very different from setting up one where the applications are explicitly configured with a proxy, which works fine for me.

The issue here is clearly GitLab failing to provide a mechanism for passing proxy configuration to docker+machine hosts.

I honestly should not need to pull apart and reconfigure my squid proxy as a kludge workaround for such a clearly narrow-minded implementation of the docker+machine executor.

Hi,

Thanks for the context. The initial question seemed to refer to an earlier problem you had encountered; its original state, changes, and potential fixes can help analyze the current problem faster. I'm trying to understand the proxy setup and its network flow to help identify the cause of the HTTP-to-HTTPS error. I haven't used this scenario myself, so as many details as possible will help me understand it while I research possible ways to mitigate the problem :slight_smile:

While searching for docker-machine GitLab proxy settings I found the earlier forum topic: Docker-machine proxy settings

I set up docker-machine using the boot2docker ISO; it all runs and deploys a VM, but that's where the joy ends. Lo and behold, there are no proxy settings, so Docker can't pull images.

I’m not sure if boot2docker with its own docker-machine version will work. The documentation refers to installing the GitLab version: Install and register GitLab Runner for autoscaling with Docker Machine | GitLab

squid proxy test

I haven't used squid as a proxy for Docker yet, so I was curious to try the setup. The ubuntu/squid image worked without any configuration modifications, which could serve as a comparison point for your own squid configuration.

Tests were performed on an Ubuntu 20.04 LTS VM.

docker run -d --name squid-container -e TZ=UTC -p 3128:3128 ubuntu/squid:5.2-22.04_beta

sudo mkdir -p /etc/systemd/system/docker.service.d
vim /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://localhost:3128"
Environment="HTTP_PROXY=http://localhost:3128"

 sudo systemctl daemon-reload
 sudo systemctl restart docker

docker rm squid-container
docker run -d --name squid-container -e TZ=UTC -p 3128:3128 ubuntu/squid:5.2-22.04_beta

docker logs -f squid-container
root@legendiary:~# docker pull alpine:latest
latest: Pulling from library/alpine
2408cc74d12b: Pull complete
Digest: sha256:686d8c9dfa6f3ccfc8230bc3178d23f84eeaf7e457f36f271ab1acc53015037c
Status: Downloaded newer image for alpine:latest
docker.io/library/alpine:latest
1655891221.435    499 172.17.0.1 TCP_TUNNEL/200 6087 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891221.872    425 172.17.0.1 TCP_TUNNEL/200 10284 CONNECT auth.docker.io:443 - HIER_DIRECT/3.86.127.18 -
1655891222.347    473 172.17.0.1 TCP_TUNNEL/200 6219 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891235.285    384 172.17.0.1 TCP_TUNNEL/200 6087 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891235.724    428 172.17.0.1 TCP_TUNNEL/200 10284 CONNECT auth.docker.io:443 - HIER_DIRECT/3.86.127.18 -
1655891236.172    445 172.17.0.1 TCP_TUNNEL/200 6219 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891236.631    450 172.17.0.1 TCP_TUNNEL/200 7857 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891237.091    428 172.17.0.1 TCP_TUNNEL/200 6741 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891237.494    388 172.17.0.1 TCP_TUNNEL/200 6139 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891237.510    404 172.17.0.1 TCP_TUNNEL/200 6141 CONNECT registry-1.docker.io:443 - HIER_DIRECT/3.228.155.36 -
1655891237.543     47 172.17.0.1 TCP_TUNNEL/200 5256 CONNECT production.cloudflare.docker.com:443 - HIER_DIRECT/104.18.122.25 -
1655891237.587     72 172.17.0.1 TCP_TUNNEL/200 2809548 CONNECT production.cloudflare.docker.com:443 - HIER_DIRECT/104.18.122.25 -

The Docker CLI and squid proxy worked in my tests; I've also written up the steps in a short blog post.

For your setup I'd suggest checking whether the proxy logs provide more insight when the docker pull command is executed. Once the manual tests work with the CLI, the next step can be looking into docker-machine environment variables for the HTTP proxy settings, see below.

docker-machine with HTTP proxy

Found Set proxy servers on docker-machine (#75) · Issues · GitLab.org / Ops Sub-Department / docker-machine · GitLab

Digging further, I found a couple more things. Combining this knowledge into the following steps:

  1. Adding the proxy configuration in the current session
export HTTP_PROXY=http://yourproxy.net:8080
export HTTPS_PROXY=https://yourproxy.net:8080
  2. Re-creating the docker-machine machine
docker-machine rm default

docker-machine create -d virtualbox \
 --engine-env HTTP_PROXY="$HTTP_PROXY" \
 --engine-env HTTPS_PROXY="$HTTPS_PROXY" \
 default

Note: Replace -d virtualbox with the machine driver you are using.

From there, the VM should be using the HTTP proxy for pulling container images.

I have not tested the approach but it might be helpful for others finding this topic.
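One way to check whether the engine picked up the proxy settings after re-creating the machine could be something like this (a sketch; "default" is the machine name from the example above, and docker info only lists proxy lines when they are set on the daemon):

docker-machine ssh default "docker info | grep -i proxy"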

docker-machine for auto-scaling

docker-machine is unfortunately deprecated upstream by Docker; GitLab maintains a fork and is looking into alternatives and better auto-scaling implementations. A blueprint RFC is available at Next Runner Auto-scaling Architecture | GitLab

I'd suggest following Strategy in response to deprecation of Docker Machine by Docker (#341856) · Issues · GitLab.org / GitLab · GitLab and Autoscaling Provider for GitLab Runner to replace Docker Machine (&2502) · Epics · GitLab.org · GitLab and adding a comment describing your use case and proposal.

Cheers,
Michael


Hi Michi,

Thanks for your response. I am already using GitLab's fork of docker-machine, which provides the binary for creating Docker hosts but does not include a boot image, hence my use of the boot2docker ISO. In fact the documentation does not mention boot images at all outside of the documented Google solution. It is my opinion that the docker-machine documentation is incomplete and minimal at best: it explains how to install it and autoscale it, but misses the crucial piece, since you can't boot and autoscale anything if you have no boot image.

It covers no driver solutions beyond Google and VirtualBox, and there is no reference at all for the machine-options of the other drivers. In fact I had to search blogs and forum posts to find out how to configure the VMware driver, and that was how I learned about the existence of boot2docker.iso.
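For anyone hitting the same wall: the upstream vmwarevsphere driver takes options roughly like the following. This is only a sketch with placeholder values, not the configuration used in this thread, and the boot2docker URL is the upstream GitHub release of the v19.03.12 ISO mentioned above:

docker-machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.example.int \
  --vmwarevsphere-username svc-docker-machine \
  --vmwarevsphere-password 'changeme' \
  --vmwarevsphere-datacenter DC1 \
  --vmwarevsphere-datastore datastore1 \
  --vmwarevsphere-network 'VM Network' \
  --vmwarevsphere-boot2docker-url https://github.com/boot2docker/boot2docker/releases/download/v19.03.12/boot2docker.iso \
  test-machine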

So if there is a GitLab-provided ISO, I have been unable to find it, and right now I don't believe one exists.

I will try your suggestion, but I have never understood the point of creating the docker-machine VM manually. It lets you test the VM creation, but in the end the combination of the GitLab docker+machine executor and the content of the config.toml file determines what gets spun up, and from what I have read those engine-env variables do not work via the TOML file; I have tried it and it failed. While the docker-machine executable has --engine-env flags, they do not seem to be catered for in config.toml.

This is shown in:

“We’re also experiencing the same issue; the HTTP_PROXY variables being passed in --engine-env options do not appear to be getting used”

Looking at that issue, it never progressed to being solved and is marked as "Awaiting further demand". That was back in 2015.

If you know of a sure way this can be done then please share, because I was tearing my hair out trying every variation of providing env vars to the docker machine, with nothing gained.

Hi Michael,

I stand corrected. Adding the engine-env settings, this time both to the initial docker-machine test command and to the MachineOptions, worked!
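For reference, a sketch of what the relevant part of config.toml could look like. The proxy address, machine name, and NO_PROXY list are placeholders based on this thread, not the exact values used, and the driver-specific vmwarevsphere options are omitted:

[runners.machine]
  MachineDriver = "vmwarevsphere"
  MachineName = "gitlab-docker-machine-%s"
  MachineOptions = [
    "engine-env=HTTP_PROXY=http://192.168.0.1:3128",
    "engine-env=HTTPS_PROXY=http://192.168.0.1:3128",
    "engine-env=NO_PROXY=localhost,127.0.0.1,gitlab.anfieldroad.int"
  ]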

I did run into a new problem though, which comes from the container running on the docker-machine VM:

fatal: unable to access 'https://gitlab.anfieldroad.int/anfieldroad/movies-dev_kubernetes.git/': SSL certificate problem: unable to get local issuer certificate

So I hazard a guess that I need a registry with my own container images containing the full certificate chain for my network.

Workaround by adding:

environment = [
  "GIT_SSL_NO_VERIFY=1"
]

into the [[runners]] section of config.toml

Long term, I will be creating my own runner images with the CA certificate chain installed.
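A sketch of what such an image could look like, using alpine as in the job above (the image tag and certificate file name are placeholders):

# build with: docker build -t my-runner-image .
FROM alpine:latest
# ca-certificates provides update-ca-certificates on Alpine
RUN apk add --no-cache ca-certificates git
# add the internal CA chain to the system trust store
COPY anfieldroad-ca-chain.crt /usr/local/share/ca-certificates/anfieldroad-ca-chain.crt
RUN update-ca-certificates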

In the meantime I will call this a success, as my build succeeded, albeit with a default .gitlab-ci.yml pipeline since I don't have an actual container with tools set up yet.


Thanks, I wasn't sure which docker-machine driver you were using. Happy to see that you came back, tried again, and found a solution for your environment :slight_smile:

I’ll try to summarize the setup, problem and solutions for everyone finding this topic later.

  1. GitLab Runner with docker-machine for autoscaling
  2. The docker-machine driver is VMware vSphere, which needs a boot image for the VM; the GitLab-forked docker-machine binary does not ship one, so the boot2docker ISO is used (some drivers, e.g. VirtualBox, download a default boot2docker image automatically). This topic proved that the boot2docker ISO works for this.
  3. docker-machine runs on a host that only has HTTP/HTTPS access through an upstream squid proxy.
  4. How to pass the HTTP_PROXY and HTTPS_PROXY environment variables to the docker-machine VM is not clearly documented, but it worked with both the docker-machine create command and MachineOptions in the runner settings.

Adding the engine env settings this time to both the initial command to test docker-machine and to the machine-options worked!

Now that using docker-machine to spawn job containers in a VM works, the new problem is that the GitLab server uses a certificate from an internal CA, so cloning the repository fails inside the CI/CD jobs on the runner. Potential solution: add the local CA to the container base images, e.g. by following ubuntu - How do I add a CA root certificate inside a docker image? - Stack Overflow. Disabling TLS verification in the runner configuration is a workaround until custom images with the CA chain are built.