Docker In Docker Crazy Weird Issue [SOLVED]

I’m facing a very weird issue and haven’t found a solution yet.
I’ve set up a simple pipeline in GitLab for

  • building a container image and pushing it to a registry
  • deploying it to Kubernetes

The problem is that the image built within the pipeline does not work properly. By not working properly, I mean that once I deploy the image to Kubernetes, I can’t connect to the container (through a NodePort).

Some important info:

  • the app within the container seems to be running properly, because I can read the logs and see that it initialized correctly
  • if I build the image on my dev machine (Windows, if it makes any difference) using the same Dockerfile, push it to the registry and redeploy using that image, everything works perfectly in Kubernetes
  • there is a small size difference (49 MB vs 53 MB) between the image I build locally and the image built inside the GitLab pipeline … could it be some compression difference between Windows and Linux?
  • the pipeline code for building the image is pretty standard as far as I know (see also the smoke-test sketch right after it):
    build_bo:
      image: docker:19
      services:
        - docker:19-dind
      stage: build
      script:
        - echo $REGISTRY
        - echo $REGISTRY_USER
        - echo $REGISTRY_PWD
        - echo $REGISTRY_IMAGE
        - echo $IMAGE_TAG
        - echo $IMAGE_RELEASE
        - docker info
        - docker login -u $REGISTRY_USER -p $REGISTRY_PWD $REGISTRY
        - docker build -t $IMAGE_TAG .
        - docker tag $IMAGE_TAG $IMAGE_RELEASE
        - docker push $IMAGE_TAG
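In hindsight, a quick smoke test at the end of that script would have caught the problem before deployment. A minimal sketch, assuming the app listens on port 3300 and its files live under /app (both the port and the path are assumptions about my setup, adjust for yours):

        # run the freshly built image on the dind daemon
        - docker run -d --name smoke $IMAGE_TAG
        - sleep 5
        # check that the expected files actually made it into the image (path is a placeholder)
        - docker exec smoke ls -la /app
        # probe the app from inside the container's network namespace, same idea as a manual curl test
        - docker run --rm --network container:smoke curlimages/curl -sf http://localhost:3300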

Does anybody have any ideas? This problem has been driving me crazy for 2 days.

Complementary information:
I can open a terminal in both containers. I installed curl in both and, when running curl localhost:3300 (it’s a Node app), I only get a response from the container/image that works (the one I built locally). On the other hand, as mentioned in the previous post, the logs of both containers show that the Node app initialized properly.
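Since a shell is available in both containers, another check (with hindsight) is to dump and diff the file lists of the two pods. A rough sketch, assuming the app lives under /app (just a placeholder path) and with placeholder pod names:

    # inside each pod (kubectl exec -it <pod> -- sh), list the app files
    find /app -type f | sort > /tmp/files.txt
    # then copy the lists out and compare them locally
    kubectl cp <good-pod>:/tmp/files.txt files-good.txt
    kubectl cp <bad-pod>:/tmp/files.txt files-bad.txt
    diff files-good.txt files-bad.txt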

Therefore the problem is really at the image level (even if that was already extremely likely, considering that an image built locally works properly).
The question is how building an image within a GitLab pipeline using docker:dind could possibly affect the image and make it different from an image built in a different environment from the exact same Dockerfile ???
It must be something stupid, but I’m somewhat at a loss here …

======== UPDATE ============================================
I set up an Ubuntu VM and built the exact same Dockerfile from that machine and … boom … the image does not work (well, at least where connectivity is concerned).
Therefore it does not seem to be something related to GitLab specifically, but some difference between Windows and Linux when building the exact same Dockerfile …!!!

Or possibly something is missing when I check out the repository from scratch? That’s what I will investigate now …

Well, in the end, the origin of the problem was that the copyfile utility does not work exactly the same way under Linux as under Windows.
When building the image in the context of the GitLab pipeline, some files were missing, and even though the app initialized properly (as confirmed by the container log), it was not returning anything, giving the impression that it was not accessible.
Because I am rather new to both GitLab pipelines and Kubernetes, I was initially focused on a networking/Kubernetes problem before considering anything else.
Too bad there is no utility to compare the filesystems of two images (as far as I know).
I was able to pinpoint the missing files using the “dive” utility, even if it’s a bit more tedious.
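One workaround that should get close to a filesystem comparison (untested sketch, image names and tags are placeholders) is to export each image’s flattened filesystem and diff the file lists:

    # create (but don't start) a container from each image, then export and list its filesystem
    docker create --name img-local <registry>/<image>:local-tag
    docker create --name img-ci    <registry>/<image>:ci-tag
    docker export img-local | tar -tf - | sort > files-local.txt
    docker export img-ci    | tar -tf - | sort > files-ci.txt
    diff files-local.txt files-ci.txt
    docker rm img-local img-ci

(dive itself is just run as “dive <image>:<tag>” and then you browse the tree layer by layer, which is why it’s more tedious.)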