Docker deployment to either Amazon ECS or ECR

I’m trying to create a deployment pipeline using either ECS or ECR, but so far I haven’t been able to get either one working.

  1. Using ECS
    I followed the instructions at https://docs.gitlab.com/ee/ci/cloud_deployment (the docs-based setup is sketched at the end of this post). I was able to build the Docker image and push it to GitLab’s registry, but the build stage failed with:
    The command '/bin/sh -c /bin/herokuish buildpack build' returned a non-zero code: 1

  2. Using ECR
    I’m attaching my script below; it is the same one I use in step 1, only slightly modified:

    image: docker:git

    variables:
      MAVEN_OPTS: >-
        -Dmaven.repo.local=.m2/repository
        -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN
        -Dorg.slf4j.simpleLogger.showDateTime=true
        -Djava.awt.headless=true
      MAVEN_CLI_OPTS: >-
        --batch-mode --errors --fail-at-end --show-version
        -DinstallAtEnd=true -DdeployAtEnd=true
      CI_AWS_ECS_CLUSTER: quarkus-cognito
      CI_AWS_ECS_SERVICE: quarkus-cognito-service
      CI_AWS_ECS_TASK_DEFINITION: quarkus-cognito-task-definition
      CI_AWS_REGISTRY_IMG: 981794644853.dkr.ecr.us-east-2.amazonaws.com/quarkus-cognito

    cache:
      paths:
        - .m2/repository
        - target
        - aws

    before_script:
      - echo " ------------------------------- Global > Before Script -------------------------------"
      - echo $CI_COMMIT_BRANCH

    stages:
      - compile
      - build
      - review
      - deploy
      - production

    compile-project:
      image: maven:3.6.3-openjdk-11
      stage: compile
      before_script:
        - apt-get update -qq
        - apt-get install -y -qq build-essential libz-dev zlib1g-dev
        - ls -la
        - chmod +x ./mvnw
      script:
        - echo "Building native app."
        - ./mvnw package

    build-docker:
      stage: build
      services:
        - docker:dind
      before_script:
        - echo "Login to GitLab"
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        - apk add curl python2 python-pip unzip
        - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
        - unzip awscliv2.zip
        - ls -la
        - ${CI_PROJECT_DIR}/aws/install -i /usr/local/aws-cli -b /usr/local/bin
        - aws --version
        - aws ecr get-login-password --region ap-southeast-1 | docker login --username xxx --password-stdin xxx
      script:
        - echo "Building image"
        - ls ${CI_PROJECT_DIR}/target
        - docker pull $CI_REGISTRY_IMAGE:latest || true
        - docker build -f src/main/docker/Dockerfile.jvm --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
        - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
        - docker push $CI_REGISTRY_IMAGE:latest
        - echo "Pushing to ECR"
        - docker tag $CI_REGISTRY_IMAGE:latest $CI_AWS_REGISTRY_IMG:$CI_COMMIT_SHA
        - docker tag $CI_REGISTRY_IMAGE:latest $CI_AWS_REGISTRY_IMG:latest
        - docker push $CI_AWS_REGISTRY_IMG:$CI_COMMIT_SHA
        - docker push $CI_AWS_REGISTRY_IMG:latest
      only:
        - master

The problem with this approach is that the AWS CLI never ends up usable in the job. Here’s the error:
$ aws --version
/bin/sh: eval: line 118: aws: not found
ERROR: Job failed: exit code 127

Does somebody have a working pipeline for deploying to Amazon EC2 from GitLab? I’m building a Java project, specifically with the Quarkus framework.
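For reference, as far as I can tell the docs-based ECS setup from step 1 boils down to something like this (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION are set as project CI/CD variables):

include:
  - template: AWS/Deploy-ECS.gitlab-ci.yml   # as far as I can tell, this pulls in Auto Build plus the ECS deploy job

variables:
  CI_AWS_ECS_CLUSTER: quarkus-cognito
  CI_AWS_ECS_SERVICE: quarkus-cognito-service
  CI_AWS_ECS_TASK_DEFINITION: quarkus-cognito-task-definition

If I understand the error correctly, the herokuish failure comes from the Auto Build job that this template pulls in; Auto Build falls back to buildpacks when there is no Dockerfile in the repository root.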

Hello there!

You are talking about a lot of different AWS technologies there, so I am not really sure where you want to start:

  • ECR is to store your docker images
  • ECS is to run your docker images
  • EC2 is the computing service - do you want to use ECS backed by EC2 instead of Fargate?
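By the way, the aws: not found error is most likely because docker:git is an Alpine image and the AWS CLI v2 bundle ships glibc-linked binaries, so the installed aws binary cannot run there (also note your get-login-password call uses ap-southeast-1 while the registry is in us-east-2). If you do want the CLI in that job, installing the v1 CLI from pip is simpler; a rough sketch, reusing the registry from your variables:

before_script:
  - apk add --no-cache python3 py3-pip
  - pip3 install awscli        # AWS CLI v1 is pure Python, so it runs on musl/Alpine
  - aws --version
  - aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 981794644853.dkr.ecr.us-east-2.amazonaws.com

But for building and pushing the image you do not need the AWS CLI at all if you use kaniko, see below.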

Anyhow, I can suggest how to build the Docker image for ECR. First of all, I suggest using kaniko; it makes the whole thing a lot easier to manage.

The .gitlab-ci.yml can then be something like this:

buildDocker:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  stage: build
  variables:
    REGISTRY: xxx.dkr.ecr.us-east-2.amazonaws.com/quarkus-cognito
  only:
    - master
  when: manual
  script:
    # see https://github.com/GoogleContainerTools/kaniko/issues/1227
    - mkdir -p /kaniko/.docker
    - echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --dockerfile Dockerfile --destination $REGISTRY:$CI_COMMIT_SHORT_SHA --destination $REGISTRY:latest

This builds and pushes your Docker image to ECR: you need to configure AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the project’s secret CI/CD variables. Kaniko will automatically log in for you.
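For this project’s Quarkus layout, with the Dockerfile under src/main/docker as in your script, the executor line can point at it explicitly (untested sketch; the rest of the job stays the same):

    - /kaniko/executor --cache=true
        --context $CI_PROJECT_DIR
        --dockerfile $CI_PROJECT_DIR/src/main/docker/Dockerfile.jvm
        --destination $REGISTRY:$CI_COMMIT_SHORT_SHA
        --destination $REGISTRY:latest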

After you are able to push your Docker image to ECR we can talk about how to deploy it, but I need to understand if you want to use ECS or something else.


Hi,

ECS backed by EC2 instead of Fargate?
-> Yes, this is what I’m trying to do. But since I wasn’t able to make it work, I also tried the approach in the documentation, which uses GitLab’s image registry to update AWS ECS.

My goal is to deploy a Java app to an EC2 instance. My plan is to build a Docker image in GitLab, push it to ECR, and trigger ECS to redeploy by fetching the updated image from ECR and running it on EC2.
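So the pipeline shape I’m aiming for is roughly:

stages:
  - compile   # ./mvnw package
  - build     # build the Docker image and push it to ECR
  - deploy    # tell ECS to pull the new image and restart the task on the EC2-backed cluster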

I am now able to push to AWS ECR successfully. I can deploy the ECR image if I stop the running ECS task before committing the project. Can you help me automate this process?

Thanks.

Sure, not a problem, it’s our pipeline as well :slight_smile:

So, now you have your image on ECR, and I assume your ECS task definition points to that image.

Personally, to deploy the new image I use this GitHub project. I know that GitLab has made a dedicated Docker image, but I haven’t tested it yet.

The job is something like this:

deployJob:
  stage: deploy
  image: python:3.8
  only:
    - master
  before_script:
    - pip install ecs-deploy
  when: manual
  needs:
    - buildJob
  script:
    - ecs deploy <cluster-name> <service-name> -t $CI_COMMIT_SHORT_SHA --timeout 600

This works because we tagged the Docker image on ECR with CI_COMMIT_SHORT_SHA. The script creates a new revision of the ECS task definition that uses the image with that tag.
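If you ever want to do it without the tool, this is roughly what ecs deploy automates under the hood (an untested sketch that assumes jq is available, a single container in the task definition, and the names/variables from earlier in the thread):

NEW_IMAGE="$CI_AWS_REGISTRY_IMG:$CI_COMMIT_SHORT_SHA"
# 1. fetch the current task definition
TASK_DEF=$(aws ecs describe-task-definition --task-definition quarkus-cognito-task-definition \
             --query 'taskDefinition' --output json)
# 2. point the container at the new image and drop the read-only fields
NEW_DEF=$(echo "$TASK_DEF" | jq --arg img "$NEW_IMAGE" \
            '.containerDefinitions[0].image = $img
             | del(.taskDefinitionArn, .revision, .status, .requiresAttributes,
                   .compatibilities, .registeredAt, .registeredBy)')
# 3. register the new revision and point the service at it
NEW_ARN=$(aws ecs register-task-definition --cli-input-json "$NEW_DEF" \
            --query 'taskDefinition.taskDefinitionArn' --output text)
aws ecs update-service --cluster quarkus-cognito --service quarkus-cognito-service \
  --task-definition "$NEW_ARN"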

Of course this is only a basic example; let me know if you need any details on how to create a proper deployment on ECS with a clean shutdown of the old task and so on. But on the GitLab side this should be everything.
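For example, the “stop the running task first” issue you mentioned is usually a sign that the single EC2 instance cannot run the old and the new task side by side (host port or resource conflicts); lowering the service’s minimumHealthyPercent lets ECS stop the old task before starting the new one. A sketch with the AWS CLI, using illustrative values:

# let ECS stop the old task first during a deployment
aws ecs update-service \
  --cluster quarkus-cognito \
  --service quarkus-cognito-service \
  --deployment-configuration maximumPercent=100,minimumHealthyPercent=0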

Hi,

Thanks for the help. I was able to complete our deployment pipeline.

For those who are developing the same pipeline, here’s the working configuration that we have:

Cheers,
Edward


Link is broken, but this one works: