Mastering DevOps: Transforming a Go Web App with End-to-End Automation

Implementing End-to-End DevOps Practices for a Go Web App

In this article, we’ll implement end-to-end DevOps practices for a Go web application that currently has none. We’ll cover everything from containerizing the app with Docker to deploying it on an AWS EKS cluster with Kubernetes manifests, Helm, GitHub Actions, and ArgoCD. Let’s dive in!

Source Code & Repository 👇

You can find the source code for this project here:

Steps We’ll Follow

  1. Containerize the application using a multistage Dockerfile.

  2. Create Kubernetes manifests for deployment, service, and ingress.

  3. Set up Continuous Integration (CI) using GitHub Actions.

  4. Implement Continuous Deployment (CD) using GitOps with ArgoCD.

  5. Deploy the application on an AWS EKS cluster.

  6. Use Helm charts for environment-specific deployments.

  7. Configure an ingress controller to make the app accessible via a load balancer.


Step 1: Containerize the Go Web App with Docker

We’ll start by creating a multistage Dockerfile to containerize the application. This ensures a lightweight and secure final image.

Create a Dockerfile

# Build stage
FROM golang:1.22 AS base
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go build -o /app/main .

# Final stage - distroless image
FROM gcr.io/distroless/base
COPY --from=base /app/main .
COPY --from=base /app/static ./static
EXPOSE 8080
CMD ["./main"]
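
Optionally, a small .dockerignore keeps the build context lean. This is a minimal sketch assuming the layout used in this article; keep static/ out of it, since the Dockerfile copies that folder:

# .dockerignore (illustrative; adjust to your repository)
.git
*.md
k8s/
helm/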

Build and Run the Docker Image

docker build -t thepraduman/go-web-app:v1 .
docker run -p 8080:8080 -it thepraduman/go-web-app:v1

Once the container is running, you can access the app at http://localhost:8080/home.
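
You can also verify it from the terminal (assuming curl is installed):

curl -i http://localhost:8080/home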


Step 2: Deploy on Kubernetes with YAML Manifests

Next, we’ll create Kubernetes manifests to deploy the app on a Kubernetes cluster.

Push the Docker Image

Before deploying, push the Docker image to a container registry:

docker push thepraduman/go-web-app:v1
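
If the push is rejected with an authentication error, log in to Docker Hub first, and make sure the image name is prefixed with your own Docker Hub username:

docker login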

Create Kubernetes Manifests

Create a folder k8s/manifest and add the following files:

  1. deployment.yaml:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: go-web-app
       labels:
         app: go-web-app
     spec:
       replicas: 2
       selector:
         matchLabels:
           app: go-web-app
       template:
         metadata:
           labels:
             app: go-web-app
         spec:
           containers:
           - name: go-web-app
             image: thepraduman/go-web-app:v1
             ports:
             - containerPort: 8080
    
  2. service.yaml:

     apiVersion: v1
     kind: Service
     metadata:
       name: go-web-app
       labels:
         app: go-web-app
     spec:
       type: ClusterIP
       selector:
         app: go-web-app
       ports:
         - protocol: TCP
           port: 80
           targetPort: 8080
    
  3. ingress.yaml:

     apiVersion: networking.k8s.io/v1
     kind: Ingress
     metadata:
       name: go-web-app
       annotations:
         nginx.ingress.kubernetes.io/rewrite-target: /
     spec:
       ingressClassName: nginx
       rules:
       - host: "go-web-app.local"
         http:
           paths:
           - pathType: Prefix
             path: "/"
             backend:
               service:
                 name: go-web-app
                 port:
                   number: 80
    

Apply the Manifests

kubectl apply -f k8s/manifest
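
To confirm everything was created, list the resources (the names match the manifests above):

kubectl get deployment,service,ingress go-web-app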


Step 3: Set Up an EKS Cluster

To deploy the app on Kubernetes, we’ll use Amazon EKS. Ensure you have awscli, eksctl, and kubectl installed.

Create an EKS Cluster

eksctl create cluster --name demo-cluster --region ap-south-1
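
eksctl normally adds the new cluster to your kubeconfig automatically; if kubectl isn’t pointing at it yet, you can wire it up manually and confirm the worker nodes are ready (region and cluster name match the command above):

aws eks update-kubeconfig --region ap-south-1 --name demo-cluster
kubectl get nodes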

Step 4: Configure the Ingress Controller

At the moment, the ingress resource can't be reached because the cluster has no ingress controller. An ingress controller watches ingress resources and assigns them an address (on AWS, a load balancer). To make the app accessible, we’ll install the NGINX Ingress Controller.

  • First, let's make sure the service itself works without ingress. To check this, we'll change the service type from ClusterIP to NodePort (see the patch command after this list).

  • After changing the service type, run this command to find the NodePort your application is exposed on.

      kubectl get svc
    

  • Check the external IPs of the nodes in the Kubernetes cluster.

      kubectl get nodes -o wide
    

  • You can now check out the application at http://13.126.11.218:31296/home (your node's external IP and NodePort will differ).
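
As referenced above, one way to switch the service type for this test is a quick patch (a sketch; you can equally edit service.yaml and re-apply it):

kubectl patch svc go-web-app -p '{"spec": {"type": "NodePort"}}'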

Now, Install the Ingress Controller

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/aws/deploy.yaml

Let's check if the ingress controller is up and running.

kubectl get pod -n ingress-nginx

Verify the Ingress

kubectl get ing

Here, we can see that the ingress controller is managing our ingress resource and has assigned it a load balancer address: adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com.

But what happens if we open that load balancer address directly? Will we reach our application? No. The application is not accessible there, because the ingress rule only accepts requests whose host is go-web-app.local.

Map the load balancer’s DNS to go-web-app.local in your /etc/hosts file to access the app.

  • To obtain the IP address of adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com, execute the following command

      nslookup adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com
    

  • Now, let's map one of the IP addresses returned by nslookup to the host go-web-app.local. Open /etc/hosts with sudo and add a line like <load-balancer-ip> go-web-app.local.

      sudo vim /etc/hosts
    
  • You can now access the application at go-web-app.local! 🎉


Step 5: Simplify Deployments with Helm

When deploying an application across multiple environments, Helm becomes essential. So far, we've been using hard-coded configuration files for the deployment, service, and ingress. Imagine we need the image go-web-app:dev for development, go-web-app:qa for QA, and go-web-app:prod for production. Does that mean we have to maintain separate folders like k8s/dev, k8s/qa, and k8s/prod? Fortunately, no. Helm lets us turn these values into variables, so one chart serves every environment.
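
For example, once the chart exists (we create it below), a single chart can serve every environment by overriding values at install time. The release names and the values-prod.yaml file here are illustrative:

helm install go-web-app-dev helm/go-web-app-chart --set image.tag=dev
helm install go-web-app-prod helm/go-web-app-chart -f values-prod.yaml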

Create a Helm Chart

Make sure you have Helm installed on your computer. We'll create the chart under a helm/ directory so the paths match the CI pipeline later.

helm create helm/go-web-app-chart

Update the Helm Chart

  1. Copy the Kubernetes manifests into the chart's templates folder. Before copying, delete the default files that helm create generated in templates.

     rm -rf helm/go-web-app-chart/templates/*
     cp k8s/manifest/ingress.yaml helm/go-web-app-chart/templates
     cp k8s/manifest/service.yaml helm/go-web-app-chart/templates
     cp k8s/manifest/deployment.yaml helm/go-web-app-chart/templates
    

  2. Replace the image tag in deployment.yaml in the templates folder with {{ .Values.image.tag }}.

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: go-web-app
       labels:
         app: go-web-app
     spec:
       replicas: 2
       selector:
         matchLabels:
           app: go-web-app
       template:
         metadata:
           labels:
             app: go-web-app
         spec:
           containers:
           - name: go-web-app
             image: thepraduman/go-web-app:{{ .Values.image.tag }}
             ports:
             - containerPort: 8080
    

    Now, whenever Helm runs, it checks the values.yaml file for the image tag.

  3. Update values.yaml:

     # Default values for go-web-app-chart.
     # This is a YAML-formatted file.
     # Declare variables to be passed into your templates.
    
     replicaCount: 1
    
     image:
       repository: thepraduman/go-web-app
       pullPolicy: IfNotPresent
       # Overrides the image tag whose default is the chart appVersion.
       tag: "v1"
       # When we set up CI/CD, 
       # we'll make the Helm values.yaml update automatically. 
       # Every time the CI/CD runs, 
       # it will refresh the Helm values.yaml with the newest image created in the CI. 
       # Then, using ArgoCD, that latest image with the newest tag will be deployed automatically.
    
     ingress:
       enabled: false
       className: ""
       annotations: {}
         # kubernetes.io/ingress.class: nginx
         # kubernetes.io/tls-acme: "true"
       hosts:
         - host: chart-example.local
           paths:
             - path: /
               pathType: ImplementationSpecific
    
  4. To make sure ingress-nginx gets installed automatically as a dependency of your Helm chart, add the following to helm/go-web-app-chart/Chart.yaml (we'll fetch the dependency with helm dependency update just before installing, as shown after this list):

      dependencies:
       - name: ingress-nginx
         version: "4.10.0"  # Use latest stable version
         repository: "https://kubernetes.github.io/ingress-nginx"
    
  5. Let's verify that the Helm chart works as expected.

     # Delete the existing resources so we can recreate them from the Helm chart
     kubectl delete deploy/go-web-app svc/go-web-app ing/go-web-app
    

    Now, run the following command to create all the resources again with the Helm chart and watch the magic happen! 🚀
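
If you added the ingress-nginx dependency from step 4, pull it into the chart's charts/ directory first (the chart path matches the layout used in this article):

helm dependency update helm/go-web-app-chart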

Deploy with Helm

helm install go-web-app helm/go-web-app-chart

You can now access the application at go-web-app.local! 🎉

To uninstall everything, run the command

helm uninstall go-web-app

Here, we can say that Helm is working perfectly!


Step 6: Set Up CI with GitHub Actions

In CI, we will set up several stages:

  • Build and run unit tests.

  • Perform static code analysis.

  • Create a Docker image and push it.

  • Update Helm with the new Docker image.

Once this is complete, CD will take over:

  • When the Helm tag is updated, ArgoCD will pull the Helm chart and deploy it to the Kubernetes cluster.

To implement Continuous Integration (CI) with GitHub Actions, add a file at .github/workflows/ci.yaml:

name: CI/CD
# Exclude the workflow to run on changes to the helm chart
on:
  push:
    branches:
      - main
    paths-ignore:
      - 'helm/**'
      - 'k8s/**'
      - 'README.md'
jobs:
## stage 1
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
    - name: Set up Go 1.22
      uses: actions/setup-go@v5
      with:
        go-version: '1.22'
    - name: Build
      run: go build -o go-web-app
    - name: Test
      run: go test ./...
## stage 2
  code-quality:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
    - name: Run golangci-lint
      uses: golangci/golangci-lint-action@v6
      with:
        version: latest
## stage 3
  push:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
    - name: Login to DockerHub
      uses: docker/login-action@v3
      with:
        username: ${{ secrets.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}
    - name: Build and Push action
      uses: docker/build-push-action@v6
      with:
        context: .
        file: ./Dockerfile
        push: true
        tags: ${{ secrets.DOCKERHUB_USERNAME }}/go-web-app:${{github.run_id}}
## stage 4
  update-newtag-in-helm-chart:
    runs-on: ubuntu-latest
    needs: push
    steps:
    - name: Checkout repository
      uses: actions/checkout@v4
      with:
        token: ${{ secrets.TOKEN }}
    - name: Update tag in Helm chart
      run: |
        sed -i 's/tag: .*/tag: "${{github.run_id}}"/' helm/go-web-app-chart/values.yaml
    - name: Commit and push changes
      run: |
        git config --global user.email "abhishek@gmail.com"
        git config --global user.name "Abhishek Veeramalla"
        git add helm/go-web-app-chart/values.yaml
        git commit -m "Update tag in Helm chart"
        git push
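
The workflow references three repository secrets: DOCKERHUB_USERNAME, DOCKERHUB_TOKEN, and TOKEN (a GitHub personal access token that can push to the repo). One way to add them, assuming you use the GitHub CLI:

gh secret set DOCKERHUB_USERNAME
gh secret set DOCKERHUB_TOKEN
gh secret set TOKEN
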
  • Now that the CI pipeline is in place, let's check that everything runs smoothly. Commit the workflow file and push it to GitHub to trigger a run.

      git push
    

    Head over to the GitHub repository and take a look at the Actions tab.

Here, the GitHub Action is working perfectly, and we've successfully completed the implementation of continuous integration! 🎉


Step 7: Implement CD with ArgoCD

Now, let's turn our attention to the ArgoCD component. Every time the CI pipeline updates the image tag in the Helm chart, ArgoCD should spot the change and deploy it to the Kubernetes cluster.

Create a namespace called argocd and install ArgoCD there using a manifest file.

      kubectl create namespace argocd
      kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    
  • Access the Argo CD UI (Loadbalancer service).

      kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
    
  • On Windows, the same patch command needs different quoting:

      kubectl patch svc argocd-server -n argocd -p "{\"spec\": {\"type\": \"LoadBalancer\"}}"
    
  • Get the Loadbalancer service IP

      kubectl get svc argocd-server -n argocd
    

    Now you can access the ArgoCD UI at ae9f68624737e4e49a0bea0a56d6dce4-1141447750.ap-south-1.elb.amazonaws.com. Enjoy exploring!

  • To log in to ArgoCD, simply use admin as the username, and you can get the password by running the following command.

      kubectl get secrets -n argocd
      kubectl get secret argocd-initial-admin-secret -n argocd -o yaml
    

    The password we got is in base64 format, so to decode it, just run this command.

      echo <password-that-you-got> | base64 --decode
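
    Alternatively, you can fetch and decode the password in a single command:

      kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode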
    

We've got access to the ArgoCD UI! 🎉

  • Click the "New App" button and fill in the required details.

    Once you've filled in all the details, just click on "Create".

Now, ArgoCD will watch the Helm chart in the GitHub repository and sync whenever anything changes, for example when the CI pipeline updates the image tag in values.yaml. If you click on the application tile, you can see that ArgoCD has started deploying everything.
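
If you prefer a declarative setup instead of clicking through the UI, an equivalent Argo CD Application manifest might look like the sketch below; the repository URL is a placeholder and the chart path assumes the layout used in this article.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/go-web-app.git  # placeholder repository URL
    targetRevision: main
    path: helm/go-web-app-chart        # the Helm chart created earlier
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true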

So, we're all done, and guess what? We deployed the application using CI/CD! 🎉

Conclusion

By following these steps, we’ve successfully implemented end-to-end DevOps practices for our Go web app. From containerization to automated CI/CD pipelines, we’ve streamlined the deployment process and made it scalable and efficient.