Mastering DevOps: Transforming a Go Web App with End-to-End Automation
Achieving Total DevOps Integration in One Project
Table of contents
- Steps We Will Take:
- Containerizing Your Go Web App with Docker: A Step-by-Step Guide
- Deploying on Kubernetes: Crafting Your YAML Manifests
- Setting Up Your EKS Cluster: A Prerequisite for Kubernetes Deployment
- Ingress Controller Configuration: Making Your App Accessible
- Harnessing the Power of Helm: Simplifying Multi-Environment Deployments
- Streamlining CI with GitHub Actions: Automate Your Workflow
- Seamless CD with ArgoCD: Automate Your Kubernetes Deployments
- Conclusion:
Source code & repository of the project
In this article, we will implement comprehensive end-to-end DevOps practices on a project that currently lacks any DevOps methodologies!
Steps We Will Take:
First, we'll containerize the project using Docker by writing a multistage Dockerfile.
Next, we'll create Kubernetes manifests, including deployment, service, and ingress.
We'll set up continuous integration with GitHub Actions to keep everything running smoothly.
Then, we'll implement continuous deployment using GitOps with ArgoCD.
We'll set up a Kubernetes EKS cluster since our CI/CD process needs to deploy the application on Kubernetes.
We'll also set up Helm charts, so the development team can easily deploy the application in different environments (like dev, QA, and prod) by using Helm charts and passing the values.yaml, instead of writing a manifest for each environment.
Finally, we'll configure the ingress controller so it can create a load balancer based on the ingress settings, allowing the application to be accessible to the outside world.
Containerizing Your Go Web App with Docker: A Step-by-Step Guide
So, let's kick things off with the first step of our project: containerizing it by creating a multistage Dockerfile.
Create a Dockerfile
FROM golang:1.22 AS base
WORKDIR /app
COPY go.mod .
RUN go mod download
COPY . .
RUN go build -o /app/main .
# final stage - distroless image
FROM gcr.io/distroless/base
COPY --from=base /app/main .
COPY --from=base /app/static ./static
EXPOSE 8080
CMD ["./main"]
Build a Docker image using the Dockerfile (just make sure Docker is installed on your server).
docker build -t praduman/go-web-app:v1 .
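If you're curious how much the multistage build with a distroless base actually saves, you can compare image sizes after building (a quick check using the standard listing command; the repository name matches the tag above):
docker images praduman/go-web-app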
Now, let's run the image and check to make sure the containerization was successful!
docker run -p 8080:8080 -it praduman/go-web-app:v1
The server will start on port 8080! You can access it by navigating to http://localhost:8080/home in your web browser.
Deploying on Kubernetes: Crafting Your YAML Manifests
Now, let's write the Kubernetes YAML manifests so our project is all set to be deployed on a Kubernetes cluster. But before we do that, you'll need to push the Docker image we created to Docker Hub.
docker push praduman/go-web-app:v1
Now, let's create a folder called k8s and, inside it, another folder named manifest for our Kubernetes manifest files.
Create a deployment.yaml file inside the manifest folder to deploy the application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-web-app
  template:
    metadata:
      labels:
        app: go-web-app
    spec:
      containers:
      - name: go-web-app
        image: praduman/go-web-app:v1
        ports:
        - containerPort: 8080
Create a service.yaml file inside the manifest folder.
apiVersion: v1
kind: Service
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  type: ClusterIP
  selector:
    app: go-web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Create an ingress.yaml file inside the manifest folder.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-web-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: "go-web-app.local"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: go-web-app
            port:
              number: 80
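Once you have a cluster to point kubectl at (we set one up in the next section), a client-side dry run is a cheap way to catch YAML mistakes in these files before creating anything:
kubectl apply --dry-run=client -f k8s/manifest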
Setting Up Your EKS Cluster: A Prerequisite for Kubernetes Deployment
Now, let's try out the Kubernetes manifests we've created. For that, we'll need a Kubernetes cluster, and we'll use EKS from AWS.
Ensure that AWS CLI, eksctl, and kubectl are installed on your server prior to setting up EKS.
Install the EKS cluster with eksctl:
eksctl create cluster --name demo-cluster --region ap-south-1
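eksctl usually updates your kubeconfig automatically once the cluster is ready. If kubectl can't see the cluster, you can wire it up manually and confirm the nodes are up (standard AWS CLI and kubectl commands, using the same cluster name and region as above):
aws eks update-kubeconfig --name demo-cluster --region ap-south-1
kubectl get nodes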
Let's proceed to create the deployment, service, and ingress using the Kubernetes manifest files.
kubectl apply -f k8s/manifest
Ingress Controller Configuration: Making Your App Accessible
At the moment, the application can't be reached through the Ingress because we haven't installed an ingress controller. The controller watches Ingress resources and assigns them an address (on AWS, a load balancer).
First, let's make sure the service is running smoothly without using ingress. To check this, we'll change the service type from ClusterIP to NodePort.
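You can either edit service.yaml and re-apply it, or patch the live service directly. A quick sketch of the patch approach (the service name matches our manifest):
kubectl patch svc go-web-app -p '{"spec": {"type": "NodePort"}}'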
Run this command after you've changed the service type to find out the NodePort where your application is running.
kubectl get svc
Take a look at the external IPs of the nodes in the Kubernetes cluster.
kubectl get nodes -o wide
You can now check out the application at http://13.126.11.218:31296/home (your node IP and NodePort will differ). If it doesn't load, make sure the node's security group allows inbound traffic on that NodePort.
Now, let's implement the ingress controller!
To install the NGINX ingress controller on AWS, just run the command.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.1/deploy/static/provider/aws/deploy.yaml
Let's check if the ingress controller is up and running.
kubectl get pod -n ingress-nginx
Let's verify if the ingress controller is monitoring our ingress resource.
kubectl get ing
Here, we can see that the ingress controller is managing our ingress resource. It has assigned a domain name: adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com.
Wait a minute: what happens if we try to access the load balancer at adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com? Will we be able to reach our application?
In this scenario, the application is not accessible through that address. Why? Because in the ingress configuration, we specified that the load balancer should only accept requests for the hostname go-web-app.local.
To access the application, we need to map the hostname go-web-app.local to the IP address of the load balancer.
To obtain the IP address of the load balancer, execute the following command.
nslookup adc73383deb374481a2ea5c3f048b7d2-181af1fd367d9cc8.elb.ap-south-1.amazonaws.com
Now, let's link the load balancer's IP with the host go-web-app.local by editing the hosts file.
sudo vim /etc/hosts
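For example, if nslookup returned 3.110.12.34 (a placeholder; use whichever IP you actually got), the new line in /etc/hosts would look like this:
3.110.12.34 go-web-app.local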
You can now access the application at go-web-app.local!
Harnessing the Power of Helm: Simplifying Multi-Environment Deployments
When deploying an application across different environments, Helm becomes essential. So far, we've been using hard-coded configuration files for services, deployments, and more. Imagine we need the image go-web-app:dev for the development environment, go-web-app:prod for production, and go-web-app:qa for QA. Does this mean we have to create separate folders like k8s/dev, k8s/prod, and k8s/qa? Fortunately, no. Helm allows us to make these configurations variable, simplifying the process.
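As a preview of where we're headed, the same chart can then be installed per environment just by overriding values (a sketch; the chart is the one we create below, and values-dev.yaml/values-prod.yaml are hypothetical per-environment value files):
helm install go-web-app-dev ./go-web-app-chart -f values-dev.yaml
helm install go-web-app-prod ./go-web-app-chart -f values-prod.yaml
# or override a single value inline:
helm install go-web-app ./go-web-app-chart --set image.tag=dev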
Create a folder called helm in the project's root directory. Make sure you have Helm installed on your machine.
To get a Helm chart template for deploying applications in Kubernetes, head over to the helm directory and run the command
helm create go-web-app-chart
Delete everything from the templates folder, then copy the configuration files deployment.yaml, service.yaml, and ingress.yaml into it.
cp k8s/manifest/deployment.yaml helm/go-web-app-chart/templates
cp k8s/manifest/service.yaml helm/go-web-app-chart/templates
cp k8s/manifest/ingress.yaml helm/go-web-app-chart/templates
Change the image tag value to {{ .Values.image.tag }} in the templates/deployment.yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-web-app
  labels:
    app: go-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-web-app
  template:
    metadata:
      labels:
        app: go-web-app
    spec:
      containers:
      - name: go-web-app
        image: praduman/go-web-app:{{ .Values.image.tag }}
        ports:
        - containerPort: 8080
Now, whenever Helm renders the chart, it reads the image tag from the values.yaml file. Proceed to modify the values.yaml file accordingly.
# Default values for go-web-app-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: praduman/go-web-app
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v1"
# When we set up CI/CD, we'll make the pipeline update this values.yaml
# automatically: every CI run refreshes the tag with the newest image built
# in CI, and ArgoCD then deploys that image automatically.
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
To make sure ingress-nginx gets installed automatically as a dependency of your Helm chart, add the following to ./go-web-app-chart/Chart.yaml:
dependencies:
  - name: ingress-nginx
    version: "4.10.0" # use the latest stable version
    repository: "https://kubernetes.github.io/ingress-nginx"
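After declaring the dependency, Helm needs to fetch it into the chart's charts/ directory before an install will succeed:
helm dependency update ./go-web-app-chart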
Let's verify whether Helm is working as expected.
# let's delete all the resources and recreate them using the Helm chart
kubectl delete deploy/go-web-app svc/go-web-app ing/go-web-app
Now, run the following command to create all the resources again with the Helm chart and watch the magic happen!
helm install go-web-app ./go-web-app-chart
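You can confirm the release and the recreated resources with the standard listing commands:
helm list
kubectl get deploy,svc,ing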
You can now access the application at go-web-app.local!
To uninstall everything, run the command
helm uninstall go-web-app
Here, we can say that Helm is working perfectly!
Streamlining CI with GitHub Actions: Automate Your Workflow
In CI, we will set up several stages:
- Build and run unit tests.
- Perform static code analysis.
- Create a Docker image and push it.
- Update the Helm chart with the new Docker image tag.
Once this is complete, CD will take over:
- When the Helm tag is updated, ArgoCD will pull the Helm chart and deploy it to the Kubernetes cluster.
To implement Continuous Integration (CI) using GitHub Actions, start by creating a directory named .github in the root of your project. Inside it, create another directory called workflows, and within workflows, create a file named ci.yaml with the following content.
name: CI/CD

# Don't run the workflow on changes to the Helm chart, k8s manifests, or README
on:
  push:
    branches:
      - main
    paths-ignore:
      - 'helm/**'
      - 'k8s/**'
      - 'README.md'

jobs:
  ## stage 1
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Go 1.22
        uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - name: Build
        run: go build -o go-web-app
      - name: Test
        run: go test ./...

  ## stage 2
  code-quality:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v6
        with:
          version: latest

  ## stage 3
  push:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and Push action
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/go-web-app:${{ github.run_id }}

  ## stage 4
  update-newtag-in-helm-chart:
    runs-on: ubuntu-latest
    needs: push
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.TOKEN }}
      - name: Update tag in Helm chart
        run: |
          sed -i 's/tag: .*/tag: "${{ github.run_id }}"/' helm/go-web-app-chart/values.yaml
      - name: Commit and push changes
        run: |
          git config --global user.email "abhishek@gmail.com"
          git config --global user.name "Abhishek Veeramalla"
          git add helm/go-web-app-chart/values.yaml
          git commit -m "Update tag in Helm chart"
          git push
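Before the workflow can run, the secrets it references must exist in the repository (Settings → Secrets and variables → Actions): DOCKERHUB_USERNAME, DOCKERHUB_TOKEN (a Docker Hub access token), and TOKEN (a GitHub personal access token with permission to push to the repo). If you use the GitHub CLI, something like this should work; the values shown are placeholders:
gh secret set DOCKERHUB_USERNAME --body "praduman"
gh secret set DOCKERHUB_TOKEN --body "<dockerhub-access-token>"
gh secret set TOKEN --body "<github-pat>"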
Now that we've finished setting up the CI pipeline, let's commit and push it, then check that everything is working smoothly.
git push
Head over to the GitHub repository and take a look at the Actions tab.
Here, the GitHub Action is working perfectly, and we've successfully completed the implementation of continuous integration!
Seamless CD with ArgoCD: Automate Your Kubernetes Deployments
Now, let's turn our attention to the ArgoCD component. Every time the CI pipeline runs, ArgoCD should spot the changes and deploy them to the Kubernetes cluster.
Create a namespace called argocd and install ArgoCD there using a manifest file.
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Access the Argo CD UI (Loadbalancer service).
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
Access the Argo CD UI (LoadBalancer service) - for Windows:
kubectl patch svc argocd-server -n argocd -p "{\"spec\": {\"type\": \"LoadBalancer\"}}"
Get the LoadBalancer service address
kubectl get svc argocd-server -n argocd
Now you can access the ArgoCD UI at ae9f68624737e4e49a0bea0a56d6dce4-1141447750.ap-south-1.elb.amazonaws.com. Enjoy exploring!
To log in to ArgoCD, use admin as the username. You can get the password by running the following commands.
kubectl get secrets -n argocd
kubectl edit secrets argocd-initial-admin-secret -n argocd
The password we got is in base64 format, so to decode it, just run this command.
echo <password-that-you-got> | base64 --decode
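Alternatively, you can fetch and decode the password in one step using kubectl's jsonpath output:
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath='{.data.password}' | base64 --decode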
We've got access to the ArgoCD UI!
Tap on the "New App" button and fill in the required details. Once you've filled in all the details, just click on "Create".
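If you prefer the declarative route over the UI, an ArgoCD Application manifest along these lines should be equivalent (a sketch; the repoURL is a placeholder for your own fork, and the chart path and namespaces match what we've built in this article):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<your-username>/go-web-app.git
    targetRevision: main
    path: helm/go-web-app-chart
    helm:
      valueFiles:
        - values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true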
Now, ArgoCD will watch the Helm chart in the GitHub repository and pick up every change to values.yaml. If you click on the application tile, you'll see that ArgoCD has started deploying everything.
So, we're all done, and guess what? We deployed the application using CI/CD!
Conclusion:
This article outlines the implementation of end-to-end DevOps practices on a project lacking them. The process begins with containerizing the application using a multistage Dockerfile and pushing the built image to a Docker repository. It progresses to deploying the application on a Kubernetes EKS cluster using Kubernetes manifests for deployment, service, and ingress. A Helm chart is created for environment-specific deployments, allowing configurations to be easily managed. Continuous integration is set up using GitHub Actions, which includes stages for building, testing, static analysis, Docker image creation, and updating Helm chart tags. The deployment workflow is automated using ArgoCD for continuous deployment, enabling seamless and efficient updates to the Kubernetes cluster.