Deploying Node.js Socket.IO Application with DevSecOps, GitOps, and Istio on Kubernetes

DevSecOps, short for Development, Security, and Operations, is an approach that integrates security practices into the DevOps process. It aims to ensure that security is a shared responsibility throughout the entire IT lifecycle, from initial design through development, integration, testing, deployment, and operations.

GitOps is a modern operational framework that uses Git as the single source of truth for declarative infrastructure and application deployment. It combines the principles of Git version control with DevOps practices to automate and streamline the deployment and management of applications in cloud-native environments, particularly Kubernetes.

Istio is an open-source service mesh that provides a way to manage, secure, and monitor microservices in cloud-native applications. It is designed to handle the complexities of microservices architectures by offering various capabilities that enhance the communication between services.

This blog walks through how the above approaches come together to deploy an application onto Kubernetes using modern tools: Jenkins (Continuous Integration), Trivy, SonarQube, OWASP Dependency-Check and quality gates (security tooling for the SecOps side), Docker and Kubernetes (containerization and container orchestration), and finally ArgoCD (Continuous Deployment).

The steps followed in setting up the complete pipeline are:

Architecture of the complete pipeline:

  1. Pre-requisites for this setup are:

    1. An EC2 instance of type t3.large with 30 GB of storage.

    2. A Kubernetes cluster with 4 nodes (we used AWS EKS to set up the Kubernetes cluster).

    3. A GitHub account for storing code and Kubernetes manifests.

Note:- Command to create a 4-node cluster on AWS:

eksctl create cluster --region <aws-region> --name <name of your cluster> --nodes <number of nodes> --nodegroup-name <name of the nodegroup> --node-type <type of the nodes you need>

In our case, we used the following command:

eksctl create cluster --region ap-south-1 --name istio-cluster --nodes 4 --nodegroup-name istio-ng --node-type t3.medium

Note:- You can also pass additional flags to configure networking if you need the cluster to run inside a specific VPC in your AWS account, as shown in the example below.
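
For example, here is a hedged sketch of pinning the cluster to existing subnets; the subnet IDs below are placeholders, and the --vpc-private-subnets / --vpc-public-subnets flags are documented in the eksctl docs:

eksctl create cluster --region ap-south-1 --name istio-cluster \
  --nodes 4 --nodegroup-name istio-ng --node-type t3.medium \
  --vpc-private-subnets subnet-aaaa1111,subnet-bbbb2222 \
  --vpc-public-subnets subnet-cccc3333,subnet-dddd4444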

Reference Documents:

  1. Eksctl docs

  2. Check out my blog on other methods of creating an EKS cluster

Setting up Jenkins, Trivy, Docker and SonarQube on the EC2 server.

  • To set up Jenkins on the server, you can use the following script.
#!/bin/bash
sudo apt-get update
sudo apt-get install openjdk-17-jdk -y
sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins -y
sudo systemctl enable jenkins
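
Once Jenkins is up, it listens on port 8080. To unlock the UI on first login, fetch the initial admin password from its standard location on the server:

# Jenkins UI: http://<public ip of the server>:8080/
sudo systemctl status jenkins                              # confirm the service is running
sudo cat /var/lib/jenkins/secrets/initialAdminPassword     # paste this into the unlock screen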

  • After installing all the default Jenkins plugins, you need to install additional plugins to build Node.js applications and to run OWASP dependency checks and quality gate checks on the application code in GitHub.

  • The plugins needed for this pipeline are:

    Go to Dashboard → Manage Jenkins → Plugins. Here, we’ll install all the plugins required for the application.

    1) Eclipse Temurin Installer (Install without restart)

    2) SonarQube Scanner (Install without restart)

    3) NodeJs Plugin (Install Without restart)

    4) Email Extension Plugin

    5) Quality Gates

    6) OWASP Dependency-Check

  • Next, we need to configure all the tools required on the Jenkins server to ensure proper dependencies are set up for building and checking our application code.

To configure the tools: Go to Manage Jenkins → Tools → JDK installations.

The same process applies for installing Node.js, SonarQube, and Dependency-Check tools.

  • With this, the initial setup of Jenkins is complete. Next, we set up the SonarQube server and install Trivy, and only then configure all the credentials the pipeline needs on the Jenkins server.

  • To install SonarQube, we first need to install Docker on the server, since we will run the sonarqube:lts-community image as a Docker container.

# Commands to install Docker on the server and run SonarQube as a container
sudo apt update
sudo apt install docker.io -y
sudo docker run -d --name sonar -p 9000:9000 sonarqube:lts-community

SonarQube will then be accessible at http://<public ip of the server>:9000/ with the default username admin and password admin.

  • To connect SonarQube with the Jenkins server, we need to configure a few things in the SonarQube dashboard:

  • First, create a token to set up authentication between Jenkins and the SonarQube container: Go to Administration → Security → Update Token.

  • Next, add a webhook so that the Quality Gates plugin can ensure each phase of the process meets predefined standards before the pipeline moves on to the next stage.

  • Go to Administration → Configuration → Webhooks → Create
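
The webhook URL should point back to the endpoint exposed by the Jenkins SonarQube Scanner plugin (assuming Jenkins runs on its default port 8080):

http://<public ip of the jenkins server>:8080/sonarqube-webhook/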

  • Then, we need to create a project using the token created above so that the SonarQube tool configured in Jenkins can process the vulnerability assessment on the code in GitHub. The detailed report will then be visible in the SonarQube dashboard under the same project.

  • Go to Projects → click on Manually.

To store GitHub credentials in the next step, we first need to create a personal access token, since GitHub no longer accepts account passwords for authentication from tools like Jenkins.

Go to Settings → Developer Settings → Personal access tokens → Generate new token.

With this, the SonarQube setup is complete. Now, we move on to installing Trivy and adding the jenkins user to the docker group so that Jenkins can run Docker commands without errors.

# Commands to install Trivy on the server
sudo apt-get install wget apt-transport-https gnupg lsb-release -y
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy -y
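
To confirm the installation and see what a filesystem scan looks like, you can run Trivy against any local directory (the --severity filter is optional):

trivy --version
trivy fs --severity HIGH,CRITICAL .   # scan the current directory, reporting only HIGH/CRITICAL findings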

To add the jenkins user to the docker group, run the following commands on the server.

sudo usermod -aG docker jenkins
sudo systemctl restart jenkins
# Then, to test that the jenkins user is able to run Docker commands,
# use the following:
sudo su - jenkins   # switch from the current user to the jenkins user
docker ps -a        # if this command returns output, the jenkins user can run Docker commands
  • Now, configure all the credentials in the Jenkins dashboard: Go to Manage Jenkins → Global Credentials → Add Credentials.

  • After adding all the credentials, we need to configure the Docker tool and the SonarQube server: Go to Manage Jenkins → System.

  • Now it's time to configure the first pipeline and check that the Docker image is created and pushed to the Docker Hub repository.
pipeline{
    agent any
    tools{
        jdk 'jdk17'
        nodejs 'node20'
    }
    environment {
        SCANNER_HOME=tool 'sonar-scanner'
    }
    stages {
        stage('clean workspace'){
            steps{
                cleanWs()
            }
        }
        stage('Checkout from Git'){
            steps{
                git branch: 'main', url: 'https://github.com/sibasish934/code-sync.git'
            }
        }
        stage("Sonarqube Analysis "){
            steps{
                withSonarQubeEnv('sonar-server') {
                    sh ''' $SCANNER_HOME/bin/sonar-scanner -Dsonar.projectName=code-sync \
                    -Dsonar.projectKey=code-sync '''
                }
            }
        }
        stage("quality gate"){
           steps {
                script {
                    waitForQualityGate abortPipeline: false, credentialsId: 'code-sync-token' 
                }
            } 
        }
        stage('Install Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('OWASP FS SCAN') {
            steps {
                dependencyCheck additionalArguments: '--scan ./ --disableYarnAudit --disableNodeAudit', odcInstallation: 'dp-check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('TRIVY FS SCAN') {
            steps {
                sh "trivy fs . > trivyfs.txt"
            }
        }
        stage("Docker Build & Push"){
            steps{
                script{
                   withDockerRegistry(credentialsId: 'docker-creds'){ 
                       sh "docker login"
                       sh "docker build --build-arg VITE_APP_BACKEND_URL=http://localhost:5000/ -t code-sync:${BUILD_NUMBER} ."
                       sh "docker image ls"
                       sh "docker tag code-sync:${BUILD_NUMBER} <Docker hub registry username>/code-sync:${BUILD_NUMBER} "
                       sh "docker push <Docker hub registry username>/code-sync:${BUILD_NUMBER}"
                    }
                }
            }
        }
        stage("TRIVY"){
            steps{
                sh "trivy image <Docker hub registry username>/code-sync:${BUILD_NUMBER} > trivyimage.txt" 
            }
        }

        stage('Trigger update-manifest for github') {
            steps{
                echo "Triggering the update job"
                build job: 'update-manifest', parameters: [string(name: 'DOCKERTAG', value: env.BUILD_NUMBER)]
            }
    }
  }
}

Note: The Dockerfile used in the application code to dockerize the application is:

# Use an official Node.js runtime as a parent image
FROM node:20.9.0

# Create and change to a non-root user
RUN useradd -ms /bin/bash appuser

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Define build argument and environment variable
ARG VITE_REACT_BACKEND_URL
ENV VITE_REACT_BACKEND_URL=${VITE_REACT_BACKEND_URL}

# Install dependencies
RUN npm install

# Copy the rest of the application code
COPY . .

# Change ownership of the /app directory to the non-root user
RUN chown -R appuser:appuser /app

# Expose the port your app runs on
EXPOSE 5000

# Switch to the non-root user
USER appuser

# Start the application
CMD [ "npm", "run", "dev" ]

Rest of the application code is available here.

Now, configure a pipeline job and add the pipeline script. Then, execute the pipeline shared above, ensuring that all the credentials and tools are set up as shown.

  • After the first pipeline runs successfully, you will find the Docker image pushed to Docker Hub with a tag matching the build number, which is unique for every build on the Jenkins server.

  • Now it's time to configure the EKS cluster with Istio and ArgoCD.

# Steps to configure Istio on the EKS cluster
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.22.3
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y
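# (Assumption: the application will be deployed to the default namespace.)
# Label that namespace so Istio automatically injects the Envoy sidecar into application pods.
kubectl label namespace default istio-injection=enabled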
# Steps to set up ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# Command to get the initial admin password for the ArgoCD UI.
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}"
# The above command outputs base64-encoded text; decode it with the command below.
echo <output value> | base64 --decode   # prints the decoded admin password
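
By default the argocd-server service is of type ClusterIP, so to reach the ArgoCD UI you can either port-forward it or expose it through a load balancer (both are standard options):

# Option 1: port-forward the ArgoCD API server to your machine (UI at https://localhost:8080)
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Option 2: expose the server through an AWS load balancer
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'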

  • Then, we configure ArgoCD to automatically watch the GitHub repo for new commits and trigger deployments; a minimal Application sketch is shown below.
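
Below is a minimal sketch of the ArgoCD Application resource that could be used for this; the application name, target namespace, and istio/ folder path are assumptions based on the repo layout, and you can create the same thing from the ArgoCD UI instead:

cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: code-sync
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/sibasish934/argocd.git
    targetRevision: main
    path: istio                    # folder containing Deploy.yaml and the Istio manifests (assumed)
  destination:
    server: https://kubernetes.default.svc
    namespace: default             # namespace where the app is deployed (assumed)
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF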

  • Next, we need to configure another job on the Jenkins server named update-manifest. After every successful build of pipeline-1, it will log into the GitHub repo containing the Kubernetes manifests and automatically update the deployment and service files. Then, ArgoCD will detect the new commits in the GitHub repo and apply the changes without any downtime, enabling Continuous Deployment.
#update-manifest pipeline script. 
pipeline {
    agent any
    parameters {
        string(name: 'DOCKERTAG', defaultValue: '', description: 'Parameter passed from the first pipeline')
    }
    stages {
        stage("Git Clone") {
            steps {
                git branch: 'main', url: 'https://github.com/sibasish934/argocd.git'
            }
        }

        stage('Update GIT') {
            steps {
                script {
                    catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE') {
                        withCredentials([usernamePassword(credentialsId: 'git-creds', passwordVariable: 'GIT_TOKEN', usernameVariable: 'GIT_USERNAME')]) {
                            sh """
                                echo "Received parameter: ${params.DOCKERTAG}"
                                git config user.email '<github email address>'
                                git config user.name '<github username>'
                                cat istio/Deploy.yaml
                                sed -i 's|<Docker hub username>/code-sync:.*|<Docker hub username>/code-sync:${params.DOCKERTAG}|' istio/Deploy.yaml
                                cat istio/Deploy.yaml
                                git add .
                                git commit -m 'Done by Jenkins Job changemanifest: ${env.BUILD_NUMBER}'
                                git push https://${GIT_USERNAME}:${GIT_TOKEN}@github.com/sibasish934/argocd.git HEAD:main
                            """
                        }
                    }
                }
            }
        }
    }
}
  • The Kubernetes manifests, including the Istio files, can be found here; a minimal sketch of the Istio routing resources is shown below.
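
For reference, here is a minimal sketch of what the Istio Gateway and VirtualService for the app could look like; the Service name code-sync-service and port 5000 are assumptions, and the actual manifests are in the repo linked above:

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: code-sync-gateway
spec:
  selector:
    istio: ingressgateway          # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: code-sync-vs
spec:
  hosts:
  - "*"
  gateways:
  - code-sync-gateway
  http:
  - route:
    - destination:
        host: code-sync-service    # assumed Kubernetes Service name
        port:
          number: 5000             # assumed Service port
EOF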

The application will be accessible at the DNS name of the istio-ingressgateway service in the istio-system namespace, which you can retrieve as shown below.
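
You can fetch that DNS name directly from the service created by the Istio demo profile:

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'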

Additional Section:- Observability

Observability is a property of a system that enables its state to be determined based on the data it generates, such as logs, metrics, and traces. In the context of DevOps, observability is crucial because it provides insights into the performance, reliability, and functionality of software applications and infrastructure.

Key Components:

  • Logs: Records of events within the system.

  • Metrics: Quantitative measures of system performance (e.g., CPU usage).

  • Traces: Path analysis of requests through the system.

Benefits in DevOps

  • Improved Monitoring and Alerting:

    • Early Detection: Identifies issues before they impact users.

    • Actionable Alerts: Triggers alerts for rapid issue resolution.

  • Enhanced Debugging and Troubleshooting:

    • Root Cause Analysis: Quickly identifies the source of problems.

  • Better Performance Optimization:

    • Informed Decisions: Provides data for optimizing system performance and resource utilization.

  • Increased System Reliability:

    • Proactive Maintenance: Enables preventative measures and reduces downtime.

  • Facilitates Continuous Improvement:

    • Feedback Loop: Continuously gathers data to improve development and operations processes.

In this blog, we use an observability tool known as Pixie.

Pixie is an open-source observability platform for Kubernetes, providing immediate, detailed insights without code changes. Utilizing eBPF technology, Pixie offers kernel-level visibility with low overhead, differentiating it from other tools by enabling automatic instrumentation and real-time data collection directly from the Linux kernel.

  • Steps to install Pixie into the Kubernetes cluster:
# Copy and run this command to install the Pixie CLI.
bash -c "$(curl -fsSL https://withpixie.ai/install.sh)"
# After successful installation of the Pixie CLI, run the commands below.
px auth login
px deploy
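
To verify the deployment and pull some data, you can check the Pixie pods (Pixie installs its components into the pl namespace) and run one of the bundled PxL scripts; the script name below comes from the Pixie docs, so confirm it is available in your CLI version:

kubectl get pods -n pl     # Pixie's data-collection pods
px run px/cluster          # cluster-wide overview of nodes, namespaces, and pods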

From the snips attached above, it is clear that Pixie gives you rich information, including logs, metrics, and latency percentiles such as p99 and p70. The best part of Pixie as a tool is that you only need to install it into the cluster; the rest of the work, such as scraping data from the Kubernetes cluster, is done automatically by the Pixie pods running in the cluster. You can read more about this tool here.

If you'd like more blogs on DevOps and cloud topics, please check out my other blogs here.

Thank you.