Helm: Paving the Way to Environment Aware Deployments

March 12, 2021
Last updated on February 3, 2024

 

Developers prefer to have separate environments for testing, development, and production. Isolated environments reduce the risk of a disaster and allow different teams to pursue multiple development initiatives in parallel.

However, when several teams and departments each have their own staging (testing) environment, managing deployments becomes a real challenge, especially once we account for the system configuration, runtime, and environment configuration required to run each deployment.

Containers, The Builder!

The first step towards isolating all components and services is containerization, along with writing environment-aware application code.

In this blog, we take the example of a Node.js REST application that, on receiving a GET request, returns a simple “Hello from {Environment}” string, where the environment could be QA, Private Eng, Private Sales, or Production.

The application code for this will look something like:

 

const express = require('express')
const app = express()
const port = 3000

// Respond with the environment name taken from the ENV variable
app.get('/', (req, res) => {
  res.send(`Hello From ${process.env.ENV}!`)
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})
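To try this locally (a quick sanity check, assuming Node.js and the express package are installed), set ENV before starting the server:

ENV=QA node app.js
# In another terminal:
curl http://localhost:3000/
# Hello From QA!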

 

We also have a Dockerfile to build and run the containerized application:

 

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY app.js ./
EXPOSE 3000
CMD ["node", "app.js"]
 


Kubernetes, The Manager!

Now that we have a Docker image, it’s time to orchestrate Kubernetes compute resources to run the application.

The manifest for creating the Kubernetes Service is given below:

 

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  labels:
    env: staging
spec:
  type: NodePort
  selector:
    # Must match the Pod labels set in the Deployment below
    app: hello-world
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007

 

Similarly, the manifest for creating the Kubernetes Deployment is given below:

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-service
  labels:
    env: staging
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        # Image references take a registry host, not a URL with a scheme
        image: registry.gitlab.com/xyz/hello-world
        ports:
        - containerPort: 3000
        env:
        - name: ENV
          value: QA
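To create these resources directly (assuming the manifests are saved as service.yaml and deployment.yaml), we would apply them with kubectl and hit the NodePort:

kubectl apply -f service.yaml -f deployment.yaml
curl http://<node-ip>:30007/
# Hello From QA!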

 

The above example is suitable for the QA environment only. For other environments, such as Production, we would have to create a separate set of manifest files with different port and ENV values, and then maintain a different set of config files for every deployment.

 

Helm, The Deployer!

Creating and managing multiple deployments and sets of config files can be a cumbersome task. Helm offers a solution: it allows you to write all the Kubernetes setup files once, as templates, and run them with different values each time, as required. We then need only one set of config files per application, and simply write a small values file to fill in the templates for each environment.

Here’s what you need to know about Helm.

 

What is Helm?

Helm is a package manager for Kubernetes. It streamlines managing and installing Kubernetes applications. It enables easy packaging, configuring, and deployment of applications and services onto Kubernetes clusters. 

 

Why Helm?

Setting up a single application on Kubernetes involves creating multiple independent Kube resources such as Services, Pods, Deployments, Secrets, ReplicaSets, etc. This requires that you write a YAML (recursive acronym for “YAML Ain’t Markup Language”) manifest file for each Kube resource.

Apart from resolving issues related to managing Kubernetes manifests, Helm is an official Kubernetes project and a part of the Cloud Native Computing Foundation (CNCF). Helm provides the basic features you would expect from other package managers, such as Debian’s APT. Some of the features offered by Helm are listed below, with matching example commands after the list:


i) Installing software

ii) Upgrading software

iii) Configuring software deployments

iv) Automatically installing software dependencies
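In day-to-day use, these map onto a handful of Helm commands, for example (the release and chart names here are illustrative):

helm install myapp ./myapp                       # install software
helm upgrade myapp ./myapp                       # upgrade software
helm upgrade myapp ./myapp -f prod_values.yaml   # configure a deployment
helm dependency update ./myapp                   # fetch chart dependencies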

Now that we understand what Helm is and why we choose it, let’s consider a scenario where you need to host multiple environments of an application, each of which would otherwise need its own replicated set of manifest files. First, create a Helm starter chart:

helm create hello-world

Running the above command creates a chart directory named hello-world.
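With a recent Helm 3 release, the generated file tree looks roughly like this (the exact layout varies slightly across Helm versions):

hello-world/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
│       └── test-connection.yaml
└── values.yaml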

 

For the Kubernetes configuration of our application, we only need a Service and a Deployment. So we keep just the service.yaml, deployment.yaml, and NOTES.txt templates (NOTES.txt contains help text for our chart, which is displayed when the chart is installed) and remove the other template files.
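For example (a sketch, assuming the default Helm 3 chart layout shown above), the unused templates can be removed with:

cd hello-world
rm -r templates/hpa.yaml templates/ingress.yaml templates/serviceaccount.yaml templates/tests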

The original content of the service.yaml template is:

 

apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    {{- include "hello-world.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: {{ .Values.service.nodePort }}
  selector:
    {{- include "hello-world.selectorLabels" . | nindent 4 }}
 

The values that fill in this template come from the values.yaml file, which we can rename to stag_values.yaml for the staging environment.
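On disk, that rename is simply:

mv values.yaml stag_values.yaml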

Now, to set the port and NodePort used by the Kubernetes Service, change the service values inside stag_values.yaml:

 

service:
  type: NodePort
  port: 3000
  nodePort: 30007

 

Next, add an env value to stag_values.yaml, which will feed the environment variable into the Kubernetes Deployment:

 

env:
  envValue: QA

 

Update the image value in stag_values.yaml to point to the image you have decided to use:

 

image:
  repository: registry.gitlab.com/xyz/hello-world
  pullPolicy: IfNotPresent

 

You may also update the port number inside deployment.yaml to change its default.
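For these values to take effect, the Deployment template must reference them. A minimal sketch of the relevant container section of templates/deployment.yaml is shown below; the chart generated by helm create wires up image and ports similarly, but the env block is our addition for the ENV variable:

      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}
          env:
            - name: ENV
              value: {{ .Values.env.envValue | quote }}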

We are now ready to launch our Kubernetes deployment using Helm.

helm upgrade hello-world . --install --create-namespace -n hello-world-staging -f stag_values.yaml

This command renders the Kubernetes config files from the templates and applies them to create the Kubernetes resources.
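To verify the staging release, list the Service and hit its NodePort (assuming a reachable node IP):

kubectl get svc -n hello-world-staging
curl http://<node-ip>:30007/
# Hello From QA!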

 

Let’s have a look at the Kubernetes Service object created by Helm, as retrieved from the cluster (which is why server-populated fields such as managedFields and status appear):

 

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-14T09:45:44Z"
  labels:
    app.kubernetes.io/instance: hello-world
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: hello-world
    app.kubernetes.io/version: b4ab7579
    helm.sh/chart: hello-world-0.1.0
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/version: {}
          f:helm.sh/chart: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":3000,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:nodePort: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
        f:sessionAffinity: {}
        f:type: {}
  name: hello-world
  namespace: hello-world-staging
  resourceVersion: "31790318"
  selfLink: /api/v1/namespaces/hello-world-staging/services/hello-world
  uid: 54d8f326-bf0c-4b28-ada6-3a05ef4ffdc6
spec:
  clusterIP: 109.999.1*3.111
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30007
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: hello-world
    app.kubernetes.io/name: hello-world
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

 

Thus, Helm has created all the Kube config files for us. But that is not the most important feature it has to offer: Helm also lets us deploy the same application as a production release, which will be accessible on port 30099.

Instead of creating all the config files again, we simply create another values file and point to it while installing or upgrading the Kube resources.

For this, let’s create another values file named prod_values.yaml, copy the contents of stag_values.yaml into it, and update the service nodePort and env values inside prod_values.yaml:

 

service:
  type: NodePort
  port: 3000
  nodePort: 30099

env:
  envValue: production

 

Now, run the Helm upgrade command again, passing the new values file (and a separate namespace for production):

helm upgrade hello-world . --install --create-namespace -n hello-world-production -f prod_values.yaml

We have thus released a production environment just by writing a different values file.
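Both environments now run side by side from the same chart; a quick way to confirm (output will vary by cluster):

helm list -A
curl http://<node-ip>:30099/
# Hello From production!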

Working with Helm and its templates simplifies the management of Kubernetes applications, helping us deploy and orchestrate environments more efficiently.

Read more about the benefits of Helm here.
