Developers prefer to have separate environments for testing, development, and production. Isolated environments reduce the risk of a disaster and allow different teams to run multiple development initiatives in parallel.

However, when several teams and departments each have their own staging (testing) environment, managing deployments becomes a real challenge. Managing multiple environments is complicated, especially when we consider the system configuration, runtime, and environment variables each deployment requires.

Containers, The Builder!

The first step toward isolating all components and services is containerization, along with writing environment-aware application code.

In this blog, we are going to take the example of a Node.js REST application that, on receiving a GET request, returns a simple “Hello From {Environment}” text string, where the environment could be QA, Private Eng, Private Sales, or Production.

The application code for this will look something like:

 

const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send(`Hello From ${process.env.ENV}!`)
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

 

We also have a Dockerfile to build and run the containerized application, as defined below:

 

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY app.js ./
EXPOSE 3000
CMD ["node", "app.js"]
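With the Dockerfile in place, the image can be built and run locally along these lines (the image tag is illustrative):

```shell
# Build the image and run it with the QA environment variable set
docker build -t hello-world .
docker run -e ENV=QA -p 3000:3000 hello-world
```

A request to http://localhost:3000 should then return the greeting for the environment passed via `-e ENV=...`.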

 


Kubernetes, The Manager!

Now that we have a Docker image, it’s time to create the Kubernetes resources to run the application.

The manifest for creating the Kubernetes Service is defined below:

 

apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  labels:
    env: staging
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007

 

And similarly, the manifest for creating the Kubernetes Deployment is given below:

 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-service
  labels:
    env: staging
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: registry.gitlab.com/xyz/hello-world
        ports:
        - containerPort: 3000
        env:
        - name: ENV
          value: QA

 

The above example is suitable for a QA environment. For other environments such as Production, we would have to create a separate set of manifest files containing different port and ENV values. To manage all these deployments, we end up with a different set of config files per environment.
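For instance, the production copies of these manifests would be identical except for a handful of fields (values illustrative, matching the production setup used later in this post):

```yaml
# In the production Service manifest:
      nodePort: 30099

# In the production Deployment manifest:
        env:
        - name: ENV
          value: Production
```

Duplicating whole files to change a few lines like these is exactly the problem Helm solves.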

 

Helm, The Deployer!

Creating and managing multiple deployments and sets of config files can be a cumbersome task. Helm offers a solution for this: it lets you write all the Kubernetes setup files once, as templates, and render them with different values each time, as required. We then need only one set of config files as templates for all environments, plus a small values file per environment to fill in those templates.

Here’s what you need to know about Helm.

 

What is Helm?

Helm is a package manager for Kubernetes. It streamlines managing and installing Kubernetes applications. It enables easy packaging, configuring, and deployment of applications and services onto Kubernetes clusters. 

 

Why Helm?

Setting up a single application on Kubernetes involves creating multiple independent Kube resources such as Services, Pods, Deployments, Secrets, ReplicaSets, etc. This requires that you write a YAML (recursive acronym for “YAML Ain’t Markup Language”) manifest file for each Kube resource.

Apart from resolving issues around managing Kubernetes manifests, Helm is an official Kubernetes project and part of the Cloud Native Computing Foundation (CNCF). Helm provides the same basic features as other package managers, such as Debian’s APT. Some of the features offered by Helm are:


i) Installing software

ii) Upgrading software

iii) Configuring software deployments

iv) Automatically installing software dependencies

Now that we understand what Helm is and why we choose it, let’s consider a scenario where you need to host multiple environments of an application, each of which would otherwise need its own copy of the manifest files. First, create a Helm starter chart by running: helm create hello-world

Running the above command creates a directory named hello-world with the following file tree:

Helm Hello World
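The scaffold generated by helm create looks roughly like this (Helm 3; the exact set of files varies slightly by Helm version):

```text
hello-world/
├── Chart.yaml
├── charts/
├── templates/
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests/
└── values.yaml
```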

 

For the Kubernetes configuration of our application, we only need a Service and a Deployment. So you only need to keep the service.yaml, deployment.yaml, and NOTES.txt files (NOTES.txt contains help text for our chart, which is displayed when the chart is installed) and can remove the other template files.

The original content of the service.yaml file is:

 

apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    {{- include "hello-world.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: {{ .Values.service.nodePort }}
  selector:
    {{- include "hello-world.selectorLabels" . | nindent 4 }}

 

The values for this template are read from the values.yaml file, which can be renamed to stag_values.yaml.

Now, to set the port and nodePort for the Kubernetes Service, update the service values inside stag_values.yaml:

 

service:
  type: NodePort
  port: 3000
  nodePort: 30007

 

Add the env value to stag_values.yaml; this sets the environment variable inside the Kubernetes Deployment:

 

env:
  envValue: QA

 

Update the image value in stag_values.yaml to the image which you have decided to use:

 

image:
  repository: registry.gitlab.com/xyz/hello-world
  pullPolicy: IfNotPresent

 

You may also change the default port number inside deployment.yaml.
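For reference, the relevant part of templates/deployment.yaml that consumes these values might look roughly like this — a trimmed sketch based on the helm create scaffold; the env block is an addition we make ourselves:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-world.fullname" . }}
  labels:
    {{- include "hello-world.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "hello-world.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "hello-world.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          # Our addition: inject the environment name from the values file
          env:
            - name: ENV
              value: {{ .Values.env.envValue }}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
```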

We are now ready to launch our Kubernetes deployment using Helm.

helm upgrade hello-world . --install -n hello-world-staging -f stag_values.yaml

This command renders the Kubernetes config files and applies them to create the Kubernetes resources.
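If you want to preview what Helm will generate before applying anything, the manifests can be rendered locally with standard Helm commands:

```shell
# Render the templates with the staging values without touching the cluster
helm template hello-world . -f stag_values.yaml

# Or, once deployed, inspect the live Service that Helm created
kubectl get svc hello-world -n hello-world-staging -o yaml
```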

 

Let’s have a look at the Kubernetes Service config file created by Helm:

 

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-02-14T09:45:44Z"
  labels:
    app.kubernetes.io/instance: hello-world
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: hello-world
    app.kubernetes.io/version: b4ab7579
    helm.sh/chart: hello-world-0.1.0
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/managed-by: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/version: {}
          f:helm.sh/chart: {}
      f:spec:
        f:externalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":30551,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:nodePort: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
        f:sessionAffinity: {}
        f:type: {}
  name: hello-world
  namespace: hello-world-staging
  resourceVersion: "31790318"
  selfLink: /api/v1/namespaces/hello-world/services/hello-world
  uid: 54d8f326-bf0c-4b28-ada6-3a05ef4ffdc6
spec:
  clusterIP: 109.999.1*3.111
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30007
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: hello-world
    app.kubernetes.io/name: hello-world
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

 

So, Helm created all the Kube config files for us. But that is not the most important feature it has to offer. Helm also lets us deploy a production version of the same application, which will be accessible on port 30099.

Instead of creating all the Helm config files again, we can create another values file and point to it when installing or upgrading the Kube resources.

To do this, let’s create another values file named prod_values.yaml, copy the contents of stag_values.yaml into it, and update the service port numbers and env values inside prod_values.yaml:

 

service:
  type: NodePort
  port: 3000
  nodePort: 30099

env:
  envValue: Production

 

Now, run the Helm upgrade command, passing the new values file:

helm upgrade hello-world . --install -n hello-world-production -f prod_values.yaml

So, we successfully launched a production environment just by creating a different values.yaml file.
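Each environment can then be verified by hitting its NodePort on any cluster node (the node IP is a placeholder, and the greetings assume the env values configured above):

```text
curl http://<node-ip>:30007   # staging: Hello From QA!
curl http://<node-ip>:30099   # production: Hello From Production!
```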

Working with templates and with Helm simplifies the management of Kubernetes applications, helping us deploy and orchestrate more efficiently.

Read more about the benefits of Helm here.
