
Works on my machine

To make your local environment mirror production, you first need one that works and that resembles the Kubernetes setup. In simple words, we need to use containers locally and make them as close as possible to what will be deployed in the cluster. Let's create a simple Docker image for our fictitious application.

# Base image with Node.js 10
FROM node:10

# Create the app directory and hand ownership to the non-root node user
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the application code, keeping the node user as owner
COPY --chown=node:node . .

# Run as the unprivileged node user
USER node

EXPOSE 8080

CMD [ "node", "app.js" ]

For the sake of this demonstration, our application is defined as a single endpoint used to test the response, and it lives in the app.js file.

const express = require('express')

const app = express()
const port = process.env.PORT

app.get('/', (req, res) => {
  res.send('Your app is fine… For now.')
})

app.listen(port, () => {
  console.log(`Listening at ${port}`)
})

We can now build our image with the command docker build -t my_app .. This creates an image named my_app locally, which you can run with export PORT=3333 followed by docker run -dp 3000:${PORT} -e PORT my_app (exporting first ensures the shell expands ${PORT} in the port mapping). Note that we define and pass the environment variable PORT into the container because the application relies on it to decide dynamically which port to listen on. This makes the setup simpler and more flexible; you can change the mapping if, for example, port 3000 is already in use on your machine.
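As a small sketch (an assumption, not part of the original app.js), the app could fall back to a default when PORT is unset, so a bare docker run still works:

// Hypothetical change to app.js: use PORT when provided, otherwise default to 3000
const port = process.env.PORT || 3000
console.log(`Will listen on ${port}`)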

Now we have a very simple application and a Dockerfile to run it on your machine. This image clearly lacks some functionality that is useful for development, like hot reload and support for test-oriented features provided by some libraries (a local email server pulled in as a dependency, for example), but we are going to ignore that here.

More complex than one app

The environment can be updated and tracked in your code repository, serving as a quick start for new developers and a stable environment for everyone. But maybe you don't have only an app with one endpoint; you also have message brokers, databases and so on. For a reliable development workspace we will create those components too, integrating them with our application and with Docker.

Using Docker Compose we can set up and run all our dependencies with a single command. This could be done with Docker alone, but Compose brings many features that make the configuration of the application, networks and volumes simpler to integrate across the components of the environment.

Our docker-compose.yaml will be:

version: '3'
services:
  rabbitmq:
    restart: unless-stopped
    healthcheck:
      # note: a protocol-aware check such as rabbitmq-diagnostics ping is
      # more reliable than curl against the AMQP port
      test: ["CMD", "curl", "-f", "http://localhost:5672"]
      interval: 30s
      timeout: 30s
      retries: 3
    ports:
      - 8080:15672   # management UI
      - 5672:5672    # AMQP
    image: rabbitmq:3-management
    networks:
      - lkp_dev
  my-app:
    depends_on:
      - rabbitmq
    build:
      context: .
      dockerfile: Dockerfile
    container_name: my-app
    volumes:
      - ./:/workspace:z
    ports:
      - '3333:3333'
    restart: on-failure
    networks:
      - lkp_dev
    environment:
      PORT: 3333
networks:
  lkp_dev:
    driver: bridge

Here we can see important concepts that have to be considered when you plan to deploy your application to Kubernetes while developing in a closely related scenario.

· Volumes

In the my-app service, which is our API, we define a volume mapped to the application directory on your local machine, so any change you make to your code is reflected in the live application faster, allowing more productive development. Your application can also use volumes to save data, or just to read from a volume created beforehand; this is important for security reasons too, as sketched below.
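A minimal sketch of keeping writes inside the mounted volume (the paths and the DATA_DIR variable are assumptions, not part of the original app):

// Sketch: confine all writes to the mounted volume path
const fs = require('fs')
const path = require('path')

// /workspace is the bind mount declared in docker-compose.yaml;
// DATA_DIR is a hypothetical override for other environments
const dataDir = process.env.DATA_DIR || '/workspace/data'
fs.mkdirSync(dataDir, { recursive: true })

// Persist a small state file into the volume and read it back
const stateFile = path.join(dataDir, 'state.json')
fs.writeFileSync(stateFile, JSON.stringify({ startedAt: Date.now() }))
console.log('Recovered state:', JSON.parse(fs.readFileSync(stateFile, 'utf8')))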

· Network

In the broker and API services we use a custom network definition. This gives us more control over how the applications are linked and lets the network be used by other local containers that are not part of the services defined in the file. Inside the network, services reach each other by service name, as sketched below.
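A minimal sketch, assuming the app talks to the broker through the amqplib package (not part of the original example), showing that the Compose service name rabbitmq resolves inside the lkp_dev network:

// Sketch: connect to RabbitMQ by service name over the Compose network
const amqp = require('amqplib')

async function connectBroker () {
  // On the lkp_dev network, Compose resolves "rabbitmq" to the broker
  // container, so no IP address is needed
  const connection = await amqp.connect('amqp://rabbitmq:5672')
  const channel = await connection.createChannel()
  await channel.assertQueue('jobs') // hypothetical queue name
  console.log('Connected to rabbitmq over the lkp_dev network')
  return channel
}

connectBroker().catch(err => {
  console.error('Broker connection failed:', err.message)
  process.exit(1)
})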

· Readiness

Kubernetes checks whether your application is ready to start working using a readiness probe, which can check a file, make a request or perform some other action. We use the equivalent in the rabbitmq service in the healthcheck section: if the command fails, the application is not considered ready, and after it remains in that state for some time it will be restarted.

We can create an endpoint in the app for this sort of check and build a kind of liveness probe too, checking the status of the application periodically after startup.
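A minimal sketch of such an endpoint (the /healthz path and the readiness logic are assumptions, not in the original app.js):

// Sketch: a health endpoint that a probe or Compose healthcheck can call
const express = require('express')
const app = express()

app.get('/healthz', (req, res) => {
  // Placeholder: a real check would verify broker/database connections
  const healthy = true
  res.status(healthy ? 200 : 503).send(healthy ? 'ok' : 'not ready')
})

app.listen(process.env.PORT || 3333)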

To localhost we go

With these modifications we now have an environment a little closer to the Kubernetes scenario, versioned and consistent for every developer doing the job. To actually deploy there is still work to do, but you now have some idea of what you will face.

Some security-driven modifications are welcome locally, already thinking about the cluster. Limit the permissions of the user that executes the application, so that an attacker who gains control of the machine has limited options. Another point to pay attention to is limiting writes to disk, blocking any files that do not belong to the application from getting in, like scripts and other things; of course, you will need to check whether your application depends on writing to disk to work. Volumes can be used to limit the scope of those writes, as sketched below. Both of these behaviors can be emulated in Docker Compose, so you can adapt the application to comply with them. This will make your environment better to develop in and easier to reason about, and it keeps upgrades less complicated and separate from the daily changes. Other tools, such as kind and k3d, can be used to create a local cluster for even deeper immersion.
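As a hedged sketch (the probe paths are assumptions), a startup check like this can verify that the app tolerates a read-only filesystem and writes only inside its volume:

// Sketch: confirm where the container is actually allowed to write
const fs = require('fs')
const path = require('path')

function canWrite (dir) {
  const probe = path.join(dir, '.write-probe')
  try {
    fs.writeFileSync(probe, 'ok')
    fs.unlinkSync(probe)
    return true
  } catch (err) {
    return false
  }
}

console.log('app dir writable:', canWrite('/home/node/app')) // expect false with a read-only root
console.log('volume writable:', canWrite('/workspace'))      // the mounted volume stays writable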