Setting Up Multiple Docker Containers That Provide Remote Sessions

Docker is a widely used container (operating-system-level virtualisation) technology that builds on Linux kernel features such as namespaces and cgroups. In conjunction with public image registries such as Docker Hub, it provides a wide array of distinct services that can be installed and run as lightweight containers just by typing a single command.

For a basic introduction I would recommend this tutorial.

When setting up a bunch of docker containers, however, you may soon notice that maintaining a full, separate Dockerfile for each container creates a lot of overhead and takes much longer than necessary. That's where this howto comes into play.

This howto covers two topics. First of all we will create a simple Docker image that runs Xrdp so that it can be administered remotely. Here we will explain the basic handling of the Dockerfile and how we can enable remote sessions to our Docker container.

Then, the image we just created will be used as a base for a number of other images and containers. Here we will cover topics such as networking between docker containers.

Altogether, this will enable you to efficiently create and comfortably manage a small array of docker containers that covers all your hosting needs.

Set Up a Base Image

Setting up a base image is very easy in Docker. On Ubuntu all you need to have installed is the docker.io package including its dependencies.
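On a stock Ubuntu system the package can be installed straight from the archive (a minimal sketch; installing Docker from its own repositories works just as well):
sudo apt install docker.io
With Docker installed, create the "Dockerfile":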
FROM ubuntu:17.04
MAINTAINER Michael Mertins [michaelmertins@gmail.com]
RUN apt update && apt upgrade -y 

Once the file exists you can execute
docker build --tag=myfirstdockerimage .
And then run the image with
docker run myfirstdockerimage
Looking closely, you can see that we do two things in this Dockerfile: first of all, the FROM ubuntu:17.04 instruction references another, external image (pulled from Docker Hub) that provides a basic Ubuntu 17.04 installation. Secondly, the RUN instruction lets us execute arbitrary shell commands while the image is being built.
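A quick way to verify the build, assuming the tag from above (note that image tags must be lowercase), is to list the local images and open an interactive shell in a throwaway container; bash is simply the default command inherited from the Ubuntu base image:
#list locally available images
docker images
#open an interactive shell; --rm removes the container again on exit
docker run -it --rm myfirstdockerimage bash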

In the following Dockerfile example we install all kinds of packages, including Xrdp and the Mate desktop, and we also set a password for the root user. For security reasons you should not enter it in cleartext but rather the hash produced by openssl passwd.
FROM ubuntu:17.04
MAINTAINER Michael Mertins [michaelmertins@gmail.com]
RUN apt update && apt upgrade -y && \
apt install -y \
software-properties-common \
htop \
nano \
xrdp && \
apt install -y \
mate-core \
mate-desktop-environment \
mate-notification-daemon && \
sed -i.bak '/fi/a #xrdp multiple users configuration \n mate-session \n' /etc/xrdp/startwm.sh
#enter openssl password hash here:
RUN usermod -p '[return of openssl command]' root
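To produce such a hash on the host, openssl passwd can be used; in this sketch -1 selects an MD5-crypt hash and 'MySecretPassword' is of course only a placeholder:
#generate a password hash that usermod -p accepts
openssl passwd -1 'MySecretPassword'
Paste the resulting string (starting with $1$) into the usermod line above.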

Creating Child Images From Your Base Image

Simply by using what you already know, you could create a whole set of child images from your base image just by writing new Dockerfiles that use your base image, i.e. by replacing FROM ubuntu:17.04 with FROM myfirstdockerimage.
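As a minimal sketch, such a child Dockerfile could add one more service on top of the base image; nginx is just an arbitrary example here:
FROM myfirstdockerimage
MAINTAINER Michael Mertins [michaelmertins@gmail.com]
#install an additional service on top of the base image
RUN apt update && apt install -y nginx
It is built with the same docker build command as before, just with a different tag.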
All these images can be run as docker containers and used for a variety of purposes. If you simply wanted to run the same docker image twice, you could just start it twice and it would run as two distinct docker containers, as the command docker ps will show.
However, changes to the contents of a docker container live only in that container's writable layer. This makes docker images ideal for testing purposes: as soon as you remove the container and create a fresh one from the image, all changes are gone and the original state is restored.

Persist Information

As just described, the data inside a docker container is rather volatile by default. Two methods exist to keep data persistent.
If you know beforehand which data you want to add to your docker image, you can simply use the ADD or COPY instructions to copy files into the image. This may be combined with an ENTRYPOINT script that is executed every time your docker container starts. An example Dockerfile is provided below.
Another approach is to create a VOLUME within the image. A volume is a file path inside the docker container whose contents are stored outside of it, either in a Docker-managed volume or in a host directory you choose with the -v option, and it is mounted every time your container starts, so the data outlives the container. The following example shows how the described commands work together to provide an image that adds some custom lines to the official Postgres Dockerfile.
FROM myfirstdockerimage
MAINTAINER Michael Mertins [michaelmertins@gmail.com]
...
COPY backup.sh /etc/cron.daily/
ADD init-db.sh /docker-entrypoint-initdb.d/
#Expose Ports 5432 and 3389 to connect with postgresql and xrdp
EXPOSE 5432 3389
VOLUME "/var/lib/postgresql/data"
...
#parameter "postgres" is assigned to entrypoint script
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]

Communication Between Docker Containers

Once you have created all your desired docker images, you may want to access their services, be it simply by using them as test desktop environments via the remote desktop protocol or by running them as distinct services. The docker network command allows you to create your own network, and the --net, --ip and --add-host parameters let you explicitly control the network settings of your containers and specify the hostnames they know.
#create my network
docker network create --subnet=172.18.0.0/16 mydockernet

#create a container that uses my network and assign a static ip. also publish the postgresql and xrdp ports, so that they can be reached from outside Docker
docker create --net mydockernet --ip 172.18.0.16 --publish 5432:5432 --publish 3389:3389 --name postgresqldb postgresqlimagefrommyfirstdockerimage

#map the hostname "postgresql" to the ip of the database container and expose the Xrdp port as port 4489 on the host system
docker create --net mydockernet --ip 172.18.0.15 --add-host=postgresql:172.18.0.16 -p 4489:3389 --name=basicstuff myfirstdockerimage
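docker create only prepares the containers, so they still need to be started. Afterwards you can check from inside the first container that the --add-host entry works; getent simply reads the generated /etc/hosts entry:
#start both containers
docker start postgresqldb basicstuff
#resolve the hostname "postgresql" from inside the basicstuff container
docker exec basicstuff getent hosts postgresql
From outside, the database is then reachable on port 5432 of the Docker host and the Xrdp session of basicstuff on port 4489.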

Other Helpful Commands

Now you basically know enough about Docker to create your own service or test infrastructure. Finally, I would like to add a few helpful commands that may come in handy after experimenting with Docker for a while:
#remove all containers (even running containers!)
docker rm -f $(docker ps --no-trunc -aq)
#remove all images (forced!)
docker rmi -f $(docker images -q)
#clean up old or unused volumes
docker volume rm $(docker volume ls -qf dangling=true)
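On newer Docker releases (1.13 and later) much of this housekeeping is available as a single command that removes all stopped containers, dangling images and unused networks after a confirmation prompt:
#interactive clean-up of stopped containers, dangling images and unused networks
docker system prune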
