Restrict users to their own container using docker

As an IT trainer I sometimes need to provide a small Linux environment to which the trainees can log in and try out things. Docker provides a great tool to manage such an environment. But how do you get the users there with minimal configuration effort and maximum flexibility?

In principle a virtual machine would suit this task well. The problem, however, is that the VM has to be visible to the outside world. In the restricted guest networks of my customers you typically can’t simply ask DHCP for a second IP and use it for the VM. Instead you have to forward port 22 to the VM, and then you only have one VM for all trainees. Running one VM per user would mean additional work and consume a lot of memory. With the following solution you only need to run an ssh server on your laptop and provide a basic docker container which does not even need to be reachable from the network. The users can simply log onto the container via ssh testuser@yourmachine, and you can use the docker tools like commit, kill or rm to manage the container.

We’ll discuss some options:

  • [option 1] all trainees log in with the same credentials but still see their own separate container (one nonpersistent container per ssh session)
  • [option 2] all trainees use the same credentials and share one container
  • [option 3a] every trainee gets his own credentials on his own container
  • [option 3b] every trainee gets his own container but shares the credentials with all others

All this by simply installing docker and writing a pretty minimal script!

Without docker, options 3a and 3b would mean spinning up a VM for every trainee or even every ssh session. Each of these VMs would eat at least 1 GB of your memory. As I only have 16 GB of memory on my laptop, this is not an option for me!

To get started you’ll need to have docker installed and to have created an image that you want to use. We’ll call it testimage. Generating it could look like the following:
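A minimal sketch of this step; the names testcontainer and baseimage are just examples:

```shell
# Start an interactive container from a base image and customize it
docker run -it --name testcontainer baseimage /bin/bash
# ...inside the container: install packages, create files, etc., then exit
```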

In this case baseimage could be anything available like ubuntu or fedora. Then you should commit the container you’ve just created and store it as a new image called testimage:
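Assuming the container from the previous step was named testcontainer, this could look like:

```shell
# Save the customized container as the new image testimage
docker commit testcontainer testimage
```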

To start and run code in the testcontainer you can run this:
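For example, an interactive shell in a container started from testimage:

```shell
docker run -it testimage /bin/bash
```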

And that’s exactly what we’ll use to force the user (trainee in my case) into this container. But first we’ll have to add a new system user:
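A sketch using useradd (flags may vary between distributions):

```shell
# Create testuser with a home directory and membership in the docker group
sudo useradd -m -G docker testuser
# Set a password the trainees can use
sudo passwd testuser
```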

The new user is a member of the docker group. That’s in principle very dangerous, but we won’t allow this user to properly log onto the machine. Instead we change the default login shell of testuser to a script that we’ll have to provide:
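Assuming we place the script at /usr/local/bin/testshell (the path is just an example):

```shell
# Make the script the login shell of testuser
# (as root; chsh may warn if the script is not listed in /etc/shells)
sudo chsh -s /usr/local/bin/testshell testuser
```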

Now you can have a look at /etc/passwd to see what happened:
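Assuming the example path from above, the entry for testuser should now end with the new login shell:

```shell
grep testuser /etc/passwd
# testuser:x:…:…::/home/testuser:/usr/local/bin/testshell
```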

This means as soon as the user testuser tries to log onto your machine testshell will be executed instead of e.g. /bin/bash. Now we only need to provide a proper, but restricted, shell in testshell:

Option 1: One nonpersistent container per ssh session

If you want to have one fresh environment for every ssh session you can simply have your testshell look like that:

This will spawn a new container for every ssh session of testuser and redirect the session to /bin/bash inside of this container. As soon as the testuser logs out, this container will be removed (see --rm) meaning the state of the container is not persisted.

Please note that this way you cannot open two ssh sessions with the same container. Every time you run ssh testuser@machine you’ll end up in a fresh container. This might become annoying when users accidentally log out during the coffee break.

Option 2: One shared, persistent container

A very simple alternative is to use one and the same container for every ssh connection. This means that all trainees log onto the same container with the same user name.

docker start is idempotent. So if the container is not running yet, the first attempt to log in as testuser will start it.

The big advantage here is that all trainees can log out and log in again and still see what they’ve done so far on the container. The fact that the container is shared between all trainees might be useful if you want to show how a multiuser OS works (e.g. concurrent access to files).

Option 3a/b: One persistent container per trainee

If you want to have a separate container for every trainee but still keep the state of the containers during the coffee break, you’ll have to find a way to map between trainee and container.

3a: Separate credentials

One way would be to assign a separate linux user to every trainee (user1, user2, … userN). You could create a container for all these users:
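For example (the user names and their number are up to you; the linux users themselves are created like testuser above):

```shell
# One persistent container per trainee, named exactly like the linux user
for u in user1 user2 user3; do
  docker create -it --name "$u" testimage /bin/bash
done
```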

Now all these users may use the same testshell script as login shell:

whoami returns the name of the current user, e.g. user3, and hence the user is forwarded to the container with the same name.

Even though this solution has many advantages, you’ll have to make sure that the trainees use the right credentials. And that’s much harder than you’d expect!

3b: Shared credentials

If you want to use only one user for all trainees, you’ll have to use the client IP to map between trainee and container. Luckily there’s an environment variable to get this information:
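Inside an ssh session you can inspect it like this (the port numbers are just examples):

```shell
echo $SSH_CLIENT
# prints something like: 127.0.0.1 51000 22
# (client ip, client port, server port)
```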

Here SSH_CLIENT shows the IP of the client the user is currently connected from. In this simple example I connected from localhost to localhost, hence the IP is 127.0.0.1.

We’ll take this to create a unique name for a container that should be used by the corresponding user:

This script automatically creates a container whose name is derived from the source IP of the ssh session if it doesn’t already exist. docker start is idempotent and will only start the container if it’s not running yet.

Removing containers

After the training session you can stop and revert the container the following way:
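For example, for the shared container from option 2 (adapt the names for the per-user or per-ip variants):

```shell
docker kill testcontainer   # stop it, if it's still running
docker rm testcontainer     # throw away its state; the next start
                            # creates a fresh container from testimage
```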

The docker exec based variants allow any number of testuser sessions on the same container. That’s very useful if you want to allow every user to start multiple ssh sessions simultaneously and see how they interact.

Potential security issues

Please note that Linux containers may raise security issues, as they are not as strong an isolation boundary as virtual machines or Solaris zones. In certain configurations a container user might be able to gain root access to the host machine. More details can be found here: