
Systemd-nspawn

This is just a note on how to use systemd-nspawn. Under no circumstances should you consider this complete documentation on systemd-nspawn; I'm just trying to figure out how it works and what I can do with it.

Let's start from the beginning.

We need a file system

There are probably a hundred ways of getting a file system; one way is to bootstrap one with a dedicated tool.

Here is a link with more information. To download the zesty Ubuntu release into the ubuntu directory, here is the command:
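A sketch of what that looks like, assuming the linked tool is debootstrap:

    sudo debootstrap zesty ./ubuntu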

A more elaborate way is to specify more options, e.g.:
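For instance something along these lines; the architecture, components, extra packages and mirror below are illustrative choices, not requirements:

    sudo debootstrap --arch=amd64 \
        --components=main,universe \
        --include=dbus,vim \
        zesty ./ubuntu http://archive.ubuntu.com/ubuntu/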

 

 

Another way to get a root file system is to use Alpine. This will download a root file system which can be extracted, though some changes need to be made to actually get it working, i.e. the tty configuration should be changed; see this script for more information. So now that we have the root file system, we need to launch the actual container.
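Roughly along these lines; the Alpine version, the paths and the exact tty tweak are assumptions, not the script referred to above:

    mkdir alpine
    curl -LO https://dl-cdn.alpinelinux.org/alpine/v3.6/releases/x86_64/alpine-minirootfs-3.6.2-x86_64.tar.gz
    sudo tar -xzf alpine-minirootfs-3.6.2-x86_64.tar.gz -C ./alpine
    # give the nspawn console a getty instead of the default tty1-tty6 entries
    echo 'console::respawn:/sbin/getty 38400 console' | sudo tee -a ./alpine/etc/inittab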

Systemd-nspawn

To be able to log in to the new Linux system it is usually necessary to set the root password. This can be done by running a single program from the file system, e.g.:
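Assuming the Ubuntu tree from before lives in ./ubuntu:

    sudo systemd-nspawn -D ./ubuntu passwd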

This will start passwd and let you set a new root password. The same thing applies to the Alpine root file system.

Now that the password is set, it's time to start up the actual container.
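Something like this, assuming the same ./ubuntu directory and the -n flag that the networking section below refers to:

    # -b boots the container; -n (--network-veth) gives it a virtual Ethernet link to the host
    sudo systemd-nspawn -b -D ./ubuntu -n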

This should start a clean Ubuntu. Neat and tidy!

Sometimes it's crucial to have parts of the host machine's file system mounted inside the container. This is done with the help of the --bind option.
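For example, to make /home/fun from the host show up at the same path in the container:

    sudo systemd-nspawn -b -D ./ubuntu -n --bind=/home/fun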

This will mount /home/fun at the same path inside the container. If you want to mount it somewhere else, use : to separate the host path from the target path, e.g.:
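The target path here is just an illustration:

    sudo systemd-nspawn -b -D ./ubuntu -n --bind=/home/fun:/mnt/fun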

There are still some things missing though; let's start with networking.

 

Setting up networking

First of all we need to set up the DNS server. In Ubuntu this is done via the resolver configuration:
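One way to do it on an Ubuntu that uses systemd-resolved; the DNS address is just an example:

    # /etc/systemd/resolved.conf
    [Resolve]
    DNS=8.8.8.8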

Unfortunately there seems to be a bug somewhere in the resolvconf package that made name resolution fail with some sort of DNSSEC problem. I fixed this problem with:
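A sketch of that kind of fix, assuming it amounts to disabling DNSSEC in systemd-resolved:

    # in /etc/systemd/resolved.conf, under [Resolve]
    DNSSEC=no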

Then restart the resolver:
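Assuming systemd-resolved is the resolver in question:

    sudo systemctl restart systemd-resolved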

 

First, let's check what interfaces there are on the host:
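For example:

    ip link show
    # with the container running under -n (--network-veth), a ve-<machine-name>
    # interface shows up on the host; inside the container the peer is called host0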

The container was started with -n (--network-veth), which gives it a virtual Ethernet link to the host. That is nice, but if you want to run the container as basically a standalone computer, it would be nicer for it to have its own IP address on the network. There are a few different ways of doing this, e.g. using a bridge, macvlan and maybe also ipvlan. The last one I haven't tried, so I'm not going to mention it again in this post. The bridge is kind of neat, but as I figured out, I had to set up NAT too. The easiest way was macvlan, so the rest of this description will be about that.

MACVLAN

So what is it? ”Macvlan and ipvlan are Linux network drivers that exposes underlay or host interfaces directly to VMs or Containers running in the host.” For a more elaborate description, read this link. The first thing to do is to create a macvlan device:
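Using the device and interface names from the text (eth0 as the underlying host interface):

    sudo ip link add mymacvlan1 link eth0 type macvlan mode bridge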

This will create a new device (called mymacvlan1) connected to eth0, using bridge mode. Now if we start the Ubuntu container with:
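Presumably something along these lines, with the root directory assumed as before:

    sudo systemd-nspawn -b -D ./ubuntu --network-macvlan=mymacvlan1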

 

This creates a new device in the container called mv-mymacvlan1, which gets its own hardware (MAC) address. You can now assign an IP address to this device, and it should work as a normal network device. The same thing can be done with a second container, and so on. The neat thing is that it is now possible to connect from one container to another. But, there is always a but! The host is unable to connect to the containers, because there is no route to them. This can be fixed by creating a macvlan device on the host too and routing through that interface (see this link). One should be aware, though, that if the eth0 device goes down, all the macvlan interfaces lose their connection. Anyhow, here is how to do it:
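A sketch of the host side; the second macvlan name, the addresses and the route are examples, not values from the original setup:

    # a second macvlan device on the host, used only for reaching the containers
    sudo ip link add mymacvlan2 link eth0 type macvlan mode bridge
    sudo ip addr add 192.168.1.250/32 dev mymacvlan2
    sudo ip link set mymacvlan2 up
    # route traffic for a container address via the new device
    sudo ip route add 192.168.1.200/32 dev mymacvlan2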

 

Now it's possible for the host to connect to the containers too.
