
LXD, ZFS and bridged networking on Ubuntu 16.04 LTS+

Network changes in Ubuntu 17.10+

This guide has been updated for netplan, introduced in 17.10. Please test the configuration and let me know if you have any issues with it (easiest via tweet, @jasonbayton).

LXD works perfectly fine with a directory-based storage backend, but both speed and reliability are greatly improved when ZFS is used instead. 16.04 LTS saw the first officially supported release of ZFS for Ubuntu, and having just set up a fresh LXD host on Elastichosts utilising both ZFS and bridged networking, I figured it’d be a good time to document it.

In this article I’ll walk through the installation of LXD, ZFS and bridge-utils on Ubuntu 16.04 and configure LXD to use either a physical ZFS partition or a loopback device, combined with a bridged networking setup that allows containers to pick up IP addresses via DHCP on the (v)LAN rather than a private subnet.

Before we begin

This walkthrough assumes you already have an Ubuntu 16.04 server host set up and ready to work with. If you do not, please download and install it now.

You’ll also need a spare disk, partition or adequate space on-disk to support a loopback file for your ZFS filesystem.

Finally, this guide relies on the command line, so some familiarity with the CLI would be advantageous, though the objective is to make this as much of a copy & paste article as possible.

Part 1: Installation

To get started, let’s install our packages. They can all be installed with one command as follows:

sudo apt-get install lxd zfsutils-linux bridge-utils

However for this I will output the commands and the result for each package individually:

sudo apt-get install lxd

Reading package lists... Done
Building dependency tree       
Reading state information... Done
lxd is already the newest version (2.0.0-0ubuntu4).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

sudo apt-get install zfsutils-linux

Reading package lists... Done
Building dependency tree       
Reading state information... Done
[...]
The following NEW packages will be installed:
  libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
  zfsutils-linux
0 upgraded, 7 newly installed, 0 to remove and 16 not upgraded.
Need to get 884 kB of archives.
[...]
Setting up zfs-doc (0.6.5.6-0ubuntu8) ...
Setting up libuutil1linux (0.6.5.6-0ubuntu8) ...
Setting up libnvpair1linux (0.6.5.6-0ubuntu8) ...
Setting up libzpool2linux (0.6.5.6-0ubuntu8) ...
Setting up libzfs2linux (0.6.5.6-0ubuntu8) ...
Setting up zfsutils-linux (0.6.5.6-0ubuntu8) ...
[...]
Setting up zfs-zed (0.6.5.6-0ubuntu8) ...
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for systemd (229-4ubuntu4) ...

sudo apt-get install bridge-utils

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  bridge-utils
0 upgraded, 1 newly installed, 0 to remove and 16 not upgraded.
Need to get 28.6 kB of archives.
[...]
Preparing to unpack .../bridge-utils_1.5-9ubuntu1_amd64.deb ...
Unpacking bridge-utils (1.5-9ubuntu1) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up bridge-utils (1.5-9ubuntu1) ...

You’ll notice I’ve installed LXD, ZFS and bridge-utils. LXD is installed by default on any 16.04 host, as shown by the output above; however, should there be any updates, this will bring them down before we begin.

ZFS and bridge-utils are not installed by default: ZFS is needed for our storage backend, and bridge-utils is required for our bridged interface to work.

Part 2: Configuration

With the relevant packages installed, we can now move on to configuration. We’ll start by configuring the bridge, as until this is complete we won’t be able to obtain DHCP addresses for containers within LXD.

Setting up the bridge

Legacy ifupdown

We’ll begin by opening /etc/network/interfaces in a text editor. I like vim:

sudo vim /etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

This is the default interfaces file. What we’ll do here is add a new bridge named br0. The simplest edit to make to this file is as follows (note the changed eth0 and br0 stanzas):

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br0
iface br0 inet dhcp
	bridge_ports eth0

iface eth0 inet manual

This will set the eth0 interface to manual and create a new bridge that piggybacks directly off it.
If you wish to create a static interface while you’re editing this file, the following may help you:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br0 
iface br0 inet static
        address 192.168.0.44
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers 8.8.8.8 8.8.4.4 #google-dns
        dns-search localdomain.local #optional-line
        # bridge options
        bridge_ports eth0
iface eth0 inet manual

Following any edits, it’s a good idea to restart the interfaces to force the changes to take effect. Be aware that if you’re connected via SSH this will disconnect your session, so you’ll need physical (or console) access to the machine/VM.

sudo ifdown eth0 && sudo ifup eth0 && sudo ifup br0

Modern netplan

We’ll begin by opening /etc/netplan/01-netcfg.yaml in a text editor. I like vim:

# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [eth0]
      dhcp4: false
      addresses: [192.168.1.99/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1,8.8.8.8]
      parameters:
        forward-delay: 0

The eth0 and br0 definitions above have been added or modified for a static IP bridge. Edit them to suit your environment and then run the following to apply the changes:

sudo netplan apply
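
If you’re making this change over SSH, recent netplan releases also offer a safer option that automatically rolls the configuration back unless you confirm it within a timeout:

sudo netplan try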

Running ifconfig on the CLI will now confirm the changes have been applied:

jason@ubuntu-lxdzfs:~$ ifconfig
br0       Link encap:Ethernet  HWaddr 00:0c:29:2f:cd:30  
          inet addr:192.168.0.44  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe2f:cd30/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2235187 errors:0 dropped:37359 overruns:0 frame:0
          TX packets:111487 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:2108302272 (2.1 GB)  TX bytes:8296995 (8.2 MB)

eth0      Link encap:Ethernet  HWaddr 00:0c:29:2f:cd:30  
          inet6 addr: fe80::20c:29ff:fe2f:cd30/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:161568870 errors:0 dropped:0 overruns:0 frame:0
          TX packets:132702 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:185204051222 (185.2 GB)  TX bytes:23974566 (23.9 MB)

[...]

Configuring LXD & ZFS

With the bridge up and running we can now begin to configure LXD. Before we start setting up containers, LXD requests we run sudo lxd init to configure the package. As part of this we’ll be selecting our newly created bridge for network connectivity and configuring ZFS, as LXD will take care of both during setup.

For this guide I’ll be using a dedicated hard drive for the ZFS storage backend, though the same procedure can be used for a dedicated partition if you don’t have a spare drive handy. For those wishing to use a loopback file for testing, the procedure is slightly different and will be addressed below.

Find the disk/partition to be used

First we’ll run sudo fdisk -l to list the available disks & partitions on the server, here’s a relevant snippet of the output I get:

Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
[...]

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sda1  *        2048 36132863 36130816 17.2G 83 Linux
/dev/sda2       36134910 41940991  5806082  2.8G  5 Extended
/dev/sda5       36134912 41940991  5806080  2.8G 82 Linux swap / Solaris


Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
[...]

Device     Boot    Start      End  Sectors  Size Id Type
/dev/sdb1           2048 41940991 41938944  20G  83 Linux

Make a note of the partition or drive to be used. In this example we’ll use partition /dev/sdb1 on disk /dev/sdb.
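
If the fdisk output is hard to read, lsblk offers a more compact view of disks, partitions and any existing mountpoints:

lsblk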

Be aware

If your disk/partition is currently formatted and mounted on the system, it will need to be unmounted with sudo umount /path/to/mountpoint before continuing, or LXD will error during configuration.

Additionally if there’s an fstab entry this will need to be removed before continuing, otherwise you’ll see mount errors when you next reboot.
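
As an example, assuming the partition were mounted at /mnt/storage (a placeholder; substitute your own mountpoint), the clean-up would look something like this:

sudo umount /mnt/storage
sudo vim /etc/fstab # remove or comment out the /mnt/storage entry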

Configure LXD

Changes to bridge configuration

As of LXD 2.5 there have been a few changes. If you’re installing a version of LXD below 2.5, continue as written below. For 2.5 and above, in order to use the pre-configured bridge, answer No to Do you want to configure the LXD bridge (yes/no)?, then see Configure LXD bridge (2.5+) below for details of adding the bridge manually afterwards.

Check the version of LXD by running sudo lxc info.
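
If you only want the version lines from that output, it can be filtered, for example:

sudo lxc info | grep -i version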

Start the configuration of LXD by running sudo lxd init

jason@ubuntu-lxd-tut:~$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sdb1
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.

Let’s break the above options down:

Name of the storage backend to use (dir or zfs): zfs

Here we’re defining ZFS as our storage backend of choice. The other option, dir, is a flat-file backend that places all containers on the host filesystem under /var/lib/lxd/containers/ (though the ZFS storage is transparently mounted under the same path and so accessed just as easily). It doesn’t benefit from features such as compression and copy-on-write, however, so the performance of containers using the dir backend simply won’t be as good.

Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd

Here we’re creating a brand new ZFS pool for LXD and giving it the name of “lxd”. We could also choose to use an existing pool if one were to exist, though as we left ZFS unconfigured it does not apply here.
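
For reference, if you did want to prepare a pool yourself beforehand and point LXD at it as an existing pool, it would be created along these lines (using the same device as later in this guide):

sudo zpool create lxd /dev/sdb1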

Would you like to use an existing block device (yes/no)? yes
Path to the existing block device: /dev/sdb1

Here we’re opting to use a physical partition rather than a loopback device, then providing the physical location of said partition.

Would you like LXD to be available over the network (yes/no)? no

It’s possible to connect to LXD from other LXD servers or via the API from a browser (see https://linuxcontainers.org/lxd/try-it/ for an example of this).

As this is a simple installation we won’t be utilising this functionality, so it is left unconfigured. Should we wish to enable it at a later date, we can run:

lxc config set core.https_address [::]
lxc config set core.trust_password some-secret-string

Where some-secret-string is a secure password that’ll be required by other LXD servers wishing to connect in order to admin the LXD host or retrieve non-public published images.
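
For reference, once enabled, another machine with the LXD client installed could then add this host as a remote and manage it along these lines (the remote name and IP are examples):

lxc remote add lxdhost 192.168.0.44
lxc list lxdhost: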

Do you want to configure the LXD bridge (yes/no)? yes

Here we tell LXD we want to configure the bridge settings, which opens a new dialog workflow where we can point it at our pre-configured bridge:

[Screenshot: LXD bridge dialog asking whether to create a new bridge]

We don’t want LXD to create a new bridge for us, so we’ll select no here.

[Screenshot: LXD bridge dialog asking whether to use an existing bridge]

LXD now knows we may have our own bridge already set up, so we’ll select yes in order to declare it.

[Screenshot: LXD bridge dialog prompting for the bridge interface name]

Finally we’ll input the bridge name and select OK. LXD will now use this bridge.

And with that, LXD will finish configuration and ready itself for use.

Configure LXD bridge (2.5+)

In version 2.5, the dialog-based bridge workflow shown above has been retired in favour of the new lxc network command.

With lxd init complete above, add the br0 interface to the default profile with:

lxc network attach-profile br0 default eth0

If by accident the lxdbr0 interface was configured, it must be first detached from the default profile with:

lxc network detach-profile lxdbr0 default eth0

It’ll be obvious if this needs to be done, as running lxc network attach-profile br0 default eth0 will fail with error: device already exists.

With that complete, LXD will now successfully use the pre-configured bridge.

Configuring LXD with a ZFS loopback device

Run sudo lxd init as above, but use the following options instead.

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-loop
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 20

The size in GB of the ZFS partition is important, we don’t want to run out of space any time soon. Although ZFS partitions may be resized, it’s better to be a little generous now and not have to worry about reconfiguring it later.
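
For reference, the LXD 2.0 packages typically create the loopback file at /var/lib/lxd/zfs.img, so the space it occupies on disk can be checked with something like:

sudo ls -lh /var/lib/lxd/zfs.img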

Increasing file and inode limits

Since it’s entirely possible we may in future wish to run multiple LXD containers, it’s a good idea to increase the open file and inode limits now; this will prevent the dreaded “too many open files” errors which commonly occur with container solutions.

For the inode limits, open the sysctl.conf file as follows:

sudo vim /etc/sysctl.conf

Now add the following lines, as recommended by the LXD project:

fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576

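If you’d like the inotify limits to take effect immediately rather than waiting for the reboot below, sysctl can reload the file:

sudo sysctl -p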

After saving the file we’ll need to reboot, but not yet as we’ll also configure the open file limits.

Open the limits.conf file as follows:

sudo vim /etc/security/limits.conf

Now add the following lines. 100K should be enough:

* soft nofile 100000
* hard nofile 100000


Once the server is rebooted (this is important!) the new limits will apply and we’ll have future-proofed the server for now.

sudo reboot
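
Once the server is back up, the new open file limit for your session can be confirmed with ulimit; it should report the 100000 configured above:

ulimit -n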

Part 3: Test

With our bridge set up, our ZFS storage backend created and LXD fully configured, it’s time to test everything is working as it should be.

We’ll first get a quick overview of our ZFS storage pool using sudo zpool list lxd

jason@ubuntu-lxd-tut:~$ sudo zpool list lxd
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
lxd   19.9G   646M  19.2G         -     2%     3%  1.00x  ONLINE  -
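
For a more granular view of the datasets LXD manages within the pool (containers, images and snapshots each end up in their own), zfs list can be run at any time:

sudo zfs list -r lxd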

With ZFS looking fine, we’ll run a simple lxc info to generate our client certificate and verify the configuration we’ve chosen for LXD:

jason@ubuntu-lxd-tut:~$ lxc info
Generating a client certificate. This may take a minute...


apicompat: 0
auth: trusted
environment:
  addresses: []
  architectures:
  - x86_64
  - i686
  certificate: |
    -----BEGIN CERTIFICATE-----
    [...]
    -----END CERTIFICATE-----
  driver: lxc
  driverversion: 2.0.0
  kernel: Linux
  kernelarchitecture: x86_64
  kernelversion: 4.4.0-21-generic
  server: lxd
  serverpid: 5135
  serverversion: 2.0.0
  storage: zfs
  storageversion: "5"
config:
  storage.zfs_pool_name: lxd
public: false

It would appear the storage backend is correctly using our ZFS pool: “lxd”. If we now take a look at the default profile using:

lxc profile show default

We should see LXD using br0 as the default container eth0 interface:

jason@ubuntu-lxd-tut:~$ lxc profile show default
name: default
config: {}
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: br0
    type: nic

Success! The only thing left to do now is launch a container.

We can use the official Ubuntu image repo and spin up a Xenial container with the alias xen1 using the command:

lxc launch ubuntu:xenial xen1

Which should return an output like this:

jason@ubuntu-lxd-tut:~$ lxc launch ubuntu:xenial xen1
Creating xen1
Retrieving image: 100%
Starting xen1

Now, we can use lxc list to get an overview of all containers including their IP addresses:

jason@ubuntu-lxd-tut:~$ lxc list
+------+---------+----------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+------+------------+-----------+
| xen1 | RUNNING | 192.168.0.197 (eth0) |      | PERSISTENT | 0         |
+------+---------+----------------------+------+------------+-----------+

We can see the xen1 container has picked up an IP from our DHCP server on the LAN, which is exactly what we want.

Finally, we can use lxc exec xen1 bash to gain CLI access to the container we’ve just launched:

jason@ubuntu-lxd-tut:~$ lxc exec xen1 bash
root@xen1:~# 
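
From inside the container it’s worth quickly confirming connectivity beyond the host; a ping to the LAN gateway (192.168.0.1 in this guide’s example network, so substitute your own) is enough:

ping -c 3 192.168.0.1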

Conclusion

While a little long-winded, setting up LXD with a ZFS storage backend and utilising a bridged interface for connecting containers directly to the LAN isn’t overly difficult, and it’s only gotten easier as LXD has matured to version 2.0.

Are you brand new to LXD? I thoroughly recommend you take a look at LXD developer Stéphane Graber’s incredible LXD blog series to get up to speed.

Comments

  1. Thank you for the amazing article! I’m considering migrating my existing server using this guide. I’m currently running ESXi with my 6-disk RAID-Z2 passed through to a FreeNAS VM as described in the guide here but on ESXi 6.5 and FreeNAS 11.x.

    My goal is to install Ubuntu 16.04 to a 120 GB SSD and create a partition on the SSD that will be dedicated to ZFS. My containers will live there. I need to figure out how to import my existing RAID-Z2 zpool, as that’s where my media lives.

    The LXD zpool will host a Plex container, and that Plex container will mount its media folders from the existing 6-disk zpool.

    Things I have yet to figure out as I lab this out in advance:

    1. Whether LXD is even the right application for my Plex container or if Docker is better suited.
    2. How I’m going to mount the media folders in the Plex container. Currently, I have Plex installed on an Ubuntu VM in ESXi that uses nfs mounts in fstab to mount the FreeNAS shares.

    Once I migrate using your tutorial, I shouldn’t need NFS mounts since they would be “local” storage if I’m understanding things correctly. This will be nice, as I can get away from having an internal “storage-only” network with 10Gbe VMXNET3 adapters and jumbo frames in addition to the standard, internet-routable virtual NICs at MTU 1500.

    Looking forward to putting your post into action!

  2. ZFS is pretty good with exports/imports and send/receive as long as the remote system can accept a pool of the size you’re trying to push 🙂 Check out an example here.

    Not dramatically dissimilar from my setup!

    • LXD containers reside on a dedicated SSD
    • Media accessed from an 8-disk ZFS mirror (though I’m all about Emby 🙂)

    I need to add in a 2nd SSD at some point for LXD container redundancy. If that SSD dies I’ll need to do a lot of restoring.

    1 is really entirely your call; LXD containers AFAIK are light enough that I run pretty much one for each service I use.

    Docker is more than adequate if that’s what you’re happier with though… and requires no update maintenance.

    For 2, I use LXD devices to push folders into containers… though you could well instead utilise a FUSE filesystem like SSHFS to mount directly within the containers. If pushing the folders, you’ll need to ensure the UID/GID mapping is correct, else the container won’t be able to write to them. See this.
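
    As a rough example of the device approach (the container name and paths here are placeholders):

    lxc config device add plex media disk source=/tank/media path=/mnt/media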

  3. Thanks for the tips! Considering you’re using a dedicated SSD for LXD while I’m using a partition on my Ubuntu SSD for LXD, should I just go ahead and make the entire SSD use ZFS? Is that possible? I haven’t even begun this project, so there’s still so much to plan. At the very least, I’d like to explore the option of setting up my home partition in ZFS (or moving it to ZFS post-install if I can’t do it while installing Ubuntu).

    My goal in all of this is to have an environment that can be re-created very easily if my single point of failure (the SSD) were to fail. Ideally, I’d like to be in a position where all of the LXC containers in my Ubuntu SSD’s LXD zpool are backed up to my media’s zpool (6x3TB RAID-Z2).

    Then if my SSD were to fail, I could simply get a replacement SSD, install Ubuntu, install LXD, and run a few commands to import my media’s zpool, which would then allow me to import a backup of my LXD zpool onto the new SSD’s LXD partition. If I understand this concept correctly, I’d be back in business in under an hour.

    Can you expand upon this?

    When I was using Proxmox a year ago, my LXC containers would have mountpoints in their configuration files. They’d look like this:
    mp0: /tank/media/movies,mp=/mnt/movies
    mp1: /tank/media/tv,mp=/mnt/tv

    Would I not use an identical (similar?) system for mounting media folders into the Plex container? Or do you have some kind of LXC container that mounts /tank/media directories into your Plex container so the Plex container is actually mount-agnostic?

  4. A little bit of Googling will highlight how difficult it is to get ZFS running as the root FS for Ubuntu. I’d stick with EXT4, or BTRFS if you’re feeling more adventurous, and retain the partition-for-ZFS method instead… or if you’ve got the space, pop in another SSD 😉

    LXD backups aren’t super easy from experience to date, primarily because LXD creates a dataset for every container, image, and snapshot.

    What I tend to do is lxc copy my snapshots to a 2nd host as a container, like lxc copy container1/snap2 remote:containerbackup2, which likely isn’t a very efficient way to do it either, but I have the machines running to accommodate it. Restoring LXD isn’t super easy either; it’s not just a case of reimporting the LXD pool, as you’ve got configuration files and databases to consider too.

    LXD utilises devices rather than raw config editing. I’m sure you could work around it, but it’s easier not to, TBH. Devices are reusable and attach to containers really easily.

Previous comments & pings (read only)

27 responses to “LXD, ZFS and bridged networking on Ubuntu 16.04 LTS+”

  1. Thanks for this post. I’ve been setting up test containers for the last couple of years but it’s always nice to see a concise howto to confirm what I know and pick up little hints. I had been missing the “lxc network attach-profile br0 default” step.

  2. Scott - American says:

    Thanks for the post. I have used it to set up my LXD host. If I could ask a question: I have /etc/default/lxd-bridge configured for my IPv4 network. I have added a lxd-dnsmasq.conf file to assign a static IP to a container.
    But when I start the container I lose connectivity to the host (ping/SSH) over the network. If I shut down the container and restart networking, it comes back online. What could I look at to see what is causing this?

    • Jason Bayton says:

      Start with your logs – syslog, dmesg, etc. – to see if anything interesting is being logged there.
      For some reason it makes me think there might be an issue with the bridge, but honestly there’s too little info to start guessing.

      What versions are you running? >> lxc info

  3. Luis Rodriguez says:

    Thank you, beautiful post..

    Off topic… I’m new to ZFS, and was reading some information regarding GlusterFS on ZFS. Is there any documentation or howto somewhere on creating a distributed filesystem where I can have the LXD containers on shared storage and run them on different nodes?

    Basically I want VMs (now containers) that can be moved easily from one host to another. I read about zfs send/receive but that will move the container, which is what I don’t want; maybe live migration, or in the worst case, stop and start on another host?

    • Jason Bayton says:

      That’s an interesting use case that honestly I’ve not considered at all up to this point. I’d be wary about such an implementation as that’s not how LXD is designed to run: adding a container to storage won’t render it usable via lxc commands without first importing it, and I’m not sure how things like snapshots would be handled (again, maybe synced but not available outside the origin server to be interacted with).

      Live/cold migration is probably the way, since you’re both copying and importing in one go. I’m no expert though, someone may know more about it than me 🙂

      • Luis Rodriguez says:

        I’ll try the send/receive meanwhile.

        But I ran into another issue. I created the pool/bridge/etc. and I was using the pool for some other things: backup files, home folders, etc.

        I reinstalled the host, reinstalled LXD and, surprise: with lxd init I can’t use an existing pool, I have to create a new one.

        I checked on the server guide https://help.ubuntu.com/lts/serverguide/lxd.html and ran

        lxc config set storage.zfs_pool_name lxd
        error: Setting the key “storage.zfs_pool_name” is deprecated in favor of storage pool configuration.

        lxc storage create default zfs source=lxd
        error: Provided ZFS pool (or dataset) isn’t empty

        Does this mean I can’t reuse a previously created pool?

        • Jason Bayton says:

          lxd init expects to set up a new environment, so it errors like that on purpose.

          If you can, mount the volumes, backup the containers and init LXD with a new ZFS pool or clear down the existing, then import the containers again.

          There’s a doc somewhere I recall that outlined the process but I can’t find it at the moment. If all else fails, the Devs hang out on freenode under #lxcontainers. They’ve guided me through a similar issue when an update broke everything.

          • Luis Rodriguez says:

            I worked around it by using a dataset (e.g. default/lxd) instead of a new empty pool every time I run lxd init. But the dataset still has to be emptied every time. It seems to me that this storage management still needs some work: being able to use multiple storage pools (I read somewhere this is a work in progress), to reuse an old existing pool, plus a shared pool (a common scenario for HA and real live migration).

            I tried the container copy/move and it works fine between the same version (e.g. 2.12), however I had an issue trying to copy a 2.12 (Ubuntu 17.04) container into a 2.0.9 (Ubuntu 16.04) host, and it takes a looooong time since it is copying all the data (in my case a 1.7G container). Shared storage is the solution with Xen and Proxmox for live migration, where only the VM metadata is moved from one server to another.

            Anyway.. I’m loving these containers.

          • Jason Bayton says:

            Storage management is official as per a recent release (if you’re on their stable PPA) – I haven’t made use of it on my existing systems but will do when I next rebuild.

            I hear you loud and clear, it would make DR far easier if LXD was a little more flexible, but it’s still young and they’re definitely open to feedback if you want to submit it:

            https://discuss.linuxcontainers.org/

  4. Tobias Mertens says:

    Thank you, very nice post!

    But I have a problem: when I set up everything as shown up there, my container didn’t get an IPv4 address…
    I’m working with LXD version 2.0.9. I have a static IP on my br0 interface but my eth0 did not get an IPv4 address (only IPv6), though as I see in your post this is normal… I set up a new Ubuntu 16.04 server exactly as shown in this post, two times, and both have the same problem.

    Do you know what went wrong?

    • Jason Bayton says:

      Hi Tobias,

      I can’t really tell from here I’m afraid. Can you output your br0 configuration? Can you confirm bridge-utils is installed correctly?

    • Ryan Hass says:

      I had the same issue running via VirtualBox configured with a bridge network interface.

      TL;DR:
      Enable Promiscuous mode on the VirtualBox network bridge interface from the virtualbox settings and power-cycle the VM.

      I ran a TCP dump in the VM and on the host box and found that the DHCP requests are being sent all the way out to my firewall, which is receiving the DHCPDISCOVER requests and even responding with a valid DHCPOFFER. While the reply packet gets to the VirtualBox host, the guest never receives the DHCPOFFER packet since VirtualBox was not allowing promiscuous mode listening on the interface.

      I hope this helps!

  5. Paul Smyth says:

    I’m trying to set up lxd on an existing server that already has an existing zfs pool. The problem I have is that lxd init is not even offering me the option to use zfs as a storage backend and no amount of googling seems to get the right search results.

  6. Arador says:

    Just an FYI if you try this static IP configuration on the next Ubuntu LTS release (Bionic Beaver 18.04) you’ll likely run into a networking nightmare. The way networks are configured has completely changed in 18.04, see man netplan.

    • Jason Bayton says:

      Thanks for the heads-up. The guide is due an update with new LXD features, so I’ll sort out the networking at the same time (or, depending on the changes required, document and link to it separately).

  7. Sergey Durnov says:

    OK, this is all cool, but why can’t I find any info on how to make it work together with netplan, which is the new standard for network configuration?