Securely Deploy and Manage LXC Containers on Ubuntu 14.04

Last Updated: Thu, Nov 30, 2017
Archived content

This article is outdated and may not work correctly for current operating systems or software.

LXC containers (Linux containers) are an operating system feature in Linux that can be used to run multiple isolated Linux systems on a single host.

These instructions will walk you through the basic steps of configuring a server to host isolated Linux containers. We will configure the following features:

  • LXC containers with Ubuntu 14.04.

  • Linux network settings and port forwarding for containers.

  • SSH forwarding for container administration, so connecting to a container is as simple as a single ssh command.

  • Nginx proxy configuration for accessing websites inside containers (by hostname).

  • Additional security improvements for proper server management.

This guide assumes that:

  • You have an account at Vultr.

  • You know how to configure a virtual machine with a custom ISO.

  • You know how to use SSH keys and you have already generated public and private keys.

At the end of the tutorial we will have two virtual containers that have access to the internet but cannot ping each other. We will also configure port forwarding from the host to the containers. We will deploy a secure configuration and management panel with the help of tools from the Proxmox package.


We will be using Proxmox only for management of LXC containers. Generally, it also supports KVM, but nested virtualization is forbidden on Vultr. Before starting, a Proxmox ISO should be downloaded from the official website. We will use the Proxmox VE 5.0 ISO installer. Install the OS from the image with default settings and reboot the virtual machine.

Alternatively, you can install Proxmox manually from packages, but that is not necessary in most cases; instructions are available on the Proxmox wiki.

OS Setup

Connect to your host via SSH, update the Proxmox template list, and download a suitable template for the containers.

apt-get update

pveam update

pveam available

pveam download local ubuntu-14.04-standard_14.04-1_amd64.tar.gz

Now, we need to create a Linux container with a network interface connected to a Linux bridge. Open /etc/network/interfaces and append the following lines:

auto vmbr1

iface vmbr1 inet static



    bridge_ports none

    bridge_stp off

    bridge_fd 0
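If you prefer not to reboot right away, the new bridge can be brought up by hand and verified; a sketch, assuming the standard ifupdown and bridge-utils tools that ship with Proxmox VE:

```shell
# Bring up the bridge defined in /etc/network/interfaces
ifup vmbr1

# Confirm the bridge interface exists and is up
ip addr show vmbr1
brctl show vmbr1
```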

After system reboot, you can create a new container from the Ubuntu 14.04 template.

pct create 200 /var/lib/vz/template/cache/ubuntu-14.04-standard_14.04-1_amd64.tar.gz -storage local-lvm -net0 name=eth0,bridge=vmbr1,ip=,gw=

You can verify your container using pct list, start container #200 with pct start 200 and enter its shell with pct enter 200.

You can also verify network settings and addresses with ip addr.


To provide an internet connection inside your container, we need to enable NAT. The following commands allow traffic to be forwarded from the container to the internet.

The vmbr0 bridge is connected to the external interface and the vmbr1 bridge is connected to the containers.

sysctl -w net.ipv4.ip_forward=1

iptables --table nat --append POSTROUTING --out-interface vmbr0 -j MASQUERADE

iptables --append FORWARD --in-interface vmbr1 -j ACCEPT
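To check that NAT works from inside the container without opening an interactive shell, you can run a one-off command with pct exec; a sketch, assuming container 200 from this guide is running:

```shell
# Confirm forwarding is enabled on the host
sysctl net.ipv4.ip_forward

# Test outbound connectivity from inside container 200
pct exec 200 -- ping -c 2 8.8.8.8
```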

Enter the container with pct enter 200 and configure the web server inside.

apt-get update

apt-get install nginx

service nginx start
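Back on the host, you can verify that the web server answers inside the container; a sketch using pct exec so no container IP needs to be typed, assuming curl is installed in the container (apt-get install curl):

```shell
# Ask nginx inside container 200 for its default page headers
pct exec 200 -- curl -sI http://localhost
```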


Now, we need to configure Nginx on your server to proxy websites into containers.

apt-get update

apt-get install nginx

Create a new configuration file /etc/nginx/sites-available/box200 with the following content:

server {
    listen 80;
    server_name <your_hostname>;

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;

    location / {
        proxy_pass http://<container_IP>;
    }
}

Replace <your_hostname> with the hostname you want to serve and <container_IP> with the address of container 200. Nginx will now proxy each HTTP request for that hostname to the container. Activate this configuration.

ln -s /etc/nginx/sites-available/box200 /etc/nginx/sites-enabled/

service nginx restart
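Before relying on the proxy, it is worth validating the configuration and sending a test request with an explicit Host header; in this sketch, <your_hostname> is a placeholder for the hostname used in the server block:

```shell
# Validate the nginx configuration syntax
nginx -t

# Send a request through the proxy, pretending to be <your_hostname>
curl -sI -H "Host: <your_hostname>" http://127.0.0.1
```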

SSH Access

If you want to provide easy access to sandboxes, you need to forward SSH sessions into the containers. To do so, create a new user on the host server. Do not forget to input a password; the other parameters are not necessary.

adduser box200

su - box200

ssh-keygen

cat .ssh/


Copy this SSH key and enter the container to append the key.

pct enter 200

mkdir .ssh

nano .ssh/authorized_keys


On your host server, add the following line to the /home/box200/.ssh/authorized_keys file.

command="ssh root@",no-X11-forwarding,no-agent-forwarding,no-port-forwarding <YOUR SSH KEY>

Do not forget to change <YOUR SSH KEY> to the public key from your home machine.

Alternatively, you can run the following from the command line.

echo 'command="ssh root@",no-X11-forwarding,no-agent-forwarding,no-port-forwarding <YOUR SSH KEY>' >> .ssh/authorized_keys

Then, you can connect to your sandbox with ssh.

ssh box200@<your_server_IP>

Additional Settings

It is time to implement several security improvements. First, we want to change the default SSH port. Then we want to protect our Proxmox management page with basic HTTP authentication.

nano /etc/ssh/sshd_config

Uncomment and change the line

#Port 22 

to

Port 24000 

Restart ssh.

service ssh restart

Reconnect to ssh with the new port.

ssh root@<your_IP> -p 24000
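To avoid typing the port on every connection, you can add a host alias to your local ~/.ssh/config; a sketch, where myserver is an arbitrary alias and <your_IP> is your server's address:

```shell
# Append an alias to the local SSH client configuration
cat >> ~/.ssh/config <<'EOF'
Host myserver
    HostName <your_IP>
    Port 24000
    User root
EOF
```

After that, "ssh myserver" connects with the custom port automatically.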

Set a Proxmox password.

Create the file /etc/default/pveproxy with the following content, which restricts the management interface to localhost:

ALLOW_FROM="127.0.0.1"

DENY_FROM="all"

POLICY="allow"
Restart pveproxy for the changes to take effect.

/etc/init.d/pveproxy restart
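You can confirm that pveproxy came back up and is listening on its default port, 8006:

```shell
# Check that the Proxmox web proxy is listening (default port 8006)
ss -tlnp | grep 8006
```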

Configure nginx (if you haven’t done it before).

apt-get install nginx

service nginx restart

Create a default configuration in /etc/nginx/sites-available/default.

server {
        listen          80;
        server_name     <your_hostname>;
        rewrite         ^ https://$host$request_uri? permanent;
}

server {
        listen                   443 ssl;
        server_name              <your_hostname>;

        #auth_basic              "Restricted";
        #auth_basic_user_file    htpasswd;
        #location / { proxy_pass https://127.0.0.1:8006; }
}


Obtain a valid SSL certificate and update your Nginx configuration. For example, this can be done with the help of certbot from Let's Encrypt. For more information, see the Certbot documentation.

wget https://dl.eff.org/certbot-auto

chmod +x certbot-auto

./certbot-auto --nginx

Now, your Nginx config should look like the following (or you can adjust it manually afterwards). Do not forget to uncomment the ssl, auth, and location lines.

server {
        listen          80;
        server_name     <your_hostname>;
        rewrite         ^ https://$host$request_uri? permanent;
}

server {
        listen                  443 ssl;
        server_name             <your_hostname>;
        ssl on;
        auth_basic              "Restricted";
        auth_basic_user_file    htpasswd;
        location / { proxy_pass https://127.0.0.1:8006; }
        ssl_certificate /etc/letsencrypt/live/; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/; # managed by Certbot
}
Create the /etc/nginx/htpasswd file containing a username and password hash. You can use an htpasswd generator to produce the hash.

nano /etc/nginx/htpasswd
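Instead of an online generator, the hash can be produced locally with openssl, which supports the Apache apr1 format that Nginx accepts; in this sketch, admin and your_password are example values:

```shell
# Generate an apr1 (htpasswd-compatible) hash and append a user entry
printf 'admin:%s\n' "$(openssl passwd -apr1 'your_password')" >> /etc/nginx/htpasswd
```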

Restart Nginx.

service nginx restart

You can now view the management console at https://<your_hostname> after passing basic authentication.

Port Forwarding

The containers are now reachable via HTTP and SSH. Next, we can configure port forwarding from the external server to the containers.

For example, to map an external port on the host to a port inside a container, input the following.

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to

You can view the current rules.

iptables -t nat -v -L PREROUTING -n --line-number

You can also delete a rule by number with the following.

iptables -t nat -D PREROUTING <#>
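A forwarded port can be tested from any external machine; this sketch requests the container's web page through the host, where <your_server_IP> is a placeholder for your host's address:

```shell
# From an external machine: request the service behind the forwarded port
curl -sI http://<your_server_IP>:8080
```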

Separated Containers

At this point, one container can access the other, because both are connected to the same bridge. To verify this, create a second container.

pct create 250 /var/lib/vz/template/cache/ubuntu-14.04-standard_14.04-1_amd64.tar.gz -storage local-lvm -net0 name=eth0,bridge=vmbr1,ip=,gw=

pct start 250

pct enter 250


If you want to restrict access from container 250 to 200, you need to connect each container to a personal bridge and disable forwarding between bridges.

  1. Delete existing containers.

    pct stop 200
    pct stop 250
    pct destroy 200
    pct destroy 250
  2. Change the content of /etc/network/interfaces.

    auto vmbr1
    iface vmbr1 inet static
        bridge_ports none
        bridge_stp off
        bridge_fd 0
    auto vmbr2
    iface vmbr2 inet static
        bridge_ports none
        bridge_stp off
        bridge_fd 0
  3. Reboot the system.

  4. Enable forwarding

    `sysctl -w net.ipv4.ip_forward=1`

    To make this change permanent, edit the /etc/sysctl.conf file and find the following line.

    #net.ipv4.ip_forward=1

    Uncomment it.

    net.ipv4.ip_forward=1

    You can also run sysctl -p to make the change take effect immediately.

  5. Create containers.

    pct create 200 /var/lib/vz/template/cache/ubuntu-14.04-standard_14.04-1_amd64.tar.gz -storage local-lvm -net0 name=eth0,bridge=vmbr1,ip=,gw=
    pct create 250 /var/lib/vz/template/cache/ubuntu-14.04-standard_14.04-1_amd64.tar.gz -storage local-lvm -net0 name=eth0,bridge=vmbr2,ip=,gw=
  6. Start the containers with pct start 200 and pct start 250.

  7. Flush the iptables rules.

    iptables -F
  8. Enable NAT.

    iptables --table nat --append POSTROUTING --out-interface vmbr0 -j MASQUERADE

    vmbr0 is the bridge that includes the external interface.

  9. Allow forwarding from the external interface.

    iptables --append FORWARD --in-interface vmbr0 -j ACCEPT
  10. Allow forwarding from the containers to the internet.

    iptables -A FORWARD -i vmbr1 -o vmbr0 -s -j ACCEPT
    iptables -A FORWARD -i vmbr2 -o vmbr0 -s -j ACCEPT
  11. Drop the other forwarding.

    iptables -A FORWARD -i vmbr1 -j DROP
    iptables -A FORWARD -i vmbr2 -j DROP

Now, check that each container can ping external addresses but cannot ping the other container.

The order of iptables commands is important. The best way to manage your rules is to use iptables-persistent. This package saves your iptables rules to the files /etc/iptables/rules.v4 and /etc/iptables/rules.v6 and loads them automatically after a system reboot. Just install it with the following.

apt-get install iptables-persistent

Select YES when prompted.
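Whenever you change rules later, re-save them so they survive the next reboot; a sketch using the tools the package provides:

```shell
# Save the current IPv4 rules to the file loaded at boot
iptables-save > /etc/iptables/rules.v4

# On current versions of the package, this wrapper saves both v4 and v6 rules
netfilter-persistent save
```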
